neurom.io.hdf5
Module for morphology HDF5 data loading.
Data is unpacked into a 2-dimensional raw data block:

    [X, Y, Z, R, TYPE, ID, PARENT_ID]

HDF5 v1 input row formats:

    points: [X, Y, Z, D] (ID is the row position)
    groups: [FIRST_POINT_ID, TYPE, PARENT_GROUP_ID]

There is one points row per measured point.
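The unpacking of the v1 `points`/`groups` datasets into the flat block can be sketched as follows. This is an illustrative reimplementation, not NeuroM's actual code; `unpack_v1` is a hypothetical name, and it assumes sections are stored contiguously in point order and that the radius R is the diameter D divided by two.

```python
def unpack_v1(points, groups):
    """Sketch: flatten v1 points/groups into [X, Y, Z, R, TYPE, ID, PARENT_ID] rows.

    points: list of [X, Y, Z, D] rows (the row position is the point ID)
    groups: list of [FIRST_POINT_ID, TYPE, PARENT_GROUP_ID] rows
    """
    n_points = len(points)
    # A group's points end where the next group's points begin
    # (assumes groups are stored contiguously and in order).
    ends = [groups[i + 1][0] if i + 1 < len(groups) else n_points
            for i in range(len(groups))]
    block = []
    for gid, (first, gtype, parent_gid) in enumerate(groups):
        for pid in range(first, ends[gid]):
            if pid == first:
                # First point of a section connects to the last point
                # of its parent section; roots get the -1 sentinel.
                parent = ends[parent_gid] - 1 if parent_gid != -1 else -1
            else:
                parent = pid - 1
            x, y, z, d = points[pid]
            block.append([x, y, z, d / 2.0, gtype, pid, parent])
    return block
```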
Functions

    get_version — Determine whether an HDF5 file is v1 or v2.
    read — Read a file and return a data_wrapper'd data.

Classes

    BlockNeuronBuilder — Helper to create DataWrapper for 'block' sections.
    DataWrapper — Class holding a raw data block and section information.
class neurom.io.hdf5.BlockNeuronBuilder

    Bases: object

    Helper to create DataWrapper for 'block' sections.

    This helps create a new DataWrapper when one already has the 'blocks' (i.e. the contiguous points forming all the segments) of each section, and they only need to be connected together based on their parents.
    Example

    >>> builder = BlockNeuronBuilder()
    >>> builder.add_section(segment_id, parent_id, segment_type, points)
    ...
    >>> morph = builder.get_datawrapper()
    Note

    This will re-number the IDs if they are not 'dense' (i.e. have gaps).
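The re-numbering the note describes can be illustrated with a small sketch. `renumber` is a hypothetical helper, not the library's implementation: section IDs with gaps are mapped onto a dense 0..N-1 range, and parent references are remapped to match.

```python
def renumber(sections):
    """Sketch: map gapped section IDs onto a dense 0..N-1 range.

    sections: dict of section_id -> parent_id (IDs may have gaps);
    -1 is kept as the 'no parent' sentinel.
    """
    id_map = {old: new for new, old in enumerate(sorted(sections))}
    id_map[-1] = -1  # preserve the root sentinel
    return {id_map[sid]: id_map[pid] for sid, pid in sections.items()}
```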
    class BlockSection(parent_id, section_type, points)

        Bases: tuple
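Since BlockSection derives from tuple and exposes named fields, it behaves like a namedtuple. A minimal stand-in, assuming only the three fields shown in the signature:

```python
from collections import namedtuple

# Stand-in for BlockNeuronBuilder's BlockSection: a plain namedtuple
# carrying one section's parent ID, section type, and points.
BlockSection = namedtuple('BlockSection', ['parent_id', 'section_type', 'points'])
```

Fields are then accessible both by name and by position, e.g. `sec.parent_id` or `sec[0]`.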
class neurom.io.hdf5.DataWrapper(data_block, fmt, sections=None)

    Bases: object

    Class holding a raw data block and section information.
neurom.io.hdf5.get_version(h5file)

    Determine whether an HDF5 file is v1 or v2.

    Returns: 'H5V1', 'H5V2' or None
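The version check can be sketched as a decision over the file's top-level names. This is an assumption about the on-disk layout, not NeuroM's code: it assumes v1 files expose top-level `points` and `structure` datasets while v2 files contain a `neuron1` group, and `guess_version` is a hypothetical helper.

```python
def guess_version(top_level_names):
    """Sketch: classify an HDF5 morphology file by its top-level names."""
    names = set(top_level_names)
    if {'points', 'structure'} <= names:
        return 'H5V1'   # v1: flat points/structure datasets
    if 'neuron1' in names:
        return 'H5V2'   # v2: data nested under a 'neuron1' group
    return None         # neither layout recognised
```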
neurom.io.hdf5.read(filename, remove_duplicates=False, data_wrapper=<class 'neurom.io.datawrapper.DataWrapper'>)

    Read a file and return a data_wrapper'd data.

    Tries to guess the format and the H5 version. Unpacks the first block it finds out of ('repaired', 'unraveled', 'raw').

    Parameters

        filename – path to the file to be read
        remove_duplicates – boolean; if True, removes duplicate points from the beginning of each section
        data_wrapper – class used to wrap the returned data
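The effect of remove_duplicates=True can be illustrated with a small sketch. `strip_leading_duplicate` is a hypothetical helper, not library code; it assumes the duplicate to remove is a section's first point repeating the last point of its parent section.

```python
def strip_leading_duplicate(parent_points, section_points):
    """Sketch: drop a section's first point if it repeats the parent's last point."""
    if section_points and parent_points and section_points[0] == parent_points[-1]:
        return section_points[1:]
    return section_points
```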