NeuroM 0.0.14

NeuroM is a Python-based toolkit for the analysis and processing of neuron morphologies.


Contents

NeuroM Quick-Start Tutorial

Dependencies

Build-time and runtime

It is highly recommended that all of these, except enum34, be installed on your system before attempting to install NeuroM. The NeuroM package installation takes care of installing enum34.

Installing and building
Testing and documentation

These dependencies are not needed for installing and running NeuroM, but are useful for those who want to contribute to its development.

Installation

It is recommended that you use pip to install NeuroM into a virtualenv:

Virtualenv setup
$ virtualenv --system-site-packages nrm   # creates a virtualenv called "nrm" in nrm directory
$ source nrm/bin/activate                 # activates virtualenv
(nrm)$                                    # now we are in the nrm virtualenv

Here, the --system-site-packages option has been used. This is because dependencies such as matplotlib aren’t trivial to build in a virtualenv. This setting allows python packages installed in the system to be used inside the virtualenv.

The prompt indicates that the virtualenv has been activated. To de-activate it,

(nrm)$ deactivate

Note that you do not have to work in the nrm directory. This is where python packages will get installed, but you can work anywhere on your file system, as long as you have activated the virtualenv.

Note

In the following code samples, the prompts (nrm)$ and $ indicate that the virtualenv is activated or deactivated, respectively.

Note

In the following code samples, the prompt >>> indicates a Python interpreter session started with the virtualenv activated, which gives access to the neurom installation.

NeuroM installation

Once the virtualenv is set up, there are three ways to install NeuroM:

  1. From the official Python Package Index server (PyPI)
  2. From the git repository
  3. From source (for NeuroM developers)
Install from the official PyPI server

Install the latest release:

(nrm)$ pip install neurom

Install a specific version:

(nrm)$ pip install neurom==1.2.3
Install from git

Install a particular release:

(nrm)$ pip install git+https://github.com/BlueBrain/NeuroM.git@neurom-v0.0.8

Install the latest version:

(nrm)$ pip install git+https://github.com/BlueBrain/NeuroM.git
Install from source

Clone the repository and install it:

(nrm)$ git clone https://github.com/BlueBrain/NeuroM.git
(nrm)$ pip install -e ./NeuroM

This installs NeuroM into your virtualenv in “editable” mode. That means that changes made to the source code after the installation procedure are seen by the installed package. To install in read-only mode, omit the -e.

Quick and easy analysis

The neurom.ezy module

The neurom.ezy module contains a Neuron class and helper functions that make it easy to load neuron morphologies from files into neurom data structures. It provides convenient methods to query various properties of the neurons. The functionality is limited, but it is hoped that it will suffice for most analyses.

These are some of the properties that can be obtained for a single neurite type or for all neurites regardless of type:

  • Segment lengths
  • Section lengths
  • Number of sections
  • Number of sections per neurite
  • Number of neurites
  • Local and remote bifurcation angles
  • Section path distances
  • Section radial distances

There are also methods for obtaining the soma radius and for plotting a neuron in 2 and 3 dimensions.
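
For instance, loading a neuron and querying a few of these properties looks like this (a minimal sketch; the file path is illustrative, and more complete examples are given in the Examples section below):

>>> from neurom import ezy
>>> nrn = ezy.load_neuron('some/data/path/morph_file.swc')
>>> nrn.get_soma_radius()
>>> nrn.get_n_sections_per_neurite(ezy.TreeType.basal_dendrite)
>>> nrn.get_local_bifurcation_angles()
>>> fig, ax = ezy.view(nrn)  # 2D view of the neuron
>>> fig.show()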

See also

The neurom.ezy and neurom.ezy.Neuron documentation for more details and examples.

Data checking applications

There are two user-friendly data checking applications. raw_data_check checks the basic consistency of raw data, and morph_check applies further semantic checks in order to determine whether the data is suitable for constructing a neuron structure and whether certain defects are present within that structure. Both can be invoked from the command line, and take as their main argument the path to either a single file or a directory of morphology files.

For example,

$ morph_check test_data/swc/Neuron.swc # single file
INFO: ========================================
INFO: Check file test_data/swc/Neuron.swc...
INFO:                     Has valid soma? PASS
INFO:               All points connected? PASS
INFO:                 Has basal dendrite? PASS
INFO:                           Has axon? PASS
INFO:                Has apical dendrite? PASS
INFO:            Nonzero segment lengths? PASS
INFO:            Nonzero section lengths? PASS
INFO:              Nonzero neurite radii? PASS
INFO:                       Check result: PASS
INFO: ========================================

$ morph_check test_data/swc # all files in directory
# loops over all morphology files found in test_data/swc

For more information, use the help option:

$ morph_check --help
....

$ raw_data_check --help
....

Examples

Morphology file data consistency checks
(nrm)$ morph_check some/data/path/morph_file.swc # single file
INFO: ================================
INFO: Check file some/data/path/morph_file.swc...
INFO: Has valid soma? PASS
INFO: Has Apical Dendrite? PASS
INFO: Has Basal Dendrite? PASS
INFO: All neurites have non-zero radius? PASS
INFO: All segments have non-zero length? PASS
INFO: All sections have non-zero length? PASS
INFO: Check result: PASS
INFO: ================================


(nrm)$ morph_check some/data/path # all files in directory
....
Basic ezy.Neuron usage
  • Load a neuron and obtain some information from it:
>>> from neurom import ezy
>>> nrn = ezy.load_neuron('some/data/path/morph_file.swc')
>>> apical_seg_lengths = nrn.get_segment_lengths(ezy.TreeType.apical_dendrite)
>>> axon_sec_lengths = nrn.get_section_lengths(ezy.TreeType.axon)
  • Visualize a neuronal morphology:
>>> # Initialize nrn as above
>>> fig, ax = ezy.view(nrn)
>>> fig.show()
Basic ezy examples script

These basic examples illustrate the type of morphometrics that can be easily obtained from the ezy module, without the need for any other neurom modules or tools.

The idea here is to pre-package the most common analyses so that users can obtain the morphometrics with minimal knowledge of python and neurom.

'''Easy analysis examples

These examples highlight most of the pre-packaged neurom.ezy.Neuron
morphometrics functionality.

'''

from __future__ import print_function
from pprint import pprint
from neurom import ezy
import numpy as np


def stats(data):
    '''Dictionary with summary stats for data

    Returns:
        dictionary with length, mean, sum, standard deviation,\
            min and max of data
    '''
    return {'len': len(data),
            'mean': np.mean(data),
            'sum': np.sum(data),
            'std': np.std(data),
            'min': np.min(data),
            'max': np.max(data)}


def pprint_stats(data):
    '''Pretty print summary stats for data'''
    pprint(stats(data))


if __name__ == '__main__':

    filename = 'test_data/swc/Neuron.swc'

    #  load a neuron from an SWC file
    nrn = ezy.load_neuron(filename)

    # Get some soma information
    # Soma radius and surface area
    print("Soma radius", nrn.get_soma_radius())
    print("Soma surface area", nrn.get_soma_surface_area())

    # Get information about neurites
    # Most neurite data can be queried for a particular type of neurite.
    # The allowed types are members of the TreeType enumeration.
    # NEURITE_TYPES is a list of valid neurite types.

    # We start by calling methods for different neurite types separately
    # to warm up...

    # number of neurites
    print('Number of neurites (all):', nrn.get_n_neurites())
    print('Number of neurites (axons):', nrn.get_n_neurites(ezy.TreeType.axon))
    print('Number of neurites (apical dendrites):',
          nrn.get_n_neurites(ezy.TreeType.apical_dendrite))
    print('Number of neurites (basal dendrites):',
          nrn.get_n_neurites(ezy.TreeType.basal_dendrite))

    # number of sections
    print('Number of sections:', nrn.get_n_sections())
    print('Number of sections (axons):', nrn.get_n_sections(ezy.TreeType.axon))
    print('Number of sections (apical dendrites):',
          nrn.get_n_sections(ezy.TreeType.apical_dendrite))
    print('Number of sections (basal dendrites):',
          nrn.get_n_sections(ezy.TreeType.basal_dendrite))

    # number of sections per neurite
    print('Number of sections per neurite:',
          nrn.get_n_sections_per_neurite())
    print('Number of sections per neurite (axons):',
          nrn.get_n_sections_per_neurite(ezy.TreeType.axon))
    print('Number of sections per neurite (apical dendrites):',
          nrn.get_n_sections_per_neurite(ezy.TreeType.apical_dendrite))
    print('Number of sections per neurite (basal dendrites):',
          nrn.get_n_sections_per_neurite(ezy.TreeType.basal_dendrite))

    # OK, this is getting repetitive, so lets loop over valid neurite types.
    # The following methods return arrays of measurements. We will gather some
    # summary statistics for each and print them.

    # Section lengths for all and different types of neurite
    for ttype in ezy.NEURITE_TYPES:
        sec_len = nrn.get_section_lengths(ttype)
        print('Section lengths (', ttype, '):', sep='')
        pprint_stats(sec_len)

    # Segment lengths for all and different types of neurite
    for ttype in ezy.NEURITE_TYPES:
        seg_len = nrn.get_segment_lengths(ttype)
        print('Segment lengths (', ttype, '):', sep='')
        pprint_stats(seg_len)

    # Section radial distances for all and different types of neurite
    # Careful! Here we need to pass tree type as a named argument
    for ttype in ezy.NEURITE_TYPES:
        sec_rad_dist = nrn.get_section_radial_distances(neurite_type=ttype)
        print('Section radial distance (', ttype, '):', sep='')
        pprint_stats(sec_rad_dist)

    # Section path distances for all and different types of neurite
    # Careful! Here we need to pass tree type as a named argument
    for ttype in ezy.NEURITE_TYPES:
        sec_path_dist = nrn.get_section_path_distances(neurite_type=ttype)
        print('Section path distance (', ttype, '):', sep='')
        pprint_stats(sec_path_dist)

    # Local bifurcation angles for all and different types of neurite
    for ttype in ezy.NEURITE_TYPES:
        local_bifangles = nrn.get_local_bifurcation_angles(ttype)
        print('Local bifurcation angles (', ttype, '):', sep='')
        pprint_stats(local_bifangles)

    # Remote bifurcation angles for all and different types of neurite
    for ttype in ezy.NEURITE_TYPES:
        rem_bifangles = nrn.get_remote_bifurcation_angles(ttype)
        print('Remote bifurcation angles (', ttype, '):', sep='')
        pprint_stats(rem_bifangles)

    # Neuron's bounding box
    print('Bounding box ((min x, y, z), (max x, y, z))',
          nrn.bounding_box())
Advanced ezy examples script

These slightly more complex examples illustrate what can be done with the ezy module in combination with various generic iterators and simple morphometric functions.

The idea here is that there is a great deal of flexibility to build new analyses based on a limited number of orthogonal iterator and morphometric components that can be combined in many ways. Users with some knowledge of python and neurom can easily implement code to obtain new morphometrics.

All of the examples in the previous sections can be implemented in a similar way to those presented here.

'''Advanced easy analysis examples

These examples highlight more advanced neurom.ezy.Neuron
morphometrics functionality.

'''

from __future__ import print_function
from neurom import segments as seg
from neurom import sections as sec
from neurom import bifurcations as bif
from neurom import points as pts
from neurom import iter_neurites
from neurom import ezy
from neurom.core.types import tree_type_checker
from neurom.analysis import morphtree as mt
import numpy as np


if __name__ == '__main__':

    filename = 'test_data/swc/Neuron.swc'

    #  load a neuron from an SWC file
    nrn = ezy.load_neuron(filename)

    # Some examples of what can be done using iteration
    # instead of pre-packaged functions that return lists.
    # The iterations give us a lot of flexibility: we can map
    # any function that takes a segment or section.

    # Get length of all neurites in cell by iterating over sections,
    # and summing the section lengths
    print('Total neurite length:',
          sum(iter_neurites(nrn, sec.end_point_path_length)))

    # Get length of all neurites in cell by iterating over segments,
    # and summing the segment lengths.
    # This should yield the same result as iterating over sections.
    print('Total neurite length:',
          sum(iter_neurites(nrn, seg.length)))

    # get volume of all neurites in cell by summing over segment
    # volumes
    print('Total neurite volume:',
          sum(iter_neurites(nrn, seg.volume)))

    # get area of all neurites in cell by summing over segment
    # areas
    print('Total neurite surface area:',
          sum(iter_neurites(nrn, seg.area)))

    # get total number of points in cell.
    # iter_neurites needs a mapping function, so we pass the identity.
    print('Total number of points:',
          sum(1 for _ in iter_neurites(nrn, pts.identity)))

    # get mean radius of points in cell.
    # p[COLS.R] yields the radius for point p.
    print('Mean radius of points:',
          np.mean([r for r in iter_neurites(nrn, pts.radius)]))

    # get mean radius of segments
    print('Mean radius of segments:',
          np.mean(list(iter_neurites(nrn, seg.radius))))

    # get stats for the segment taper rate, for different types of neurite
    for ttype in ezy.NEURITE_TYPES:
        seg_taper_rate = list(iter_neurites(nrn, seg.taper_rate,
                                            tree_type_checker(ttype)))
        print('Segment taper rate (', ttype,
              '):\n  mean=', np.mean(seg_taper_rate),
              ', std=', np.std(seg_taper_rate),
              ', min=', np.min(seg_taper_rate),
              ', max=', np.max(seg_taper_rate),
              sep='')

    # Number of bifurcation points.
    print('Number of bifurcation points:', bif.count(nrn))

    # Number of bifurcation points for apical dendrites
    print('Number of bifurcation points (apical dendrites):',
          sum(1 for _ in iter_neurites(nrn, bif.identity,
                                       tree_type_checker(ezy.TreeType.apical_dendrite))))

    # Maximum branch order
    # We create a custom branch_order section_function
    # and use the generalized iteration method
    @sec.section_function(as_tree=True)
    def branch_order(s):
        '''Get the branch order of a section'''
        return mt.branch_order(s)

    print('Maximum branch order:',
          max(bo for bo in iter_neurites(nrn, branch_order)))

    # Neuron's bounding box
    print('Bounding box ((min x, y, z), (max x, y, z))',
          nrn.bounding_box())

A Tour of NeuroM

This is a more in-depth guide to NeuroM. It will consist of explanations of the overall design philosophy and the main components. The target audience consists of users who want to perform analyses beyond what is described in the quick and easy section.

Design philosophy

NeuroM is designed to be lightweight, simple, testable and extensible. It consists mainly of reusable components with limited functionality that can be combined to construct more complex algorithms. It makes few assumptions about input data, and prefers to fail early and loudly over implicitly fixing any data errors.

As a part of its design, it does a limited amount of processing on input data: it attempts to re-arrange the data into a convenient internal format without adding or removing any information. It can be roughly considered to consist of

  1. A data block containing re-arranged input data.
  2. Tree structures indexing sub-trees in the data block.
  3. Iterators providing different types of iteration over those trees.
  4. Functions to extract information from each iteration point.

It includes higher level classes representing the soma and a whole neuron. A neuron is simply a collection of trees representing the neurites, and a soma.

Data Layout

The internal data layout is simply an array of points. It is stored in memory as a 2-dimensional numpy array, with each row representing a data point. This data block is constructed when reading in a morphology file in any of the accepted formats. No information is added or removed from the input data.
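
As a rough illustration, the sketch below builds such a data block by hand. The numbers are made up, and the column ordering here follows the [X, Y, Z, R, TYPE, ID, PID] point layout described in the morphology definitions section; the exact internal column layout is defined in neurom.core.dataformat.

import numpy as np

# an illustrative two-point data block: a soma point and a child point
data_block = np.array([[0.0, 0.0, 0.0, 1.0, 1, 1, -1],
                       [0.0, 5.0, 0.0, 0.5, 3, 2, 1]])

print(data_block.shape)  # (number of points, number of columns)
print(data_block[1])     # a single row is a single measurement point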

Trees

Trees are hierarchical, recursive structures used to represent individual neurites in a morphology. NeuroM represents neurites as trees containing measurement point information at each node. The role of a tree is simply to maintain the hierarchy of its components. As will be shown, most of the interesting work is done through a combination of different types of tree iteration, and different mapping and conversion functions.

Iterators

NeuroM provides different means of iterating over a single tree. These provide means to visit different nodes of the tree - e.g. all nodes, all nodes between a given node and the root node, forking points, end points - but also different groupings of nodes - e.g. sections (sequences of nodes between bifurcation points), triplets (sequences of three consecutive nodes), segments (pairs of consecutive nodes).

The idea behind this general approach is that most interesting features of a tree can be obtained by mapping or reducing a suitable iteration sequence with a suitable mapping or reducing function. The general form of a mapping is

tree = ....
xs = [get_x(i) for i in some_iterator(tree)]

An example of a reduce operation is

tree = ....
y = sum(get_y(i) for i in some_iterator(tree))

NeuroM provides various functions that can be applied to nodes, segments, triplets and sections (these take the place of get_x and get_y in the examples above). With knowledge of iterators and functions, it is possible to implement great things in single lines of code. However, some helper functions have been provided for the most common tree operations. We will get back to that later. First, let’s start by listing the different iterator types, before implementing some real examples in the cook-book section.

Currently, these are the tree iterators provided in the neurom.core.tree module. They are functions that have a tree object as parameter and return a suitable iterator:

  • ipreorder: depth-first pre-order traversal of nodes
  • ipostorder: depth-first post-order traversal of nodes
  • iupstream: iterate to root node of tree
  • ileaf: leaf or end-nodes
  • iforking_point: nodes with more than one child
  • ibifurcation_point: nodes with two children
  • isegment: pairs of consecutive nodes
  • itriplet: triplets of consecutive nodes
  • isection: sequences of points between forking points. These include the forking point. Points joining sections are repeated.

Todo

Generate above list from docstrings
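
As a quick illustration of these iterators before moving on, here is a minimal sketch (using the load_neuron function shown in the cook-book below) that counts the leaves and bifurcation points of a single neurite tree by reducing over the corresponding iterators:

from __future__ import print_function
from neurom.io.utils import load_neuron
from neurom.core.tree import ileaf, ibifurcation_point

nrn = load_neuron('test_data/swc/Neuron.swc')
tree = nrn.neurites[0]  # first neurite tree of the neuron

# reduce over the iterators to count nodes of each kind
n_leaves = sum(1 for _ in ileaf(tree))
n_bifurcations = sum(1 for _ in ibifurcation_point(tree))
print('leaves:', n_leaves, 'bifurcation points:', n_bifurcations)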

All of these iterators resolve to tree objects, but most analyses are interested in the data stored in each node of the tree. This is kept in a value field of the tree. To ease access to the data, an iterator adaptor is provided:

  • val_iter

This transforms a tree iterator so that it converts trees to values. It works for nested structures, such as segments, triplets and sections. So for example, printing the radius of all leaves of a tree would be done like this:

from neurom.core.tree import ileaf, val_iter
t = ... # a neurom.core.tree.Tree object
for leaf in val_iter(ileaf(t)):
    print(leaf[3])  # radius is the 4th component of the data

Cook-book

Now, for some real life examples. These examples rely on trees. An easy way to get some is to load a morphology file into a neuron object.

from neurom.io.utils import load_neuron
nrn = load_neuron('test_data/swc/Neuron.swc')
trees = nrn.neurites

We will assume trees has been obtained in a similar way in the following examples.

Get the total length of a tree

This can be achieved by summing the lengths of all the segments in the tree. For this, we iterate over all segments, calculate each segment length, and sum all lengths together:

from neurom.core.tree import isegment, val_iter
from neurom.analysis.morphmath import segment_length
tree = trees[0]
tree_length = sum(segment_length(s) for s in val_iter(isegment(tree)))
Get the path length to an end-point

This is the distance between a leaf node and the root, and can be calculated by iterating upstream from the leaf to the root, summing the distance as we go along:

from neurom.core.tree import isegment, ileaf, iupstream, val_iter
from neurom.analysis.morphmath import segment_length
tree = trees[0]
# for demonstration purposes, get the first leaf we find:
first_leaf = next(ileaf(tree))
# now iterate segment-wise, upstream, and sum the lengths
path_len = sum(segment_length(s) for s in val_iter(isegment(first_leaf, iupstream)))

This example is conceptually the same as the previous one, except for one crucial point: we start the iteration from a leaf node, and iterate towards the root. This is the reason for the extra complexity:

  • We use the leaf iterator ileaf to get the first leaf node. This is somewhat beyond the scope of this example, but it is an interesting use of a different kind of iterator.
  • We iterate in segments using isegment, but we tell it to iterate upstream. That is what the second parameter to isegment does: it transforms the order of iteration.

A variant of the last example is to use the helper function neurom.core.tree.imap_val. This is an iterator mapping function that transforms the target of the iteration from a tree object to the data stored in the tree. In other words, it applies val_iter internally:

from neurom.core.tree import isegment, ileaf, iupstream, imap_val
from neurom.analysis.morphmath import segment_length

first_leaf = ... # get a leaf of the tree (see previous example)

path_len = sum(imap_val(segment_length, isegment(first_leaf, iupstream)))

If this all seems too complicated, remember that it is a general approach that will allow you to do many more things other than getting the path length to the root. But if that is all you care about, NeuroM has a packaged function for it:

from neurom.analysis.morphtree import path_length
...
# assume leaf is a leaf node obtained by means that are irrelevant to this example
path_len = path_length(leaf)
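
Combining this packaged function with the ileaf iterator from above gives, for example, the longest path from the root to any end-point of a tree (a sketch, assuming tree has been obtained as in the earlier examples):

from neurom.core.tree import ileaf
from neurom.analysis.morphtree import path_length

# maximum path distance from the root to any end-point
max_path_len = max(path_length(l) for l in ileaf(tree))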

NeuroM morphology definitions

These are NeuroM specific working definitions of various components of neuron morphologies.

This is to be based on Basic Definitions to be used in NeuroM

Point

A point is a vector of numbers [X, Y, Z, R, TYPE, ID, PID] where the components are

  • X, Y, Z: Cartesian coordinates of position
  • R: Radius
  • TYPE: One of the NeuroM valid point types
  • ID: Unique identifier of the point.
  • PID: ID of the parent of the point.

Typically only the first four or five components are of interest to morphology analysis. The rest are used to construct the soma and hierarchical tree structures of the neuron, and to check its semantic validity.

Note

For most of what follows, it suffices to consider a point as a vector of [X, Y, Z, R, TYPE]. The remaining components ID and PID can be considered book-keeping.

Todo

Point types may need to be restricted to align SWC with H5. This is dependent on future H5 specs.

Soma

A soma can be represented by one, three, or more points. The soma is classified based on the number of points it contains, as follows:

  • Type A: 1 point defining the center and radius.
  • Type B: 3 points. Only the centers of the points are considered. The first point defines the center. The radius is estimated from the mean distance between the center and the two remaining points.
  • Type C: More than three points. Only the centers of the points are considered. The first point defines the center. The radius is estimated from the mean distance between the center and the remaining points.

Todo

Expand list if and when specifications require new types of soma.

The soma interface exports a center and radius. These can be calculated in different ways, but the default is to use the center and radius for type A and the mean center and radius for types B and C.
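
As an illustration of the type B/C radius estimate described in the list above (taking the first point as the center), here is a minimal numpy sketch with made-up points; NeuroM's soma classes perform this calculation internally.

import numpy as np

# soma points as rows of [X, Y, Z, R, ...]; values are purely illustrative
soma_points = np.array([[0.0,  0.0, 0.0, 1.0],
                        [0.0,  2.0, 0.0, 1.0],
                        [0.0, -2.0, 0.0, 1.0]])

center = soma_points[0, :3]  # first point defines the center
rest = soma_points[1:, :3]
radius = np.mean(np.linalg.norm(rest - center, axis=1))  # mean distance to center
print(radius)  # 2.0 for these points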

Todo

In the future, type B may be interpreted as 3 points on an ellipse. In this case, the points would have to be non-collinear. Currently there is no such restriction.


Neurite tree

A neurite consists of a tree structure with a morphology point at each vertex or node. The tree structure implies the following:

  • A node can only have one parent.
  • A node can have an arbitrary number of children.
  • No loops are present in the structure.

Todo

Should neurite trees be restricted to being binary trees? If so, no more than two children per node would be allowed.

Different types of points are allowed in the same tree as long as the same conventions are followed.

Todo

The conventions governing the types of points in a neurite tree need to be well defined

In NeuroM neurite trees are implemented using the recursive structure neurom.core.tree.Tree, with each node holding a reference to a morphology point.
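
The snippet below is not the NeuroM class itself; it is just a minimal, self-contained sketch of the structural constraints listed above: each node holds a value (e.g. a morphology point), keeps a single parent reference, and may have any number of children.

class Node(object):
    '''Minimal tree node: one parent, arbitrarily many children, no loops'''
    def __init__(self, value):
        self.value = value    # e.g. a morphology point [X, Y, Z, R, TYPE]
        self.parent = None
        self.children = []

    def add_child(self, node):
        '''Attach node as a child; setting its parent keeps the structure a tree'''
        node.parent = self
        self.children.append(node)
        return node

root = Node([0.0, 0.0, 0.0, 1.0, 3])
child = root.add_child(Node([0.0, 1.0, 0.0, 1.0, 3]))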

Neuron

A neuron structure consists of a single soma and a collection of trees.

The trees that are expected to be present depend on the type of cell:

  • Interneuron (IN): basal dendrite, axon
  • Pyramidal cell (PC): basal dendrite, apical dendrite, axon
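
As a sketch of how this can be checked with the ezy calls shown earlier (the file path is the test file used in the example scripts), one can count the trees of each type in a loaded neuron:

from __future__ import print_function
from neurom import ezy

nrn = ezy.load_neuron('test_data/swc/Neuron.swc')

# a pyramidal cell is expected to have all three neurite types
for ttype in (ezy.TreeType.axon,
              ezy.TreeType.basal_dendrite,
              ezy.TreeType.apical_dendrite):
    print(ttype, nrn.get_n_neurites(ttype))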

API Documentation

  • neurom.core: Core functionality and data types of NeuroM
  • neurom.core.types: Type enumerations
  • neurom.core.tree: Generic tree class and iteration functions
  • neurom.core.neuron: Neuron classes and functions
  • neurom.core.point: Point classes and functions
  • neurom.core.dataformat: Data format definitions
  • neurom.io: IO operations module for NeuroM
  • neurom.io.readers: Module for morphology data loading and access
  • neurom.io.check: Module with consistency/validity checks for raw data blocks
  • neurom.io.utils: Utility functions and classes for higher level RawDataWrapper access
  • neurom.io.swc: Module for morphology SWC data loading
  • neurom.io.hdf5: Module for morphology HDF5 data loading
  • neurom.view: View tools to visualize morphologies
  • neurom.view.common: Module containing the common functionality to be used by view-plot modules
  • neurom.view.view: Python module of NeuroM to visualize morphologies
  • neurom.analysis: Morphometrics of neurons
  • neurom.analysis.morphmath: Mathematical and geometrical functions used to compute morphometrics
  • neurom.analysis.morphtree: Basic functions used for tree analysis
  • neurom.ezy: Quick and easy neuron morphology analysis tools
  • neurom.ezy.neuron: Neuron class with basic analysis and plotting capabilities
  • neurom.ezy.population: Population class with basic analysis and plotting capabilities
  • neurom.exceptions: Module containing NeuroM specific exceptions
  • neurom.stats: Statistical analysis helper functions

Developer Documentation

Development Workflow

  • Fork from github
  • Develop on your fork
  • Test locally
  • Make a pull request

Before making a pull request, make sure that your fork is up to date and that all the tests pass locally. This will make it less likely that your pull request will be rejected for making breaking changes or for failing the test requirements.

Running the tests

The tests require that you have cloned the repository, since the test code is not distributed in the package. It is recommended to use nosetests for this. There are two options:

Use the provided Makefile to run the tests using make:

$ git clone https://github.com/BlueBrain/NeuroM.git
$ cd NeuroM
$ make test

This runs pep8, pylint and the unit tests in sequence.

This method takes care of installing all extra dependencies needed for running the tests, diagnosing the results, and performing linting on the source code. These dependencies are installed into a virtualenv named neurom_test_venv.

The Makefile also has targets for running only pylint and pep8 individually:

$ make lint       # runs pep8 and pylint if that succeeds
$ make run_pep8   # run only pep8
$ make run_pylint # run only pylint

Note that you can also install the test dependencies and run the tests inside of your own virtualenv:

(nrm)$ pip install -r requirements_dev.txt

This installs the following packages into your virtualenv unless they are already installed:

mock>=1.3.0
pep8>=1.6.0
astroid>=1.3,<1.4
pylint>=1.4.0,<1.5
nose>=1.3.0
coverage==3.7
nosexcover>=1.0.8
sphinx>=1.3.0

Then, run the tests manually in the virtualenv. For example,

(nrm)$ nosetests -v --with-coverage --cover-min-percentage=100 --cover-package neurom

Warning

To ensure that the test requirements are the same as those run in continuous integration, you should run the tests using the make test command. This is the same command used in continuous integration. Failure to pass will result in the pull request being rejected.

Building the Documentation

The documentation requires that you clone the repository. Once you have done that, there’s a make target to build the HTML version of the documentation:

$ git clone https://github.com/BlueBrain/NeuroM.git
....
$ cd NeuroM # repository location
$ make doc

This builds the documentation in doc/build. To view it, point a browser at doc/build/html/index.html

License

Copyright © 2015, Ecole Polytechnique Federale de Lausanne, Blue Brain Project All rights reserved.

This file is part of NeuroM <https://github.com/BlueBrain/NeuroM>

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

  1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
  2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
  3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS “AS IS” AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

