Nonseq #245 (Draft)

Wants to merge 365 commits into `develop` from `nonseq`.

Changes from 139 commits (of 365 total)

Commits
bb770e8
Documentation
Willian-Girao May 28, 2024
221a958
Refactor
Willian-Girao May 28, 2024
1eaafe4
Converting SNN to DynapcnnNetwork and training the hw model directly
Willian-Girao May 28, 2024
d744dd6
Refactor + Bug Fixing
Willian-Girao May 30, 2024
5edcf9a
Refactor
Willian-Girao May 30, 2024
c55d185
Refactor
Willian-Girao May 30, 2024
db9c670
Missing from previous commit
Willian-Girao May 31, 2024
b938d3f
full new deployment example #1
Willian-Girao May 31, 2024
a33ca0d
removed folder with algo. exploration simulations
Willian-Girao May 31, 2024
cf9ad4e
Merge branch 'develop' into nonseq
bauerfe Jul 5, 2024
74f1118
(WIP) new baseline DynapcnnLayer
Willian-Girao Aug 20, 2024
7d1b811
(WIP) memory summary methods added
Willian-Girao Aug 22, 2024
b65428a
(WIP) functionality in 'build_from_graph()' now uses a handler class …
Willian-Girao Aug 22, 2024
763142a
(WIP) old version of DynapcnnLayer (with network-level knowledge) bec…
Willian-Girao Aug 22, 2024
d0fd696
DynapcnnLayerHandler instances passed as argument to access entry poi…
Willian-Girao Aug 22, 2024
5c8c4fe
build_from_graph() now returns a dict similar to the one containin th…
Willian-Girao Aug 22, 2024
6ac2f39
(WIP) DynapcnnLayerHandlers passed down to DynapcnnNetworkModule
Willian-Girao Aug 22, 2024
562c634
build_config() now using class DynapcnnLayerHandler to write configur…
Willian-Girao Aug 22, 2024
70a5ef3
get_valid_mapping() now writting core ID to the handler of a Dynapcnn…
Willian-Girao Aug 22, 2024
3c536b4
giving access to DynapcnnLayerHandler
Willian-Girao Aug 22, 2024
44b0b96
self.dynapcnnlayers_handlers used for chip configuration and removed …
Willian-Girao Aug 22, 2024
c6a829f
find_core_id() modified to use handlers dict instead of DynapcnnNetwo…
Willian-Girao Aug 22, 2024
16ac0ca
(WIP) handlers needed in the forward function - both HW and 'torch's …
Willian-Girao Aug 22, 2024
7d57984
deprecated previous implementation of DynapcnnLayer (with network-lev…
Willian-Girao Aug 22, 2024
a1cecd4
refactor: DynapcnnLayer constructor
Willian-Girao Sep 13, 2024
72974e4
refactor: encapsulating instance variables with @property
Willian-Girao Sep 13, 2024
7d63bf1
refactor: superfluous variables removed
Willian-Girao Sep 13, 2024
8fb101f
refactor: summary() using self._pool directly in its returned dictionary
Willian-Girao Sep 13, 2024
e203855
refactor: using self._get_conv_output_shape() in place of previous in…
Willian-Girao Sep 13, 2024
72c8e6e
Remove obsolete files dynapcnn_layer_v2.py and dynapcnn_layer_old.py
bauerfe Sep 17, 2024
f9a797d
Simplify DynapcnnLayer by removing _pool_layers attribute.
bauerfe Sep 17, 2024
dda052d
DynapcnnLayer: Reintroduce methods get_output_shape and zero_grad
bauerfe Sep 17, 2024
210a34d
Keep DynapcnnCompatibleNetwork for now. (remove in future release to …
bauerfe Sep 17, 2024
05546a4
Minor modifications to dynapcnn.py
bauerfe Sep 17, 2024
7c3f7a2
Minor changes to NIRGraphExtractor.py:
bauerfe Sep 17, 2024
6a04d88
Rename NIRGraphExtractor.py to nir_graph_extractor.py
bauerfe Sep 17, 2024
9986b0b
Refactor NIRtoDynapcnnNetworkGraph._get_edges_from_nir
bauerfe Sep 17, 2024
a18a060
Refactor nir_graph_extractor.py
bauerfe Sep 26, 2024
e138673
NIRtoDynapcnnNetowrkGraph properties return copies so that original o…
bauerfe Sep 26, 2024
f48f43f
Refactor edge_handler
bauerfe Sep 27, 2024
0a6b616
Improve type hints in edges handler
bauerfe Sep 27, 2024
8fcf594
Run black and isort
bauerfe Sep 27, 2024
3b78e69
Graph extractor removes nodes-in place
bauerfe Oct 2, 2024
8e3c7cd
Fix indentation of DynapcnnCompatibleNetwork
bauerfe Oct 2, 2024
32bbfed
Remove dependency on DynapcnnNetwork for Dynapcnn Config builder to p…
bauerfe Oct 2, 2024
4ab8492
Fix multiple minor bugs causing graph extractor test to fail
bauerfe Oct 2, 2024
8c28e61
DynapcnnNetwork does not need to keep track of removed nodes anymore
bauerfe Oct 2, 2024
847ff1b
Merge branch 'nonseq' of github.com:synsense/sinabs into nonseq
bauerfe Oct 2, 2024
c48687e
Fix set merger in GraphExtractor
bauerfe Oct 2, 2024
414e7a7
Merge branch 'nonseq' of github.com:synsense/sinabs into nonseq
bauerfe Oct 2, 2024
83be978
DynapcnnNetwork does not need to copy entry nodes from tracer
bauerfe Oct 2, 2024
0359610
Properly update modules map when removing nodes from graph extractor
bauerfe Oct 2, 2024
1b02d23
Nir Graph Extractor: Rename remove-nodes method
bauerfe Oct 2, 2024
6c59b3a
Fix dynapcnnlayer test to use layer handler
bauerfe Oct 2, 2024
8052cde
Unified name GraphExtractor
bauerfe Oct 2, 2024
5a1c6d1
Simplify graph extraction from NIR
bauerfe Oct 2, 2024
57abd11
Simplify name-to-module map generation
bauerfe Oct 2, 2024
defd334
Minor change to exception: Sequential is Module
bauerfe Oct 2, 2024
8efe58f
Fix failing graph extractor test
bauerfe Oct 2, 2024
80c9ef9
Refactor GraphExtractor.remove_nodes_by_class method
bauerfe Oct 2, 2024
86889b8
Remove need for merge_handler: Remove merge nodes directly as "ignore…
bauerfe Oct 3, 2024
80dcff1
GraphExtractor does formal verification of extracted graph.
bauerfe Oct 3, 2024
d7d0e93
General linting and minor refactoring
bauerfe Oct 3, 2024
46078a4
sinabs_edges_utils becomes connectivity_specs
bauerfe Oct 3, 2024
19d4ebd
Fix minor bugs
bauerfe Oct 3, 2024
aec0951
Remove obsolete graph_tracer.py
bauerfe Oct 3, 2024
dde3b78
Tidy up edge types
bauerfe Oct 3, 2024
9aeb937
Rename module_map to indx_2_module_map for consistency and clarity
bauerfe Oct 3, 2024
3b5180c
Node to dcnnl mapping: support pooling-pooling edegs. work independen…
bauerfe Oct 3, 2024
c3b5fcf
Fix bugs from previous commit
bauerfe Oct 3, 2024
088157b
Try reducing io shape extraction effort
bauerfe Oct 4, 2024
b4fabcf
(WIP) Update dynapcnn layer instantiation
bauerfe Oct 4, 2024
a44d40e
Fix non-optimal unpacking syntax
bauerfe Oct 4, 2024
6195461
Fix syntax bug
bauerfe Oct 4, 2024
5ec0ae5
Fix indentation
bauerfe Oct 4, 2024
a173f1a
Run black and isort
bauerfe Oct 4, 2024
8c6ec25
Fix missing import
bauerfe Oct 4, 2024
452af1a
Rerun black
bauerfe Oct 4, 2024
befc244
(WIP) Update dynapcnn layer instantiation: handling of input shapes f…
bauerfe Oct 8, 2024
af0922f
Remove methods from DynapcnnLayerHandler that will be obsolete after …
bauerfe Oct 8, 2024
03cb842
Fix type hint for `remve_nodes_by_class` method
bauerfe Oct 9, 2024
e84e2fa
Refactor DynapcnnLayer generation
bauerfe Oct 9, 2024
95e6c6d
Bugfix: layer_info always has "destinations" entry
bauerfe Oct 9, 2024
3cbd1d6
Fix bugs related to DynapcnnLayer refactoring
bauerfe Oct 9, 2024
c9ed7a2
(WIP): Update dynapcnn layer tests
bauerfe Oct 9, 2024
ebb2be3
(WIP): Update DynapcnnLayer tests
bauerfe Oct 10, 2024
fb0e661
Finish updating dynapcnn layer unit tests
bauerfe Oct 11, 2024
8d4c340
Separate dynapcnn_layer_utils module
bauerfe Oct 11, 2024
39192e9
Enable pooling without subsequent destination layer
bauerfe Oct 14, 2024
111a1a0
Final layer destinations get unique negative integers
bauerfe Oct 14, 2024
7875238
(WIP) DynapcnnNetwork forward pass happens in DynapcnnNetworkModule. …
bauerfe Oct 14, 2024
ed6345b
Rerun black
bauerfe Oct 14, 2024
537c7a7
Add complete type hint to DynapcnnNetwork.forward
bauerfe Oct 14, 2024
227b484
Update dynapcnn network unit tests
bauerfe Oct 15, 2024
bb3e211
Ensure exit layers generate output by setting destination None
bauerfe Oct 15, 2024
25a14d2
Remove need for DynapcnnLayerHandler (WIP)
bauerfe Oct 15, 2024
3fd4106
Temporarily add dynapcnn_layer_handler definition again to prevent im…
bauerfe Oct 15, 2024
1d17010
DynapcnnNetworkModule using torch compatible ModuleDict
bauerfe Oct 15, 2024
8c020b2
Update dynapcnn layer tests
bauerfe Oct 15, 2024
5082450
Move layer and network-module instantiation to graph-extractor
bauerfe Oct 15, 2024
70e649e
Make optional for
bauerfe Oct 17, 2024
2d3d32c
doc
Willian-Girao Oct 22, 2024
482499e
Restore original dynapcnn layer attribute names conv_layer and spk_layer
bauerfe Oct 22, 2024
7b63aca
(WIP) Remove dependency on DynapcnnLayerHandler for deployment
bauerfe Oct 22, 2024
5e59539
Update class definitions for ConfigBuilder child classes
bauerfe Oct 23, 2024
98eb42f
Remove dynapcnn_layer_handler.py
bauerfe Oct 23, 2024
62f1e88
Replace `chip_layers_ordering` by layer2core_map.
bauerfe Oct 23, 2024
0d77c73
Update DynapcnnNetwork layer monitoring
bauerfe Oct 23, 2024
59c3a42
Remove now obsolete methods `get_output_core_id` and `get_input_core_…
bauerfe Oct 23, 2024
7887583
Fix minor import issues
bauerfe Oct 24, 2024
7280e8b
Fix import related issues in tests
bauerfe Oct 24, 2024
864a2ff
GraphExtractor: maintain NIRTorch node naming scheme
bauerfe Oct 24, 2024
30b03d0
Edges handler: Make all edge types except weight-neuron optional
bauerfe Oct 24, 2024
82b7c1e
Minor code cleanup in edges handler
bauerfe Oct 24, 2024
eeefbdc
DynapcnnNetwork: Bring back methods `make_config` and `is_compatible_…
bauerfe Oct 24, 2024
704e13e
DynapcnnNetwork: Fix dynapcnn_layers attribute lookup
bauerfe Oct 24, 2024
2f87ef2
DynapcnnNetwork: Add method `has_dvs_layer`
bauerfe Oct 24, 2024
de5ba33
DynapcnnNetworkModule: Try saving `dynapcnn_layers` with integer indi…
bauerfe Oct 24, 2024
d970f18
Fix bugs in mapping
bauerfe Oct 24, 2024
25fb659
Move to utils. New function: . Fix handling of pooling in deployment
bauerfe Oct 24, 2024
a7bbc36
Remove redundant warning for monitoring pooled layers
bauerfe Oct 24, 2024
c9e6caa
minor edit
Willian-Girao Oct 25, 2024
90f9575
Integrate tests for sequential models in test_dynapcnnnetwork
bauerfe Oct 25, 2024
8495b4e
Fix test_auto_mapping
bauerfe Oct 25, 2024
ff68bc1
(WIP - DVS input)
Willian-Girao Oct 25, 2024
6a0bdca
(WIP - DVS input)
Willian-Girao Oct 25, 2024
e9bf8a1
(WIP - DVS input)
Willian-Girao Oct 25, 2024
d0ae8a5
WIP DVS - DVS node not given
Willian-Girao Oct 28, 2024
cb63e6f
WIP DVS - DVS node not given
Willian-Girao Oct 28, 2024
49c013a
WIP DVS - DVS node not given
Willian-Girao Oct 28, 2024
5366513
WIP DVS - DVS node not given
Willian-Girao Oct 28, 2024
460e7c6
WIP DVS - DVS node not given
Willian-Girao Oct 28, 2024
bcaec92
WIP DVS - DVS node not given
Willian-Girao Oct 28, 2024
eedc62d
WIP DVS - DVS node not given
Willian-Girao Oct 28, 2024
e6aafe0
WIP DVS - DVS node not given
Willian-Girao Oct 28, 2024
5003086
WIP DVS - DVS node not given
Willian-Girao Oct 28, 2024
d0c5389
WIP DVS - DVS node not given
Willian-Girao Oct 28, 2024
6a807ef
WIP DVS - DVS node not given
Willian-Girao Oct 28, 2024
3711bc0
WIP DVS - DVS node not given
Willian-Girao Oct 28, 2024
f301599
WIP DVS - DVS node not given
Willian-Girao Oct 28, 2024
0915c1a
WIP DVS - DVS node not given
Willian-Girao Oct 28, 2024
8a6553e
WIP DVS - DVS node not given
Willian-Girao Oct 28, 2024
0fa7bf6
WIP DVS - DVS node not given
Willian-Girao Oct 29, 2024
cf5be86
WIP DVS - DVS node not given
Willian-Girao Oct 29, 2024
acba916
WIP DVS - DVS node not given
Willian-Girao Oct 29, 2024
d7f6be3
WIP DVS - DVS node not given
Willian-Girao Oct 29, 2024
5b3edb2
WIP DVS - DVS node not given
Willian-Girao Oct 29, 2024
c945a41
WIP DVS - DVS node not given
Willian-Girao Oct 29, 2024
22b69c6
WIP - DVS node not given
Willian-Girao Oct 29, 2024
beade18
WIP - DVS node not given
Willian-Girao Oct 29, 2024
89200d9
WIP - DVS node not given
Willian-Girao Oct 29, 2024
94a67c4
WIP - DVS node not given
Willian-Girao Oct 29, 2024
3a5e419
WIP - DVS node not given
Willian-Girao Oct 29, 2024
430af98
DONE - DVS node not given
Willian-Girao Oct 29, 2024
2988ce2
DONE - DVS node given
Willian-Girao Oct 30, 2024
7b37885
WIP - SW forward with DVS
Willian-Girao Oct 30, 2024
c9e5e70
WIP - SW forward with DVS
Willian-Girao Oct 30, 2024
a709914
WIP - SW forward with DVS
Willian-Girao Oct 30, 2024
b9cc18c
WIP - SW forward with DVS
Willian-Girao Oct 30, 2024
3428c49
WIP - SW forward with DVS
Willian-Girao Oct 30, 2024
986bdfb
DONE - SW forward with DVS
Willian-Girao Oct 30, 2024
51c65a9
Graph extraction: Ignore some classes right away
bauerfe Oct 30, 2024
7ef74e8
fix doorbell test
bauerfe Oct 30, 2024
7e22662
More meaningful exceptions for invalid graph structures
bauerfe Oct 30, 2024
ef02dab
Fix dynapcnn layer scaling
bauerfe Oct 30, 2024
1866a7c
Infer shape after removing flatten
bauerfe Oct 30, 2024
b75885d
Fix behavior when entry nodes are removed from graph
bauerfe Oct 30, 2024
2c60c14
Update unit tests
bauerfe Oct 30, 2024
b68e588
DONE - chip deployment with DVS
Willian-Girao Oct 30, 2024
e3ad6a5
notebooks used to validate deployment with DVS
Willian-Girao Oct 30, 2024
b1b4ec7
notebooks used to validate deployment with DVS
Willian-Girao Oct 30, 2024
5835dae
Merge branch 'nonseq' of github.com:synsense/sinabs into nonseq
bauerfe Oct 31, 2024
e30d67f
Merge local diverging changes
bauerfe Oct 31, 2024
b81cbf1
Fix dynapcnnnetwork test
bauerfe Oct 31, 2024
9c9ec26
Ensure functioning across different nirtorch versions
bauerfe Oct 31, 2024
b2e9214
Fix unit tests
bauerfe Oct 31, 2024
a55e4ef
Merge nonseq
bauerfe Oct 31, 2024
a79addf
DONE - DVSLayer->pooling edge
Willian-Girao Oct 31, 2024
577509a
DONE - DVSLayer->pooling edge
Willian-Girao Oct 31, 2024
02ca1b2
(WIP) Minor refactoring of new dvs layer support
bauerfe Oct 31, 2024
c23acf8
Merge branch 'nonseq_dvs' of github.com:synsense/sinabs into nonseq_dvs
bauerfe Oct 31, 2024
b24edb8
Fix TorchGraph handlers
bauerfe Oct 31, 2024
b3e2ad3
Minor changes to DVS part
bauerfe Oct 31, 2024
2970d66
Update `extend_readout_layer` function to work with new DynapcnnNetwork
bauerfe Oct 31, 2024
3d4793f
Update failing unit tests in `test_large_net`
bauerfe Oct 31, 2024
bb0e132
Add `memory_summary` back to DynapcnnNetwork
bauerfe Oct 31, 2024
6b1a927
WIP improving DVS setup
Willian-Girao Nov 1, 2024
fbf7fff
WIP improving DVS setup
Willian-Girao Nov 1, 2024
a9b45f7
DONE - improving DVS setup
Willian-Girao Nov 1, 2024
41cfc04
Merge branch 'nonseq' of https://github.com/synsense/sinabs into nons…
Willian-Girao Nov 1, 2024
327cfa1
(WIP) Merge Conv2d with BatchNorm2d
Willian-Girao Nov 4, 2024
bf8d450
(WIP) Merge Conv2d with BatchNorm2d
Willian-Girao Nov 4, 2024
10b887d
(WIP) Merge Conv2d with BatchNorm2d
Willian-Girao Nov 4, 2024
50991af
(WIP) Merge Conv2d with BatchNorm2d
Willian-Girao Nov 4, 2024
5ad1211
(DONE) Merge Conv2d with BatchNorm2d
Willian-Girao Nov 4, 2024
14f647b
(DONE) Merge Linear with BatchNorm1d
Willian-Girao Nov 4, 2024
1ec5bd0
Minor revisions in nir graph. Fix merge_polarities
bauerfe Nov 5, 2024
4e297e3
GraphExtractor: Tidy up init method
bauerfe Nov 5, 2024
3cdf548
DynapcnnNetwork: Don't send DVS node info to config builder
bauerfe Nov 5, 2024
6469d91
Minor revisions in dynapcnn layer utils
bauerfe Nov 5, 2024
5d8beee
dynapcnn layer utils: minor syntax improvements
bauerfe Nov 5, 2024
641e4ac
(WIP) DVS layer gets index 'dvs'
bauerfe Nov 5, 2024
6c9e3cc
DVS layer info in separate dict
bauerfe Nov 5, 2024
ef2c6d7
Reformat
bauerfe Nov 5, 2024
c196f98
Remove obsolete functions
bauerfe Nov 5, 2024
3fa8bcf
DVSLayer: Remove redundante comparisons
bauerfe Nov 5, 2024
f914374
Edges handler: Proper checks before merging dvs and pooling layers
bauerfe Nov 5, 2024
ccfc5d2
Make deepcopy of provided DVSLayers
bauerfe Nov 5, 2024
f39833a
Merge nonseq
bauerfe Nov 5, 2024
22edb5f
Reduce code redundancy in batch norm merging
bauerfe Nov 5, 2024
4764923
Blacken edges handler. Fix import in graph extractor
bauerfe Nov 5, 2024
799e2c1
Run black on tests
bauerfe Nov 5, 2024
0a45350
Run Black
bauerfe Nov 5, 2024
5139f1c
Remove outdated functions
bauerfe Nov 5, 2024
cf3a318
Handle edge case where snn is a sequential with only a dvslayer
bauerfe Nov 5, 2024
36bceb8
More meaningful exceptions
bauerfe Nov 5, 2024
c5bdfdf
New test file for networks that should fail
bauerfe Nov 6, 2024
291bba7
Ensure network state is maintained when generating dynapcnn network
bauerfe Nov 6, 2024
37d08d1
Test for incorrect node types
bauerfe Nov 6, 2024
ff39502
Fix a range of bugs
bauerfe Nov 6, 2024
72040d2
WIP: Merge dvs pooling layer
bauerfe Nov 6, 2024
1a5eb27
Fix issues for graphs with DVSLayer and batch norm.
bauerfe Nov 8, 2024
beca062
Fix handling of isolated layers.
bauerfe Nov 8, 2024
660d102
Fix issues related to DVS. Ensure only IAFSqueeze is used in Dynapcnn…
bauerfe Nov 8, 2024
afab026
Correctly handle dvs_input False when dvs layer is provided: Disable …
bauerfe Nov 8, 2024
eb173f1
Improve docstring of DynapcnnNetwork to explain behavior of `dvs_input`.
bauerfe Nov 8, 2024
189cfb8
WIP: Fix DVS input unit tests.
bauerfe Nov 8, 2024
a2bbb4c
(WIP): Fix doorbell tests.
bauerfe Nov 8, 2024
eec090d
Fix doorbell test
bauerfe Nov 8, 2024
876962f
Sort dynapcnn layers in network by key
bauerfe Nov 8, 2024
9460edc
Further bugfixes and improved readability of dynapcnn network repr.
bauerfe Nov 8, 2024
8897d24
Fix monitoring. Enable monitoring exit layers with -1
bauerfe Nov 8, 2024
2680ac1
Properly copy DVSLayer when instantiating DynapcnnNetwork. Fix DVS in…
bauerfe Nov 12, 2024
1cb7b5c
Reintroduce missing methods of DynapcnnNetwork: `reset_states`, `zero…
bauerfe Nov 12, 2024
f1e308d
Support model with only DVS
bauerfe Nov 12, 2024
b8827fa
Fix unit test `test_single_neuron....py`
bauerfe Nov 12, 2024
4d46656
Fix speckmini unit test
bauerfe Nov 12, 2024
681987f
Fix neuron leak unit test
bauerfe Nov 12, 2024
9cdb712
Provide more meaningful error when specific device is not found
bauerfe Nov 12, 2024
57333ce
Remove duplicate test
bauerfe Nov 12, 2024
c0291e1
Remove obsolete TODO
bauerfe Nov 12, 2024
a794647
Minor fixes.
bauerfe Nov 12, 2024
10e1bab
Undo erroneous comment
bauerfe Nov 12, 2024
205077d
Run black and isort
bauerfe Nov 12, 2024
db0b9b5
Fix 'deque' type hint
bauerfe Nov 12, 2024
9fbdf81
Try resolving "Subscripted generics cannot be used with class and ins…
bauerfe Nov 12, 2024
c23ad4e
Re-run black
bauerfe Nov 12, 2024
1954b61
Fix initialization issue
bauerfe Nov 13, 2024
9a194ad
Fix non-deterministic dynapcnn-network test
bauerfe Nov 13, 2024
347e225
Improve numerical robustness of input diff hook unit test
bauerfe Nov 13, 2024
830 changes: 830 additions & 0 deletions examples/dynapcnn_network/snn_deployment.ipynb

Large diffs are not rendered by default.

308 changes: 308 additions & 0 deletions sinabs/backend/dynapcnn/NIRGraphExtractor.py
@@ -0,0 +1,308 @@
# author : Willian Soares Girao
# contact : [email protected]

import torch, sinabs, nirtorch, copy
import torch.nn as nn
from typing import Tuple, Dict, List, Union
from .utils import topological_sorting


class NIRtoDynapcnnNetworkGraph():
    def __init__(self, spiking_model: nn.Module, dummy_input: torch.tensor):
        """Class implementing the extraction of the computational graph from `spiking_model`, where
        each node represents a layer in the model and the list of edges represents how the data flows
        between the layers.

        Parameters
        ----------
        - spiking_model (nn.Module): a sinabs-compatible spiking network.
        - dummy_input (torch.tensor): a random input sample to be fed through the model to acquire both
            the computational graph (via `nirtorch`) and the I/O shapes of each node. It is a 4-D tensor
            of shape `(batch, channels, height, width)`.
        """

        # extract the computational graph.
        nir_graph = nirtorch.extract_torch_graph(spiking_model, dummy_input, model_name=None).ignore_tensors()

        # convert the NIR representation into a list of edges with nodes represented as integers.
        self._edges_list, self._name_2_indx_map, self._entry_nodes = self._get_edges_from_nir(nir_graph)

        # recover the associated `nn.Module` (layer) of each node.
        self.modules_map = self._get_named_modules(spiking_model)

        # retrieve the I/O shape of each node's module.
        self._nodes_io_shapes = self._get_nodes_io_shapes(dummy_input)

    ####################################################### Public Methods #######################################################

    @property
    def entry_nodes(self) -> List[int]:
        return self._entry_nodes

    @property
    def get_edges_list(self):
        return self._edges_list

    @property
    def name_2_indx_map(self):
        return self._name_2_indx_map

    @property
    def nodes_io_shapes(self):
        return self._nodes_io_shapes

    def remove_ignored_nodes(self, default_ignored_nodes: tuple) -> Tuple[list, dict]:
        """Recreates the edges list based on layers that `DynapcnnNetwork` will ignore: edges
        into a dropped node are redirected to the dropped node's target, and edges out of a
        dropped node are redirected to originate from the dropped node's source.

        Parameters
        ----------
        - default_ignored_nodes (tuple): the layer types (`nn.Module` subclasses) that should be removed from the graph.

        Returns
        ----------
        - remapped_edges (list): the new list of edges after nodes flagged by `default_ignored_nodes` have been removed.
        - remapped_nodes (dict): updated node IDs after nodes flagged by `default_ignored_nodes` have been removed.
        """
        edges = copy.deepcopy(self._edges_list)
        parsed_edges = []
        removed_nodes = []

        # remove ignored nodes from the edges.
        for edge_idx in range(len(edges)):
            _src = edges[edge_idx][0]
            _trg = edges[edge_idx][1]

            if isinstance(self.modules_map[_src], default_ignored_nodes):
                removed_nodes.append(_src)
                # all edges where node '_src' is the target now have node '_trg' as their target.
                for edge in edges:
                    if edge[1] == _src:
                        new_edge = (edge[0], _trg)
            elif isinstance(self.modules_map[_trg], default_ignored_nodes):
                removed_nodes.append(_trg)
                # all edges where node '_trg' is the source now have node '_src' as their source.
                for edge in edges:
                    if edge[0] == _trg:
                        new_edge = (_src, edge[1])
            else:
                new_edge = (_src, _trg)

            if new_edge not in parsed_edges:
                parsed_edges.append(new_edge)

        removed_nodes = list(set(removed_nodes))

        # remap node indices.
        remapped_nodes = {}
        for node_indx, __ in self.modules_map.items():
            _ = [x for x in removed_nodes if node_indx > x]
            remapped_nodes[node_indx] = node_indx - len(_)

        for x in removed_nodes:
            del remapped_nodes[x]

        # remap node indices in the parsed edges.
        remapped_edges = []
        for edge in parsed_edges:
            remapped_edges.append((remapped_nodes[edge[0]], remapped_nodes[edge[1]]))

        return remapped_edges, remapped_nodes
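
    # Illustrative example (editor's sketch, not part of the original file): with
    # edges [(0, 1), (1, 2)] and node 1 an ignored layer type (e.g. `nn.Flatten`),
    # the loop above first rewires the edge list to [(0, 2)]; the remaining nodes
    # are then remapped as {0: 0, 2: 1}, so the method returns edges [(0, 1)].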

    # TODO - it would be good if I/O shapes were returned by the NIR graph.
    def get_node_io_shapes(self, node: int) -> Tuple[torch.Size, torch.Size]:
        """Returns the shapes of the I/O tensors of `node`.

        Returns
        ----------
        - input shape (torch.Size): shape of the input tensor to `node`.
        - output shape (torch.Size): shape of the output tensor from `node`.
        """
        return self._nodes_io_shapes[node]['input'], self._nodes_io_shapes[node]['output']

    ####################################################### Private Methods #######################################################

    def _get_edges_from_nir(self, nir_graph: nirtorch.graph.Graph) -> Tuple[List[Tuple[int, int]], Dict[str, int], List[int]]:
        """Standardizes the representation of `nirtorch.graph.Graph` into a list of edges (`Tuple[int, int]`) where
        each node in `nir_graph` is represented by an integer (with the source node starting at `0`).

        Parameters
        ----------
        - nir_graph (nirtorch.graph.Graph): a NIR graph representation of `spiking_model`.

        Returns
        ----------
        - edges_list (list): tuples describing the connections between layers in `spiking_model`.
        - name_2_indx_map (dict): `key` is the original variable name of a layer in `spiking_model` and `value`
            is an integer representing the layer in a standard format.
        - entry_nodes (list): IDs of nodes acting as entry points for the network (i.e., receiving external input).
        """
        edges_list = []
        name_2_indx_map = {}
        idx_counter = 0  # TODO maybe make sure the input node from nir always gets assigned `0`.

        nodes_IDs = [0]

        for src_node in nir_graph.node_list:
            # source node.
            if src_node.name not in name_2_indx_map:
                name_2_indx_map[src_node.name] = idx_counter
                idx_counter += 1

                nodes_IDs.append(idx_counter)

            for trg_node in src_node.outgoing_nodes:
                # target node.
                if trg_node.name not in name_2_indx_map:
                    name_2_indx_map[trg_node.name] = idx_counter
                    idx_counter += 1

                    nodes_IDs.append(idx_counter)

                edges_list.append((name_2_indx_map[src_node.name], name_2_indx_map[trg_node.name]))

        # find the entry nodes of the graph.
        all_sources = [x[0] for x in edges_list]
        all_targets = [x[1] for x in edges_list]

        entry_nodes = list(set(all_sources) - set(all_targets))

        return edges_list, name_2_indx_map, entry_nodes
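
    # Illustrative example (editor's sketch, not part of the original file): for a
    # three-layer sequential model this returns something like
    # edges_list=[(0, 1), (1, 2)], name_2_indx_map={'0': 0, '1': 1, '2': 2} and
    # entry_nodes=[0], since node 0 appears as a source but never as a target.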

    def _get_named_modules(self, model: nn.Module) -> Dict[int, nn.Module]:
        """Finds the layer in `model` associated with each node in the graph.

        Parameters
        ----------
        - model (nn.Module): the `spiking_model` used as argument to the class instance.

        Returns
        ----------
        - modules_map (dict): the mapping between a node (`key` as an `int`) and its module (`value` as a `nn.Module`).
        """
        modules_map = {}

        if isinstance(model, nn.Sequential):  # TODO shouldn't accept `nn.Sequential` any longer.
            # access modules via `.named_modules()`.
            for name, module in model.named_modules():
                if name != '':
                    # skip the module itself.
                    modules_map[self._name_2_indx_map[name]] = module

        elif isinstance(model, nn.Module):
            # access modules via `.named_children()`.
            for name, module in model.named_children():
                modules_map[self._name_2_indx_map[name]] = module

        else:
            raise ValueError('Either a nn.Sequential or a nn.Module is required.')

        return modules_map
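
    # Illustrative example (editor's sketch, not part of the original file): for
    # `nn.Sequential(nn.Conv2d(2, 4, 3), sl.IAFSqueeze(batch_size=1))`, the names
    # yielded by `named_modules()` (such as '0' and '1') are looked up in
    # `self._name_2_indx_map`, giving modules_map={0: <Conv2d>, 1: <IAFSqueeze>}.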

    # TODO - it would be good if I/O shapes were returned by the NIR graph.
    def _get_nodes_io_shapes(self, input_dummy: torch.tensor) -> Dict[int, Dict[str, torch.Size]]:
        """Iteratively calls the forward method of each `nn.Module` (i.e., a layer/node in the graph) using the topologically
        sorted nodes extracted from the computational graph of the model being parsed.

        Parameters
        ----------
        - input_dummy (torch.tensor): a random sample tensor of the sort of input being fed to the network.

        Returns
        ----------
        - nodes_io_map (dict): a dictionary mapping nodes to their I/O shapes.
        """
        nodes_io_map = {}

        # topological sorting of the graph.
        temp_edges_list = copy.deepcopy(self._edges_list)
        for node in self._entry_nodes:
            temp_edges_list.append(('input', node))
        sorted_nodes = topological_sorting(temp_edges_list)

        # propagate inputs through the nodes.
        for node in sorted_nodes:

            if isinstance(self.modules_map[node], sinabs.layers.merge.Merge):
                # find the arguments of `Merge` (at this point the outputs of both of its inputs have to have been calculated).
                arg1, arg2 = self._find_merge_arguments(node)

                # retrieve the output tensors of the arguments.
                arg1_out = nodes_io_map[arg1]['output']
                arg2_out = nodes_io_map[arg2]['output']

                # TODO - this is currently a limitation imposed by the validation checks Speck performs on a
                # configuration: it requires two different input sources to a core to have the same output shape.
                if arg1_out.shape != arg2_out.shape:
                    raise ValueError(f'Layer `sinabs.layers.merge.Merge` (node {node}) requires two input tensors with the same shape: arg1.shape {arg1_out.shape} differs from arg2.shape {arg2_out.shape}.')

                # forward the inputs through the node.
                _output = self.modules_map[node](arg1_out, arg2_out)

                # save the node's I/O tensors.
                nodes_io_map[node] = {'input': arg1_out, 'output': _output}

            else:

                if node in self._entry_nodes:
                    # forward the input dummy through the node.
                    _output = self.modules_map[node](input_dummy)

                    # save the node's I/O tensors.
                    nodes_io_map[node] = {'input': input_dummy, 'output': _output}

                else:
                    # find the node generating the input to be used.
                    input_node = self._find_source_of_input_to(node)
                    _input = nodes_io_map[input_node]['output']

                    # forward the input through the node.
                    _output = self.modules_map[node](_input)

                    # save the node's I/O tensors.
                    nodes_io_map[node] = {'input': _input, 'output': _output}

        # replace the I/O tensor information by its shape information.
        for node, io in nodes_io_map.items():
            nodes_io_map[node]['input'] = io['input'].shape
            nodes_io_map[node]['output'] = io['output'].shape

        return nodes_io_map
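
    # Illustrative example (editor's sketch, not part of the original file): with a
    # (1, 2, 34, 34) dummy input and node 0 being a Conv2d(2, 4, 3) entry node,
    # nodes_io_map[0] ends up as {'input': torch.Size([1, 2, 34, 34]),
    # 'output': torch.Size([1, 4, 32, 32])}.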

    def _find_source_of_input_to(self, node: int) -> int:
        """Finds the first edge `(X, node)` and returns `X`.

        Parameters
        ----------
        - node (int): the node in the computational graph for which we wish to find the input source (either another node in the
            graph or the original input itself to the network).

        Returns
        ----------
        - input source (int): ID of the node in the computational graph providing the input to `node`. If `node` is
            receiving outside input (i.e., it is a starting node), the return will be -1. For example, this will be the case
            when a network with two independent branches (each starting from a different "input node") merges along the computational graph.
        """
        for edge in self._edges_list:
            if edge[1] == node:
                return edge[0]

        return -1

    def _find_merge_arguments(self, merge_node: int) -> Tuple[int, int]:
        """A `Merge` layer receives two inputs. Returns the two inputs to `merge_node` representing a `Merge` layer.

        Returns
        ----------
        - args (tuple): the IDs of the nodes that provide the input arguments to a `Merge` layer.
        """
        args = []

        for edge in self._edges_list:
            if edge[1] == merge_node:
                args.append(edge[0])

        if len(args) == 2:
            return tuple(args)
        else:
            raise ValueError(f'Number of arguments found for `Merge` node {merge_node} is {len(args)} (should be 2).')
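
A minimal usage sketch of the extractor above (an editor's illustration, not part of this PR; it assumes a small sequential model built from standard `torch.nn` and `sinabs.layers` modules):

import torch
import torch.nn as nn
import sinabs.layers as sl

from sinabs.backend.dynapcnn.NIRGraphExtractor import NIRtoDynapcnnNetworkGraph

# toy SNN: convolution -> spiking activation -> pooling.
snn = nn.Sequential(
    nn.Conv2d(2, 8, kernel_size=3, bias=False),
    sl.IAFSqueeze(batch_size=1),
    nn.AvgPool2d(2),
)

# 4-D dummy input of shape (batch, channels, height, width).
graph = NIRtoDynapcnnNetworkGraph(snn, torch.randn(1, 2, 128, 128))

print(graph.get_edges_list)          # e.g. [(0, 1), (1, 2)]
print(graph.entry_nodes)             # e.g. [0]
print(graph.get_node_io_shapes(0))   # I/O shapes of the first node
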
12 changes: 10 additions & 2 deletions sinabs/backend/dynapcnn/__init__.py
@@ -1,5 +1,13 @@
-from .dynapcnn_network import (  # second one for compatibility purposes
-    DynapcnnCompatibleNetwork,
+from .dynapcnn_network import (
     DynapcnnNetwork,
 )
+
+from .dynapcnn_layer import (
+    DynapcnnLayer,
+)
+
+from .dynapcnn_layer_handler import (
+    DynapcnnLayerHandler,
+)
+
 from .dynapcnn_visualizer import DynapcnnVisualizer
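
The `topological_sorting` helper imported from `.utils` in NIRGraphExtractor.py is not shown in this diff. A minimal sketch of what such a helper could look like (Kahn's algorithm over the `(source, target)` edge tuples; an editor's assumption, not the PR's actual implementation):

from collections import defaultdict, deque
from typing import List, Tuple


def topological_sorting(edges: List[Tuple]) -> List:
    """Kahn's algorithm: return the graph's nodes in topological order."""
    successors = defaultdict(set)
    in_degree = defaultdict(int)
    nodes = set()
    for src, trg in edges:
        if trg not in successors[src]:
            successors[src].add(trg)
            in_degree[trg] += 1
        nodes.update((src, trg))

    # start from the nodes without incoming edges, such as the artificial
    # 'input' node that `_get_nodes_io_shapes` prepends to the edge list.
    queue = deque(n for n in nodes if in_degree[n] == 0)
    ordering = []
    while queue:
        node = queue.popleft()
        ordering.append(node)
        for trg in successors[node]:
            in_degree[trg] -= 1
            if in_degree[trg] == 0:
                queue.append(trg)

    # drop the artificial 'input' node; callers iterate over real nodes only.
    return [n for n in ordering if n != 'input']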