Commit

Merge branch 'main' of github.com:mhhennig/spikeinterface

mhhennig committed Jul 19, 2024
2 parents 2e141a9 + dc9a4fa commit 662b252
Showing 60 changed files with 2,250 additions and 498 deletions.
15 changes: 12 additions & 3 deletions doc/api.rst
Original file line number Diff line number Diff line change
Expand Up @@ -159,6 +159,8 @@ spikeinterface.preprocessing
.. autofunction:: common_reference
.. autofunction:: correct_lsb
.. autofunction:: correct_motion
.. autofunction:: get_motion_presets
.. autofunction:: get_motion_parameters_preset
.. autofunction:: depth_order
.. autofunction:: detect_bad_channels
.. autofunction:: directional_derivative
Expand Down Expand Up @@ -324,15 +326,22 @@ spikeinterface.curation
------------------------
.. automodule:: spikeinterface.curation

.. autoclass:: CurationSorting
.. autoclass:: MergeUnitsSorting
.. autoclass:: SplitUnitSorting
.. autofunction:: apply_curation
.. autofunction:: get_potential_auto_merge
.. autofunction:: find_redundant_units
.. autofunction:: remove_redundant_units
.. autofunction:: remove_duplicated_spikes
.. autofunction:: remove_excess_spikes

Deprecated
~~~~~~~~~~
.. automodule:: spikeinterface.curation
:noindex:

.. autofunction:: apply_sortingview_curation
.. autoclass:: CurationSorting
.. autoclass:: MergeUnitsSorting
.. autoclass:: SplitUnitSorting


spikeinterface.generation
Expand Down
1 change: 1 addition & 0 deletions doc/how_to/drift_with_lfp.rst
Original file line number Diff line number Diff line change
Expand Up @@ -64,6 +64,7 @@ Preprocessing
Contrary to the **dredge_ap** approach, which needs detected peaks and peak locations, the **dredge_lfp**
method estimates the motion directly from the traces.
Importantly, the method requires some additional pre-processing steps:

* ``bandpass_filter``: to "focus" the signal on a particular band
* ``phase_shift``: to compensate for the sampling misalignment
* ``resample``: to further reduce the sampling frequency of the signal and speed up the computation. The sampling frequency of the estimated motion will be the same as the resampling frequency. Here we choose 250 Hz, which corresponds to a sampling interval of 4 ms.
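The steps above can be sketched in pure Python to show the pipeline order and the resulting temporal resolution of the motion estimate (the step names mirror the SpikeInterface preprocessing functions; the numbers follow the text, and this is an illustration rather than a runnable preprocessing chain):

```python
# Order of the preprocessing steps described above.
steps = ["bandpass_filter", "phase_shift", "resample"]

# The motion estimate is sampled at the resampling frequency,
# so a 250 Hz resample gives a 4 ms sampling interval.
resample_hz = 250
motion_interval_ms = 1000.0 / resample_hz

print(motion_interval_ms)  # 4.0
```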
Expand Down
1,474 changes: 1,353 additions & 121 deletions doc/how_to/handle_drift.rst

Large diffs are not rendered by default.

Binary file removed doc/how_to/handle_drift_files/handle_drift_13_0.png
Binary file removed doc/how_to/handle_drift_files/handle_drift_13_1.png
Binary file removed doc/how_to/handle_drift_files/handle_drift_13_2.png
Binary file removed doc/how_to/handle_drift_files/handle_drift_15_0.png
Binary file removed doc/how_to/handle_drift_files/handle_drift_15_1.png
Binary file removed doc/how_to/handle_drift_files/handle_drift_15_2.png
Binary file modified doc/how_to/handle_drift_files/handle_drift_17_1.png
4 changes: 2 additions & 2 deletions doc/index.rst
Original file line number Diff line number Diff line change
Expand Up @@ -14,12 +14,12 @@ state-of-the-art spike sorters, post-process and curate the output, compute qual

.. warning::

Version 0.101.0 introduces a major API improvement: the :code:`SortingAnalyzer`.`
Version 0.101.0 introduces a major API improvement: the :code:`SortingAnalyzer`.
To read more about the motivations, checkout the
`enhancement proposal <https://github.com/SpikeInterface/spikeinterface/issues/2282>`_.
Learn how to :ref:`update your code here <tutorials/waveform_extractor_to_sorting_analyzer:From WaveformExtractor to SortingAnalyzer>`.
To read more about the :code:`SortingAnalyzer`, please refer to the
:ref:`core <modules/core:SortingAnalyzer>` and :ref:`postprocessing <modules/postprocessing>` module
:ref:`core <modules/core:SortingAnalyzer>` and :ref:`postprocessing <modules/postprocessing:Postprocessing module>` module
documentation.


Expand Down
22 changes: 17 additions & 5 deletions doc/modules/curation.rst
Original file line number Diff line number Diff line change
Expand Up @@ -261,11 +261,23 @@ format is the definition; the second part of the format is manual action):
}
.. note::
The curation format was recently introduced (v0.101.0), and we are still working on
properly integrating it into the SpikeInterface ecosystem.
Soon there will be functions available, in the curation module, to apply this
standardized curation format to ``SortingAnalyzer`` and ``BaseSorting`` objects.
The curation format can be loaded into a dictionary and directly applied to
a ``BaseSorting`` or ``SortingAnalyzer`` object using the :py:func:`~spikeinterface.curation.apply_curation` function.

.. code-block:: python

    import json

    from spikeinterface.curation import apply_curation

    # load the curation JSON file
    curation_json = "path/to/curation.json"
    with open(curation_json, 'r') as f:
        curation_dict = json.load(f)

    # apply the curation to the sorting output
    clean_sorting = apply_curation(sorting, curation_dict=curation_dict)

    # apply the curation to the sorting analyzer
    clean_sorting_analyzer = apply_curation(sorting_analyzer, curation_dict=curation_dict)
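For intuition, here is a minimal pure-Python sketch of what applying such a curation dictionary amounts to for a plain list of unit ids. The key names (``removed_units``, ``merge_unit_groups``) follow the curation format described above, but the real ``apply_curation`` operates on ``BaseSorting``/``SortingAnalyzer`` objects, so treat this only as an illustration:

```python
import json

# A toy curation dictionary, as it might be loaded from a JSON file.
curation_dict = json.loads('{"removed_units": [3], "merge_unit_groups": [[1, 2]]}')

unit_ids = [1, 2, 3, 4]

# Drop removed units.
kept = [u for u in unit_ids if u not in curation_dict["removed_units"]]

# Collapse each merge group onto its first member.
for group in curation_dict["merge_unit_groups"]:
    kept = [u for u in kept if u not in group[1:]]

print(kept)  # [1, 4]
```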
Using the ``SpikeInterface GUI``
Expand Down
4 changes: 2 additions & 2 deletions doc/modules/postprocessing.rst
Original file line number Diff line number Diff line change
Expand Up @@ -13,12 +13,12 @@ Extensions as AnalyzerExtensions
--------------------------------

There are several postprocessing tools available, and all of them are implemented as a
:py:class:`~spikeinterface.core.ResultExtension`. If the :code:`SortingAnalyzer` is saved to disk, all computations on
:py:class:`~spikeinterface.core.AnalyzerExtension`. If the :code:`SortingAnalyzer` is saved to disk, all computations on
top of it will be saved alongside the :code:`SortingAnalyzer` itself (sub folder, zarr path or sub dict).
This workflow is convenient for retrieval of time-consuming computations (such as pca or spike amplitudes) when reloading a
:code:`SortingAnalyzer`.

:py:class:`~spikeinterface.core.ResultExtension` objects are tightly connected to the
:py:class:`~spikeinterface.core.AnalyzerExtension` objects are tightly connected to the
parent :code:`SortingAnalyzer` object, so that operations done on the :code:`SortingAnalyzer`, such as saving,
loading, or selecting units, will be automatically applied to all extensions.

Expand Down
10 changes: 8 additions & 2 deletions doc/releases/0.101.0.rst
Original file line number Diff line number Diff line change
Expand Up @@ -8,14 +8,15 @@ Estimated: 19th July 2024
Main changes:

* Implementation of `SortingAnalyzer` (#2398)
* Improved auto-merging functions and enable `SortingAnalyzer` to merge units and extensions (#3043, #3154, #3203)
* Improved auto-merging functions and enable `SortingAnalyzer` to merge units and extensions (#3043, #3154, #3203, #3208)
* Added framework for hybrid recording generation (#2436, #2769, #2857)
* Refactored motion correction with the `Motion` class and the DREDGE AP and LFP methods (#2915, #3062)
* Extended benchmarking of `sortingcomponents` (#2501, #2518, #2586, #2811, #2959)
* Added a powerful drift generator module (#2683)

core:

* Implement a simple system to have backward compatibility for Analyzer extension (#3215)
* Units aggregation preserve unit ids of aggregated sorters (#3180)
* Better error message for `BaseExtractor.load()` (#3170)
* Saving provenance with paths relative to folder (#3165)
Expand Down Expand Up @@ -98,6 +99,7 @@ core:

preprocessing:

* Update doc handle drift + better preset (#3232)
* Remove name class attribute in preprocessing module (#3200)
* Add option to use ref_channel_ids in global common reference (#3139)
* Adding option to overwrite while doing correct_motion and saving to a folder (#3088)
Expand Down Expand Up @@ -150,6 +152,7 @@ extractors:

sorters:

* Patch for SC2 after release AND bugs in auto merge (#3213)
* Improve error log to json in run_sorter (#3057)
* Add support for kilosort>=4.0.12 (#3055)
* Make sure we check `is_filtered()` rather than bound method during run basesorter (#3037)
Expand Down Expand Up @@ -179,7 +182,7 @@ sorters:

postprocessing:

* Fix pca transform error (#3178)
* Fix pca transform error (#3178, #3224)
* Fix `spike_vector_to_indices()` (#3048)
* Remove un-used argument (#3021)
* Optimize numba cross-correlation and extend `correlograms.py` docstrings and tests (#3017)
Expand All @@ -199,6 +202,7 @@ qualitymetrics:

curation:

* Add `apply_curation()` (#3208)
* Port auto-merge changes and refactor (#3203)
* Implement `apply_merges_to_sorting()` (#3154)
* Proposal of format to hold the manual curation information (#2933)
Expand Down Expand Up @@ -249,6 +253,7 @@ generation:

sortingcomponents:

* Fix estimate_motion when time_vector is set (#3218)
* Fix select peaks (#3132)
* Dredge lfp and dredge ap (#3062)
* Use "available" for memory caching (#3008)
Expand All @@ -269,6 +274,7 @@ sortingcomponents:

documentation:

* Analyzer docstring cleanup (#3220)
* Eradicate sphinx warnings (#3188)
* Convert doc references from `wf_extractor` -> `sorting_analyzer` (#3185)
* Add explainer of compute always computing in the analyzer (vs WaveformExtractor behavior) documentation (#3173)
Expand Down
85 changes: 52 additions & 33 deletions examples/how_to/handle_drift.py
Original file line number Diff line number Diff line change
Expand Up @@ -2,12 +2,12 @@
# jupyter:
# jupytext:
# cell_metadata_filter: -all
# formats: py,ipynb
# formats: py:light,ipynb
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.6
# jupytext_version: 1.16.2
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
Expand Down Expand Up @@ -55,6 +55,8 @@

import spikeinterface.full as si

from spikeinterface.preprocessing import get_motion_parameters_preset, get_motion_presets

# -

base_folder = Path("/mnt/data/sam/DataSpikeSorting/imposed_motion_nick")
Expand All @@ -70,6 +72,7 @@


def preprocess_chain(rec):
rec = rec.astype('float32')
rec = si.bandpass_filter(rec, freq_min=300.0, freq_max=6000.0)
rec = si.common_reference(rec, reference="global", operator="median")
return rec
Expand All @@ -79,33 +82,46 @@ def preprocess_chain(rec):

job_kwargs = dict(n_jobs=40, chunk_duration="1s", progress_bar=True)

# ### Run motion correction with one function!
#
#
# Correcting for drift is easy! You just need to run a single function.
# We will try this function with 3 presets.
# We will try this function with some presets.
#
# Internally a preset is a dictionary of dictionaries containing all parameters for every step.
#
# Here we also save the motion correction results into a folder to be able to load them later.

# internally, we can explore a preset like this
# every parameter can be overwritten at runtime
from spikeinterface.preprocessing.motion import motion_options_preset
# ### preset and parameters
#
# Motion correction has several steps, and every step can be controlled by a method and related parameters.
#
# A preset is a nested dict that contains these methods/parameters.

preset_keys = get_motion_presets()
preset_keys

one_preset_params = get_motion_parameters_preset("kilosort_like")
one_preset_params
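Since a preset is just a nested dict, individual parameters can be overridden at runtime. A minimal pure-Python sketch of that override pattern (the section and key names here are illustrative, not the real preset schema):

```python
# Hypothetical preset: a nested dict of step -> parameters.
preset_params = {"estimate_motion": {"bin_s": 2.0, "rigid": True}}

# Override a single parameter while keeping the rest of the preset.
overrides = {"estimate_motion": {"bin_s": 1.0}}
for step, params in overrides.items():
    preset_params.setdefault(step, {}).update(params)

print(preset_params["estimate_motion"])  # {'bin_s': 1.0, 'rigid': True}
```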

# ### Run motion correction with one function!
#
# Correcting for drift is easy! You just need to run a single function.
# We will try this function with some presets.
#
# Here we also save the motion correction results into a folder to be able to load them later.

motion_options_preset["kilosort_like"]
# let's try these presets
some_presets = ("rigid_fast", "kilosort_like", "nonrigid_accurate", "nonrigid_fast_and_accurate", "dredge", "dredge_fast")

# let's try these 3 presets
some_presets = ("rigid_fast", "kilosort_like", "nonrigid_accurate")
# some_presets = ('kilosort_like', )

# compute motion with 3 presets
# compute motion with these presets
for preset in some_presets:
print("Computing with", preset)
folder = base_folder / "motion_folder_dataset1" / preset
if folder.exists():
shutil.rmtree(folder)
recording_corrected, motion_info = si.correct_motion(
rec, preset=preset, folder=folder, output_motion_info=True, **job_kwargs
recording_corrected, motion, motion_info = si.correct_motion(
rec, preset=preset, folder=folder, output_motion=True, output_motion_info=True, **job_kwargs
)

# ### Plot the results
Expand All @@ -127,11 +143,20 @@ def preprocess_chain(rec):
# The motion vector is computed for different depths.
# The corrected peak locations are flatter than the rigid case.
# The motion vector map is still a bit noisy at some depths (e.g. around 1000um).
# * The preset **nonrigid_accurate** seems to give the best results on this recording.
# The motion vector seems less noisy globally, but it is not "perfect" (see at the top of the probe 3200um to 3800um).
# Also note that in the first part of the recording before the imposed motion (0-600s) we clearly have a non-rigid motion:
# * The preset **dredge** is the official DREDge re-implementation in spikeinterface.
# It gives the best results: very fast and smooth motion estimation, with very little noise.
# This method also captures the non-rigid motion gradient along the probe very well.
# The best method on the market at the moment.
# An enormous thanks to the dream team: Charlie Windolf, Julien Boussard, Erdem Varol, Liam Paninski.
# Note that in the first part of the recording before the imposed motion (0-600s) we clearly have a non-rigid motion:
# the upper part of the probe (2000-3000um) experiences some drift, but the lower part (0-1000um) is relatively stable.
# The method defined by this preset is able to capture this.
# * The preset **nonrigid_accurate** is the ancestor of "dredge" before it was published.
# It seems to give good results on this recording, but with a bit more noise.
# * The preset **dredge_fast** is similar to dredge but faster (using grid_convolution).
# * The preset **nonrigid_fast_and_accurate** is a variant of nonrigid_accurate but faster (using grid_convolution).
#
#

for preset in some_presets:
# load
Expand All @@ -140,8 +165,8 @@ def preprocess_chain(rec):

# and plot
fig = plt.figure(figsize=(14, 8))
si.plot_motion(
motion_info,
si.plot_motion_info(
motion_info, rec,
figure=fig,
depth_lim=(400, 600),
color_amplitude=True,
Expand Down Expand Up @@ -173,6 +198,8 @@ def preprocess_chain(rec):
folder = base_folder / "motion_folder_dataset1" / preset
motion_info = si.load_motion_info(folder)

motion = motion_info["motion"]

fig, axs = plt.subplots(ncols=2, figsize=(12, 8), sharey=True)

ax = axs[0]
Expand All @@ -190,24 +217,16 @@ def preprocess_chain(rec):

color_kargs = dict(alpha=0.2, s=2, c=c)

loc = motion_info["peak_locations"]
peak_locations = motion_info["peak_locations"]
# color='black',
ax.scatter(loc["x"][mask][sl], loc["y"][mask][sl], **color_kargs)

loc2 = correct_motion_on_peaks(
motion_info["peaks"],
motion_info["peak_locations"],
rec.sampling_frequency,
motion_info["motion"],
motion_info["temporal_bins"],
motion_info["spatial_bins"],
direction="y",
)
ax.scatter(peak_locations["x"][mask][sl], peak_locations["y"][mask][sl], **color_kargs)

peak_locations2 = correct_motion_on_peaks(peaks, peak_locations, motion, rec)

ax = axs[1]
si.plot_probe_map(rec, ax=ax)
# color='black',
ax.scatter(loc2["x"][mask][sl], loc2["y"][mask][sl], **color_kargs)
ax.scatter(peak_locations2["x"][mask][sl], peak_locations2["y"][mask][sl], **color_kargs)

ax.set_ylim(400, 600)
fig.suptitle(f"{preset=}")
Expand All @@ -228,7 +247,7 @@ def preprocess_chain(rec):
keys = run_times[0].keys()

bottom = np.zeros(len(run_times))
fig, ax = plt.subplots()
fig, ax = plt.subplots(figsize=(14, 6))
for k in keys:
rtimes = np.array([rt[k] for rt in run_times])
if np.any(rtimes > 0.0):
Expand Down
5 changes: 3 additions & 2 deletions pyproject.toml
Original file line number Diff line number Diff line change
@@ -1,6 +1,6 @@
[project]
name = "spikeinterface"
version = "0.101.0rc1"
version = "0.101.0"
authors = [
{ name="Alessio Buccino", email="[email protected]" },
{ name="Samuel Garcia", email="[email protected]" },
Expand All @@ -25,7 +25,7 @@ dependencies = [
"tqdm",
"zarr>=2.16,<2.18",
"neo>=0.13.0",
"probeinterface>=0.2.22",
"probeinterface>=0.2.23",
"packaging",
]

Expand Down Expand Up @@ -139,6 +139,7 @@ test_extractors = [

test_preprocessing = [
"ibllib>=2.36.0", # for IBL
"torch",
]


Expand Down