With a bit more understanding of what spike detection is and how it can be done, we are now going to do it ourselves using the SpikeProcessing class.
We begin by revisiting example11() and then make the necessary changes. Our goal is to replicate the result shown in Figure 16.
import numpy as np
import matplotlib.pyplot as plt

def example11():
    from_in_s = 1095
    to_in_s = 1123
    sampling_frequency = 5000
    # load real, already filtered neural data
    filteredSignal = np.load("FilteredData.npy")
    # cut out the chunk between 1095 s and 1123 s
    filteredSignalChunk = filteredSignal[int(from_in_s * sampling_frequency):int(to_in_s * sampling_frequency)]
    plt.plot(filteredSignalChunk)
In example11() we load recorded and already filtered neural data. We then cut out a chunk between positions 1095 and 1123 (in seconds) and plot the signal. To keep the code as minimal as possible we do not set any axis labels here, but the y-axis would show the amplitude in micro- or millivolts and the x-axis would be the time in seconds or milliseconds.
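If you do want a time axis, a minimal sketch using the variables from example11() could look like this (the microvolt unit is an assumption; the original example does not state the unit):

# Optional sketch: plot the chunk against a time axis in seconds instead of
# sample indices (not part of the original example).
time_in_s = from_in_s + np.arange(filteredSignalChunk.shape[0]) / sampling_frequency
plt.plot(time_in_s, filteredSignalChunk)
plt.xlabel("time [s]")
plt.ylabel("amplitude [uV]")  # assumption: the recording is in microvolts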
In order to detect the spikes we have to add:
spikeProcessing = pipeline.SpikeProcessing(sampling_frequency, sigmas=-5)
This creates an instance of the SpikeProcessing class using the sampling frequency of our signal, 5 kHz, and the number of standard deviations we would like to use as a threshold. The value is set to -5, the same value shown in Figure 17. It is negative because extracellular action potentials appear mainly as negative deflections in the signal, so we threshold below the baseline.
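Conceptually, such a sigma-based threshold is obtained by scaling an estimate of the noise standard deviation. The following is only a minimal sketch of that idea, not SpikeProcessing's actual internals:

import numpy as np

def sigma_threshold(signal, sigmas=-5):
    # Sketch: scale a noise estimate by the sigma factor. A negative factor
    # yields a negative threshold, i.e. we detect downward deflections.
    return sigmas * np.std(signal)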
If we have a look at the source code, we can see that the detection method is set to its default value:
def __init__(self, fs, sigmas = -5, stdvAlgorithm = STDVAlgorithm.MAD):
By default this is the median absolute deviation (MAD) method, because it provides a good compromise between computational cost and accuracy for our visualization purposes.
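For reference, a MAD-based noise estimate as it is commonly used in spike detection can be sketched as follows; this is only an illustration, and the actual implementation inside SpikeProcessing may differ:

import numpy as np

def mad_noise_std(signal):
    # For band-pass filtered, roughly zero-mean neural data, the median
    # absolute deviation divided by 0.6745 approximates the standard
    # deviation of the background noise while being robust to spikes.
    return np.median(np.abs(signal - np.median(signal))) / 0.6745

A threshold at -5 sigmas would then simply be -5 * mad_noise_std(signal).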
The possible values for choosing an algorithm are defined in STDVAlgorithm:
from enum import Enum

class STDVAlgorithm(Enum):
    STD = 0
    MAD = 1
    ENVELOPE = 2
    FAST_ENVELOPE = 3
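Assuming the constructor signature shown above, and that the enum is reachable through the pipeline module (which is not shown in the original text), a different algorithm could be selected like this:

spikeProcessing = pipeline.SpikeProcessing(
    sampling_frequency,
    sigmas=-5,
    stdvAlgorithm=pipeline.STDVAlgorithm.ENVELOPE,  # assumption: enum exposed via pipeline
)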
Getting the spike positions within the desired range is achieved by this line of code:
spikes_in_range, spike_threshold = spikeProcessing.getSpikesInRange(filteredSignalChunk, start = 0.0)
where filteredSignalChunk is the neural signal and start indicates the position from which the spike detection should be performed.
The start parameter tells getSpikesInRange() from which position onward the noise level should be computed. This is useful for signals with stimulation artifacts at the beginning: including parts that are contaminated with artifacts in the noise-level estimation is likely to yield bad or wrong results.
In addition to the spike positions, we get the threshold computed by the selected thresholding method.
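For example, if the first half second of the chunk contained a stimulation artifact, the call could look like this (assuming start is given in seconds, as the default value of 0.0 suggests):

# Hypothetical call: estimate the noise level only from 0.5 s onward to
# avoid an artifact at the beginning of the chunk.
spikes_in_range, spike_threshold = spikeProcessing.getSpikesInRange(filteredSignalChunk, start=0.5)
print("threshold used for detection:", spike_threshold)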
A convenience method is getNumberOfSpikes(), which also performs spike detection but only returns the number of detected spikes.
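A sketch of how it might be used, assuming it accepts the same signal argument as getSpikesInRange() (the exact signature is not shown here):

numberOfSpikes = spikeProcessing.getNumberOfSpikes(filteredSignalChunk)
print("detected spikes:", numberOfSpikes)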
These methods also call other methods of the SpikeProcessing class, which are of no interest right now.
We are almost done! To replicate Figure 16, all we need to do is add these three lines:
spikes_in_range = spikes_in_range * sampling_frequency
plt.plot(filteredSignalChunk)
plt.plot(spikes_in_range, [spike_threshold] * spikes_in_range.shape[0], 'ro', ms=2)
The first line multiplies spikes_in_range by sampling_frequency in order to convert the spike times from seconds into sample positions within the signal array.
The second line plots the signal chunk and the last line plots the red markers at the spike locations.
Finally, here is the complete code:
import numpy as np
import matplotlib.pyplot as plt
import pipeline  # assumption: the tutorial's pipeline module is importable under this name

def example12():
    from_in_s = 1095
    to_in_s = 1123
    sampling_frequency = 5000
    # load real, already filtered data
    filteredSignal = np.load("FilteredData.npy")
    spikeProcessing = pipeline.SpikeProcessing(sampling_frequency, sigmas=-5)
    filteredSignalChunk = filteredSignal[int(from_in_s * sampling_frequency):int(to_in_s * sampling_frequency)]
    spikes_in_range, spike_threshold = spikeProcessing.getSpikesInRange(filteredSignalChunk, start=0.0)
    # convert spike times (in seconds) to sample positions
    spikes_in_range = spikes_in_range * sampling_frequency
    plt.plot(filteredSignalChunk)
    plt.plot(spikes_in_range, [spike_threshold] * spikes_in_range.shape[0], 'ro', ms=2)