ISEK 2024 - Decoding intraneural recordings using SNN
26 Jun 2024

This year I went to JAPANNN (what?!) to attend the ISEK congress for the first time. I submitted a poster titled “A neuromorphic approach to decoding motor intentions from intraneural recordings of a trans-radial amputee”.
In this project, I explored the possibility of using Spiking Neural Networks (SNNs) to decipher gestures from intraneural recordings. There are many questions to answer here:
- Why SNNs? First, I think SNNs are cool :D SNNs are an exciting area of research. Unlike traditional Artificial Neural Networks (ANNs), SNNs are designed to be more biologically plausible. That is, they borrow concepts and computational primitives from the human brain, which means they mimic the way the human brain processes information more closely.

In SNNs, both inputs and outputs are represented as spike trains, which are sequences of discrete events over time, rather than continuous values. The fundamental components of SNNs are neurons and synapses, just like in the brain. These neurons communicate by sending spikes to each other through the synapses. Both neurons and synapses are modelled via differential equations describing their dynamics (see the sketch after this list).

So what do we do with them? Considering how efficient and fascinating our brains are, now I want you to imagine that we can mimic some of these brain-like computations and implement them on electronic chips. This could lead to highly efficient computing systems, right? But apart from that, we humans are constantly adapting to an ever-changing environment: we learn, and we are capable of multitasking (well, not all of us :D), but certainly our brains are performing multitasking computations all the time. By emulating these brain-like processes, SNNs could revolutionize areas like neuroprosthetics, where real-time processing, power efficiency and adaptability are crucial.
- Why intraneural recordings? Previous studies (Valle et al., 2018; Čvančara et al., 2023) have shown that stimulating electrodes inserted in the nerves can restore a wide repertoire of sensations to amputees. In this project, I am demonstrating that we can also decode gestures using the same type of electrodes (Transverse Intrafascicular Multichannel Electrodes, or TIMEs) and an SNN-compatible pipeline. This could allow for the development of complete, low-power, bi-directional prostheses, providing both sensory feedback and control for users.
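To make the “differential equations” remark above a bit more concrete, here is a minimal sketch (plain Python with NumPy, not code from the project) of an exponential synapse feeding a leaky integrate-and-fire neuron; the time constants, weight, threshold and input rate are made-up illustrative values:

```python
import numpy as np

# Minimal sketch: an exponential synapse feeding a leaky integrate-and-fire
# (LIF) neuron. All constants are illustrative placeholders.
dt, tau_syn, tau_mem = 1e-3, 5e-3, 20e-3   # time step and time constants (s)
w, v_thresh = 2.0, 1.0                     # synaptic weight and spike threshold

rng = np.random.default_rng(0)
in_spikes = rng.random(1000) < 0.2         # random ~200 Hz input spike train

i_syn, v = 0.0, 0.0
out_spikes = []
for s in in_spikes:
    # Synapse: d(i_syn)/dt = -i_syn / tau_syn, plus a jump on each input spike
    i_syn += -dt * i_syn / tau_syn + w * s
    # Neuron: dV/dt = (-V + i_syn) / tau_mem, with spike-and-reset at threshold
    v += dt * (-v + i_syn) / tau_mem
    if v >= v_thresh:
        out_spikes.append(1)
        v = 0.0
    else:
        out_spikes.append(0)

print(int(sum(in_spikes)), "input spikes ->", sum(out_spikes), "output spikes")
```

Information is carried by when and how often these spikes occur, which is what makes event-driven, low-power hardware implementations possible.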
Fun Fact: Poster Design Adventures
I made two mistakes designing this poster. First, I created an A0 poster when it was supposed to be A1. Okay, not too bad of a mistake… but I also designed it in portrait when it was supposed to be landscape (nice!!). I’m attaching the nicer portrait version here because I’m not super proud of the landscape one.
Design philosophy
I wanted to keep text to a minimum and only have figures and illustrations with captions. All illustrations are made with BioRender and the poster is designed using LaTeX.
Poster Abstract
Advanced prosthetic devices hold the promise of restoring dexterity for upper-limb amputees by decoding the user’s motor intentions and delivering tactile sensory feedback during object manipulation. These bidirectional interfaces operate by recording and processing neural signals from multiple electrodes while concurrently stimulating the afferent pathway to elicit sensations. Crucially, both the decoding and encoding systems must unfold in real-time within a closed-loop framework, and continuously adapt to a changing neural signal. Furthermore, considering the wearability constraint of prostheses, processing of neural signals and stimulation modules should be implemented on low-power, low-latency, compact hardware placed in close proximity to the recording site. Consequently, careful consideration of algorithms and processing techniques alongside their hardware implementation is essential for the system realization. In this poster, we depart from conventional machine learning-based approaches and adopt a neuromorphic, event-based computing paradigm for the processing of the recorded neural signals. Our primary focus is on the decoding pathway of the bidirectional prosthesis. We present the stages within a neuromorphic pipeline and demonstrate their advantages, in terms of pre-processing computation requirements, accuracy, and linearity, on gesture decoding from nerve recordings, electroneurography (ENG), of a trans-radial amputee acquired using Transverse Intrafascicular Multichannel Electrodes (TIMEs).

Neuromorphic computing is a computational paradigm that stands in stark contrast to conventional processing. By emulating the biophysical properties of biological neurons and synapses, neuromorphic systems adopt key biological features such as fault-tolerance, robust computation and event-driven, low-power processing. A neuromorphic pipeline starts with the encoding of recorded signals into a stream of events. The encoding stage should be carefully crafted to extract key features from the signal. Our findings demonstrate that using a leaky integrate-and-fire neuron model effectively converts the power of each ENG channel into a sequence of spike trains and further increases the linear separability of the gestures. Next, a spiking neural network is trained to predict the intended motor task based on the firing rates of output neurons.
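As a rough illustration of the pipeline sketched in the abstract (a toy sketch, not the actual implementation behind the poster), the snippet below converts the windowed power of each ENG channel into a spike train with a leaky integrate-and-fire neuron and then reads out a gesture from the firing rates of hypothetical output neurons. The window length, gain, time constants, channel count and gesture labels are all assumptions made for illustration:

```python
import numpy as np

def eng_power_to_spikes(eng, window=100, gain=5.0, dt=1e-3,
                        tau=20e-3, v_thresh=1.0):
    """Encode multi-channel ENG (n_channels x n_samples) into spike trains.

    The mean power of each channel in non-overlapping windows drives one
    leaky integrate-and-fire neuron; all parameters are illustrative.
    """
    n_channels, n_samples = eng.shape
    n_windows = n_samples // window
    power = (eng[:, :n_windows * window]
             .reshape(n_channels, n_windows, window) ** 2).mean(axis=2)
    spikes = np.zeros_like(power)
    v = np.zeros(n_channels)                       # membrane potentials
    for t in range(n_windows):
        v += dt * (-v + gain * power[:, t]) / tau  # leaky integration
        fired = v >= v_thresh
        spikes[fired, t] = 1.0
        v[fired] = 0.0                             # reset after a spike
    return spikes

def decode_gesture(output_spikes, gestures):
    """Pick the gesture whose output neuron fired the most."""
    return gestures[int(np.argmax(output_spikes.sum(axis=1)))]

# Toy usage with random noise standing in for real TIME recordings
rng = np.random.default_rng(0)
fake_eng = rng.normal(size=(14, 10_000))           # 14 hypothetical channels
input_spikes = eng_power_to_spikes(fake_eng)
print(input_spikes.shape)                          # (14, 100)

# Hypothetical spikes of three output neurons of a trained SNN
fake_output = rng.integers(0, 2, size=(3, 100))
print(decode_gesture(fake_output, ["hand open", "pinch", "power grasp"]))
```

In the real pipeline the output spikes would of course come from the trained spiking network rather than random numbers; the point here is only the structure of the encoding and rate-based readout stages described in the abstract.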