Spiking Neural Networks

Spiking is a way to encode communication digitally over long distances: a signal is carried by the rate of spikes and by the timing of individual spikes relative to one another. Digital encoding matters because analog values degrade when sent over long distances through an active medium. Think of smoke signals in the American West, talking drums in West Africa, or Morse code on the telegraphs of the 19th and early 20th centuries.
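
To make that encoding concrete, here is a minimal sketch of rate coding, one of the two schemes just described: an analog intensity in [0, 1] becomes a spike train whose average firing rate tracks the value. The Poisson-style encoder below is an illustration, not any particular chip's scheme, and the names (`rate_encode`, `n_steps`) are made up for this example.

```python
import numpy as np

def rate_encode(value, n_steps=100, rng=None):
    """Encode an analog value in [0, 1] as a binary spike train.

    At each time step the neuron fires with probability `value`,
    so the mean spike rate approximates the original analog value.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    return (rng.random(n_steps) < value).astype(int)

spikes = rate_encode(0.3)
print(spikes[:20])    # e.g. [0 1 0 0 0 1 ...]
print(spikes.mean())  # close to 0.3: the rate carries the value
```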

But analog signals work fine locally, so in a sense, spikes are similar to packets in a mesh interconnect: discrete messages passed between otherwise analog, local computations. Pure spiking can be simulated on general-purpose machines like CPUs and GPUs, but it wastes the hardware's numeric capacity and makes poor use of scarce random-access memory bandwidth.

Like most algorithms, SNNs can be baked into silicon. When companies like IBM and Intel discuss their "neuromorphic" chips, such as IBM's TrueNorth or Intel's Loihi, they are usually referring to a custom chip, or ASIC, that implements the spiking mechanism directly: a signal accumulator that fires once its accumulated input surpasses a threshold.
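
That accumulate-and-fire behavior is essentially a leaky integrate-and-fire (LIF) neuron. The sketch below is a generic textbook LIF model, not a description of TrueNorth's actual circuitry; the parameters (`tau`, `v_thresh`, `v_reset`) are illustrative assumptions.

```python
import numpy as np

def lif_neuron(input_current, tau=20.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """Leaky integrate-and-fire: accumulate input, leak toward rest,
    and emit a spike (then reset) whenever the membrane potential
    crosses the threshold."""
    v = 0.0
    spikes = []
    for i in input_current:
        v += dt * (-v / tau + i)   # leaky integration of input
        if v >= v_thresh:          # threshold crossing -> fire
            spikes.append(1)
            v = v_reset            # reset after the spike
        else:
            spikes.append(0)
    return np.array(spikes)

out = lif_neuron(np.full(100, 0.08))   # constant input drive
print(out.sum(), "spikes in 100 steps")
```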

Spiking neural networks can be trained with gradient descent, as Dongsung Huh and Terrence Sejnowski showed in "Gradient Descent for Spiking Neural Networks" (NeurIPS 2018), by formulating a differentiable spiking model so that errors can be backpropagated through the network.
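
A widely used trick in this family (not necessarily Huh and Sejnowski's exact formulation) is the surrogate gradient: keep the hard threshold in the forward pass, but substitute a smooth derivative for the step function in the backward pass. A minimal NumPy sketch of that idea, using the fast-sigmoid surrogate:

```python
import numpy as np

def spike_forward(v, v_thresh=1.0):
    """Forward pass: the true, non-differentiable hard threshold."""
    return (v >= v_thresh).astype(float)

def spike_surrogate_grad(v, v_thresh=1.0, beta=10.0):
    """Backward pass: derivative of a fast sigmoid centered on the
    threshold, used in place of the step function's derivative,
    which is zero almost everywhere."""
    return beta / (1.0 + beta * np.abs(v - v_thresh)) ** 2

v = np.array([0.2, 0.9, 1.1, 2.0])  # membrane potentials
print(spike_forward(v))             # [0. 0. 1. 1.]
print(spike_surrogate_grad(v))      # largest near the threshold
```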

SNN Advantages

  • Low energy usage, because computation is event-driven: work happens only when spikes occur (see the sketch after this list)
  • Greater parallelizability due to local-only interactions
  • (Maybe) better able to learn non-differentiable functions
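
To illustrate the energy argument, here is a toy comparison of a dense update against an event-driven one: the event-driven version touches a weight row only when the corresponding input neuron actually spiked. This is an illustrative sketch; real neuromorphic hardware routes spike events in silicon rather than in Python.

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.standard_normal((1000, 1000))  # pre -> post synapses
spikes = rng.random(1000) < 0.02             # ~2% of neurons fire

# Dense update: multiply everything, spiking or not.
dense = weights.T @ spikes.astype(float)

# Event-driven update: add only the rows of neurons that spiked.
event = np.zeros(1000)
for pre in np.flatnonzero(spikes):
    event += weights[pre]                    # one add per spike event

print(np.allclose(dense, event))             # True: same result
print("events processed:", spikes.sum(), "of", spikes.size)
```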

Further Reading on Spiking Neural Networks

  • Dongsung Huh and Terrence J. Sejnowski, "Gradient Descent for Spiking Neural Networks" (NeurIPS 2018)

Chris V. Nicholson

Chris V. Nicholson is a venture partner at Page One Ventures. He previously led Pathmind and Skymind. In a prior life, Chris spent a decade reporting on tech and finance for The New York Times, Businessweek and Bloomberg, among others.