Galves–Löcherbach model

Figure: 3D visualization of the Galves–Löcherbach model simulating the spiking of 4000 neurons (4 layers, each with one population of inhibitory neurons and one population of excitatory neurons) over 180 time intervals.

The Galves–Löcherbach model (or GL model) is a mathematical model for a network of neurons with intrinsic stochasticity.

In the most general definition, a GL network consists of a countable number of elements (idealized neurons) that interact by sporadic, nearly instantaneous discrete events (spikes or firings). At each moment, each neuron N fires independently, with a probability that depends on the history of the firings of all neurons since N last fired. Thus each neuron "forgets" all previous spikes, including its own, whenever it fires. This property is a defining feature of the GL model.

In specific versions of the GL model, the past network spike history since the last firing of a neuron N may be summarized by an internal variable, the potential of that neuron, that is a weighted sum of those spikes. The potential may include the spikes of only a finite subset of other neurons, thus modeling arbitrary synapse topologies. In particular, the GL model includes as a special case the general leaky integrate-and-fire neuron model.

Formal definition

The GL model has been formalized in several different ways. The notations below are borrowed from several of those sources.

The GL network model consists of a countable set of neurons identified by some index set $I$. The state is defined only at discrete sampling times, represented by integers, with some fixed time step $\Delta$. For simplicity, these times are assumed to extend to infinity in both directions, implying that the network has existed forever.

In the GL model, all neurons are assumed to evolve synchronously and atomically between successive sampling times. In particular, within each time step, each neuron may fire at most once. A Boolean variable $X_i[t]$ denotes whether the neuron $i \in I$ fired ($X_i[t] = 1$) or not ($X_i[t] = 0$) between sampling times $t \in \mathbb{Z}$ and $t+1$.

Let $X[t':t]$ denote the matrix whose rows are the histories of the firings of all neurons from time $t'$ to time $t$ inclusive, that is

$$X[t':t] \;=\; \bigl((X_i[s])_{t'\leq s\leq t}\bigr)_{i\in I}$$

and let $X[-\infty:t]$ be defined similarly, but extending infinitely into the past. Let $\tau_i[t]$ be the last time step before $t$ in which neuron $i$ fired, that is

$$\tau_i[t] \;=\; \max\{\, s < t \;:\; X_i[s] = 1 \,\}.$$

Then the general GL model says that

$$\mathrm{Prob}\bigl(\,X_i[t] = 1 \;\big|\; X[-\infty:t-1]\,\bigr) \;=\; \Phi_i\bigl(X[\tau_i[t]:t-1]\bigr)$$
Figure: Illustration of the general Galves–Löcherbach model for a neuronal network of 7 neurons, with indices $I = \{1, 2, \ldots, 7\}$. The matrix of 0s and 1s represents the firing history $X[-\infty:t]$ up to some time $t$, where row $i$ shows the firings of neuron $i$. The rightmost column shows $X_i[t-1]$. The blue digit indicates the last firing of neuron 3 before time $t$, which occurred in the time step between $\tau_3[t]$ and $\tau_3[t]+1$. The blue frame encloses all firing events that influence the probability of neuron 3 firing in the step from $t$ to $t+1$ (blue arrow and empty blue box). The red details indicate the corresponding concepts for neuron 6.

Moreover, the firings in the same time step are conditionally independent, given the past network history, with the above probabilities. That is, for each finite subset $K \subset I$ and any configuration $a_i \in \{0,1\},\, i \in K$, we have

$$\mathrm{Prob}\Bigl(\,\bigcap_{k\in K}\bigl\{X_k[t] = a_k\bigr\} \;\Big|\; X[-\infty:t-1]\Bigr) \;=\; \prod_{k\in K}\mathrm{Prob}\bigl(\,X_k[t] = a_k \;\big|\; X[\tau_k[t]:t-1]\,\bigr)$$
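To make the general definition concrete, here is a minimal Python sketch of one synchronous sampling step: each neuron computes its firing probability from the window of history since its own last firing, then flips an independent coin. All names and the choice of $\Phi_i$ are illustrative assumptions, not part of the model, and the infinite past history is necessarily truncated to a finite array.

```python
import numpy as np

def gl_step(history, phi, rng):
    """Draw one synchronous sampling step of a general GL network.

    history : (T, n) array of 0/1; row s holds X[s] for all n neurons.
    phi     : callable(i, window) -> firing probability of neuron i,
              where `window` is the history since neuron i's last firing.
    Returns the new 0/1 firing vector X[T].
    """
    T, n = history.shape
    x_new = np.zeros(n, dtype=int)
    for i in range(n):
        fired = np.flatnonzero(history[:, i])
        # In the model the history extends to -infinity; this sketch
        # falls back to the start of the available array if i never fired.
        tau_i = fired[-1] if fired.size else 0
        window = history[tau_i:, :]   # the part of the past relevant to i
        # Conditional independence: each neuron flips its own coin.
        x_new[i] = rng.random() < phi(i, window)
    return x_new

# Toy example: firing probability decays with the length of the window.
rng = np.random.default_rng(0)
phi = lambda i, window: 0.9 ** len(window)
history = rng.integers(0, 2, size=(10, 3))
print(gl_step(history, phi, rng))
```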

Potential-based variants

In a common special case of the GL model, the part of the past firing history $X[\tau_i[t]:t-1]$ that is relevant to each neuron $i \in I$ at each sampling time $t$ is summarized by a real-valued internal state variable or potential $V_i[t]$ (which corresponds to the membrane potential of a biological neuron), essentially a weighted sum of the past spike indicators since the last firing of neuron $i$. That is,

$$V_i[t] \;=\; \sum_{t'=\tau_i[t]}^{t-1}\Bigl(E_i[t'] + \sum_{j\in I} w_{j\to i}\,X_j[t']\Bigr)\,\alpha_i\bigl[t'-\tau_i[t],\; t-1-t'\bigr]$$

In this formula, $w_{j\to i}$ is a numeric weight that corresponds to the total weight or strength of the synapses from the axon of neuron $j$ to the dendrites of neuron $i$. The term $E_i[t']$, the external input, represents some additional contribution to the potential that may arrive between times $t'$ and $t'+1$ from other sources besides the firings of other neurons. The factor $\alpha_i[r,s]$ is a history weight function that modulates the contributions of firings that happened $r$ whole steps after the last firing of neuron $i$ and $s$ whole steps before the current time.

Then one defines

$$\mathrm{Prob}\bigl(\,X_i[t] = 1 \;\big|\; X[-\infty:t-1]\,\bigr) \;=\; \phi_i(V_i[t])$$

where $\phi_i$ is a monotonically non-decreasing function from $\mathbb{R}$ into the interval $[0,1]$.

If the synaptic weight $w_{j\to i}$ is negative, each firing of neuron $j$ causes the potential $V_i$ to decrease. This is the way inhibitory synapses are approximated in the GL model. The absence of a synapse between those two neurons is modeled by setting $w_{j\to i} = 0$.
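The following sketch shows how the potential and the firing probability of this variant could be computed. The logistic choice of $\phi_i$, the exponential kernel for $\alpha_i$, and all parameter values are illustrative assumptions.

```python
import numpy as np

def potential(i, history, E, W, alpha):
    """Potential V_i[t] of the potential-based GL variant (a sketch).

    history : (t, n) array of 0/1 firings X[0..t-1]
    E       : (t,) external input E_i[t'] for each past step
    W       : (n,) synaptic weights w_{j->i} into neuron i
    alpha   : callable(r, s) -> history weight alpha_i[r, s]
    """
    t, n = history.shape
    fired = np.flatnonzero(history[:, i])
    tau = fired[-1] if fired.size else 0          # last firing of neuron i
    v = 0.0
    for tp in range(tau, t):                      # sum over t' = tau .. t-1
        drive = E[tp] + W @ history[tp, :]        # input arriving at step t'
        v += drive * alpha(tp - tau, t - 1 - tp)  # weighted by alpha_i[r, s]
    return v

def firing_prob(v, beta=1.0):
    """One common choice for phi_i: logistic, mapping R into [0, 1]."""
    return 1.0 / (1.0 + np.exp(-beta * v))

rng = np.random.default_rng(1)
history = rng.integers(0, 2, size=(8, 4))
E = np.zeros(8)
W = rng.normal(size=4)
v = potential(2, history, E, W, alpha=lambda r, s: 0.8 ** s)
print(v, firing_prob(v))
```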

Leaky integrate-and-fire variants

In an even more specific case of the GL model, the potential $V_i$ is defined to be a decaying weighted sum of the firings of other neurons. Namely, when a neuron $i$ fires, its potential is reset to zero. Until its next firing, a spike from any neuron $j$ increments $V_i$ by the constant amount $w_{j\to i}$. Apart from those contributions, during each time step the potential decays by a fixed recharge factor $\mu_i$ towards zero.

In this variant, the evolution of the potential $V_i$ can be expressed by the recurrence formula

$$V_i[t+1] \;=\; \left\{\begin{array}{ll}0 & \mathrm{if}\; X_i[t]=1\\ \mu_i\,V_i[t] & \mathrm{if}\; X_i[t]=0\end{array}\right\} \;+\; E_i[t] \;+\; \sum_{j\in I} w_{j\to i}\,X_j[t]$$

Or, more compactly,

$$V_i[t+1] \;=\; (1 - X_i[t])\,\mu_i\,V_i[t] \;+\; E_i[t] \;+\; \sum_{j\in I} w_{j\to i}\,X_j[t]$$

This special case results from taking the history weight factor $\alpha_i[r,s]$ of the general potential-based variant to be $\mu_i^s$. It is very similar to the leaky integrate-and-fire model.
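The compact recurrence translates directly into a vectorized simulation. Below is a minimal sketch; the logistic $\phi$ and all parameter values are assumptions chosen for illustration.

```python
import numpy as np

def simulate_lif_gl(W, mu, E, phi, steps, rng):
    """Simulate the leaky integrate-and-fire GL variant.

    W    : (n, n) matrix with W[j, i] = w_{j->i}
    mu   : (n,) recharge factors
    E    : (n,) constant external input per step
    phi  : callable mapping potentials to firing probabilities in [0, 1]
    """
    n = W.shape[0]
    V = np.zeros(n)
    spikes = []
    for _ in range(steps):
        # Fire with probability phi_i(V_i[t]), independently per neuron.
        X = (rng.random(n) < phi(V)).astype(int)
        # V_i[t+1]: reset to 0 on firing, otherwise leak, plus inputs.
        V = (1 - X) * mu * V + E + W.T @ X
        spikes.append(X)
    return np.array(spikes)

rng = np.random.default_rng(2)
n = 5
W = rng.normal(scale=0.5, size=(n, n))
spikes = simulate_lif_gl(W, mu=np.full(n, 0.8), E=np.full(n, 0.1),
                         phi=lambda v: 1 / (1 + np.exp(-v)),
                         steps=50, rng=rng)
print(spikes.sum(axis=0))  # spike count per neuron
```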

Reset potential

If, between times $t$ and $t+1$, neuron $i$ fires (that is, $X_i[t] = 1$), no other neuron fires ($X_j[t] = 0$ for all $j \neq i$), and there is no external input ($E_i[t] = 0$), then $V_i[t+1]$ will be $w_{i\to i}$. This self-weight therefore represents the reset potential that the neuron assumes just after firing, apart from other contributions. The potential evolution formula can therefore also be written as

$$V_i[t+1] \;=\; \left\{\begin{array}{ll}V_i^{\mathsf{R}} & \mathrm{if}\; X_i[t]=1\\ \mu_i\,V_i[t] & \mathrm{if}\; X_i[t]=0\end{array}\right\} \;+\; E_i[t] \;+\; \sum_{j\in I\setminus\{i\}} w_{j\to i}\,X_j[t]$$

where $V_i^{\mathsf{R}} = w_{i\to i}$ is the reset potential. Or, more compactly,

$$V_i[t+1] \;=\; X_i[t]\,V_i^{\mathsf{R}} \;+\; (1 - X_i[t])\,\mu_i\,V_i[t] \;+\; E_i[t] \;+\; \sum_{j\in I\setminus\{i\}} w_{j\to i}\,X_j[t]$$
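In code, only the potential update changes relative to the previous sketch. The helper below is hypothetical; it excludes the diagonal self-weights from the synaptic sum because their role is taken over by the explicit reset value.

```python
import numpy as np

def step_with_reset(V, X, W, mu, E, V_R):
    """One potential update with an explicit reset potential V_R.

    W[j, i] = w_{j->i}. When V_R equals W.diagonal(), this reproduces
    the zero-reset rule above, since the self-weight w_{i->i} plays
    exactly the role of the reset value.
    """
    W_ext = W - np.diag(W.diagonal())   # synapses from other neurons only
    return X * V_R + (1 - X) * mu * V + E + W_ext.T @ X
```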

Resting potential

These formulas imply that the potential decays towards zero with time when there are no external or synaptic inputs and the neuron itself does not fire. Under the same conditions, the membrane potential of a biological neuron will instead tend towards some negative value, the resting or baseline potential $V_i^{\mathsf{B}}$, on the order of −40 to −80 millivolts.

However, this apparent discrepancy exists only because it is customary in neurobiology to measure electric potentials relative to that of the extracellular medium. The discrepancy disappears if one chooses the baseline potential $V_i^{\mathsf{B}}$ of the neuron as the reference for potential measurements. Since the potential $V_i$ has no influence outside the neuron, its zero level can be chosen independently for each neuron.

Variant with refractory period

Some authors use a slightly different refractory variant of the integrate-and-fire GL neuron, which ignores all external and synaptic inputs (except possibly the self-synapse $w_{i\to i}$) during the time step immediately after its own firing. The equation for this variant is

$$V_i[t+1] \;=\; \left\{\begin{array}{ll}V_i^{\mathsf{R}} & \mathrm{if}\; X_i[t]=1\\ \displaystyle \mu_i\,V_i[t] + E_i[t] + \sum_{j\in I\setminus\{i\}} w_{j\to i}\,X_j[t] & \mathrm{if}\; X_i[t]=0\end{array}\right.$$

or, more compactly,

$$V_i[t+1] \;=\; X_i[t]\,V_i^{\mathsf{R}} \;+\; (1 - X_i[t])\Bigl(\mu_i\,V_i[t] + E_i[t] + \sum_{j\in I\setminus\{i\}} w_{j\to i}\,X_j[t]\Bigr)$$
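As a sketch (again with hypothetical names), the refractory variant differs from the previous helper only in that the factor $(1 - X_i[t])$ multiplies the entire input, not just the leak term:

```python
import numpy as np

def step_refractory(V, X, W, mu, E, V_R):
    """One update of the refractory variant: a neuron that has just
    fired is set to V_R and ignores leak, external input, and synaptic
    input for that step (self-synapses excluded from the sum as before)."""
    W_ext = W - np.diag(W.diagonal())
    return X * V_R + (1 - X) * (mu * V + E + W_ext.T @ X)
```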

Forgetful variants

Even more specific sub-variants of the integrate-and-fire GL neuron are obtained by setting the recharge factor $\mu_i$ to zero. In the resulting neuron model, the potential $V_i$ (and hence the firing probability) depends only on the inputs in the previous time step; all earlier firings of the network, including those of the same neuron, are ignored. That is, the neuron has no internal state and is essentially a (stochastic) function block.

The evolution equations then simplify to

$$V_i[t+1] \;=\; \left\{\begin{array}{ll}V_i^{\mathsf{R}} & \mathrm{if}\; X_i[t]=1\\ 0 & \mathrm{if}\; X_i[t]=0\end{array}\right\} \;+\; E_i[t] \;+\; \sum_{j\in I\setminus\{i\}} w_{j\to i}\,X_j[t]$$
$$V_i[t+1] \;=\; X_i[t]\,V_i^{\mathsf{R}} \;+\; E_i[t] \;+\; \sum_{j\in I\setminus\{i\}} w_{j\to i}\,X_j[t]$$

for the variant without refractory step, and

$$V_i[t+1] \;=\; \left\{\begin{array}{ll}V_i^{\mathsf{R}} & \mathrm{if}\; X_i[t]=1\\ \displaystyle E_i[t] + \sum_{j\in I\setminus\{i\}} w_{j\to i}\,X_j[t] & \mathrm{if}\; X_i[t]=0\end{array}\right.$$
$$V_i[t+1] \;=\; X_i[t]\,V_i^{\mathsf{R}} \;+\; (1 - X_i[t])\Bigl(E_i[t] + \sum_{j\in I\setminus\{i\}} w_{j\to i}\,X_j[t]\Bigr)$$

for the variant with refractory step.

In these sub-variants, while the individual neurons do not store any information from one step to the next, the network as a whole can still have persistent memory because of the implicit one-step delay between the synaptic inputs and the resulting firing of the neuron. In other words, the state of a network with $n$ neurons is a list of $n$ bits, namely the value of $X_i[t]$ for each neuron, which can be assumed to be stored in its axon in the form of a traveling depolarization zone.
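Setting $\mu_i = 0$ in the previous sketches gives the forgetful updates. A compact hypothetical sketch covering both sub-variants:

```python
import numpy as np

def step_forgetful(X, W, E, V_R, refractory=False):
    """Forgetful update (mu = 0): the new potential depends only on the
    firings X of the previous step, so each neuron carries no state."""
    W_ext = W - np.diag(W.diagonal())
    drive = E + W_ext.T @ X
    if refractory:
        return X * V_R + (1 - X) * drive  # just-fired neurons ignore input
    return X * V_R + drive                # input always accumulates
```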

History

The GL model was defined in 2013 by the mathematicians Antonio Galves and Eva Löcherbach. Its inspirations included Frank Spitzer's interacting particle system and Jorma Rissanen's notion of a stochastic chain with memory of variable length. Another influence was Bruno Cessac's study of the leaky integrate-and-fire model, which was itself influenced by the work of Hédi Soula. Galves and Löcherbach referred to the process that Cessac described as "a version in a finite dimension" of their own probabilistic model.

Prior integrate-and-fire models with stochastic characteristics relied on adding a noise term to simulate stochasticity. The Galves–Löcherbach model distinguishes itself by being inherently stochastic, incorporating probabilistic measures directly into the calculation of spikes. It is also relatively easy to apply from a computational standpoint, with a good ratio between cost and efficiency. It remains a non-Markovian model, since the probability of a given neuronal spike depends on the accumulated activity of the system since the last spike.

Subsequent contributions to the model have considered the hydrodynamic limit of the interacting neuronal system, its long-range behavior, the prediction and classification of behaviors as a function of the model's parameters, and the generalization of the model to continuous time.

The Galves–Löcherbach model was a cornerstone of the NeuroMat project.

References

  1. Galves, A.; Löcherbach, E. (2013). "Infinite Systems of Interacting Chains with Memory of Variable Length—A Stochastic Model for Biological Neural Nets". Journal of Statistical Physics. 151 (5): 896–921. arXiv:1212.5505. doi:10.1007/s10955-013-0733-9.
  2. Baccelli, François; Taillefumier, Thibaud (2019). "Replica-mean-field limits for intensity-based neural networks". arXiv:1902.03504.
  3. Brochini, Ludmila; et al. (2016). "Phase transitions and self-organized criticality in networks of stochastic spiking neurons". Scientific Reports. 6, article 35831. arXiv:1606.06391. doi:10.1038/srep35831. PMID 27819336.
  4. Cessac, B. (2011). "A discrete time neural network model with spiking neurons: II: Dynamics with noise". Journal of Mathematical Biology. 62 (6): 863–900. arXiv:1002.3275. doi:10.1007/s00285-010-0358-4. PMID 20658138.
  5. Plesser, H. E.; Gerstner, W. (2000). "Noise in Integrate-and-Fire Neurons: From Stochastic Input to Escape Rates". Neural Computation. 12 (2): 367–384. doi:10.1162/089976600300015835. PMID 10636947.
  6. De Masi, A.; Galves, A.; Löcherbach, E.; Presutti, E. (2015). "Hydrodynamic limit for interacting neurons". Journal of Statistical Physics. 158 (4): 866–902. arXiv:1401.4264. doi:10.1007/s10955-014-1145-1.
  7. Duarte, A.; Ost, G. (2014). "A model for neural activity in the absence of external stimuli". arXiv:1410.6086.
  8. Fournier, N.; Löcherbach, E. (2014). "On a toy model of interacting neurons". arXiv:1410.3263.
  9. Yaginuma, K. (2015). "A Stochastic System with Infinite Interacting Components to Model the Time Evolution of the Membrane Potentials of a Population of Neurons". Journal of Statistical Physics. 163 (3): 642–658. arXiv:1505.00045. doi:10.1007/s10955-016-1490-3.
  10. Ribeiro, Fernanda Teixeira (2014). "Modelos matemáticos do cérebro" ["Mathematical models of the brain"]. Mente e Cérebro, June 2014.