# Hodgkin-Huxley model for a single neuron


I am taking an introductory course on computational neuroscience (through edX). In the second lecture, the Hodgkin-Huxley model is considered. I am going over some of the questions and have run into a problem with one of them (a picture of the exercise is attached below). I have a strong background in mathematics, but my background in biology is still very poor, and I am having a hard time connecting the biology to the math. Can anyone help with this question? Thank you!

Since you didn't get the right answer to #6, let's review the basis for this model.

The basis of the model is precisely the statement in #6: an individual channel has a defined conductance when it is conducting, and if all of them are conducting then the total conductance is the single-channel conductance times the number of channels (conductances in parallel add).

Confusion comes from the fact that, in the terminology used here, an "open" channel is not necessarily conducting. The probability that a single channel is conducting is the product \$r^{n_1}s^{n_2}\$.

\$r^{n_1}\$ describes the "activation" process, while \$s^{n_2}\$ describes the "inactivation" process. These are two separate processes. The activation process is now known to be driven by voltage-dependent changes in the configuration of transmembrane domains of the channel protein, while the inactivation process is mediated by a charged part of the protein inside the cell that more slowly moves to block the channel when the cell is depolarized. See this summary by Clay Armstrong, who contributed much to our understanding of these processes.

So in the terminology here, a channel can be "open" (in terms of the \$r\$ process) but still "inactivated" (through the \$s\$ process) and thus be non-conducting. It can also be "closed" (in terms of the \$r\$ process) and "inactivated", in which case it is also non-conducting.

The way this is modeled is that \$s=1\$ means no inactivation, while \$s=0\$ means fully inactivated. So that covers question 3. This is in contrast to the activation process, in which \$r=1\$ is full activation.

Question 4 might be a bit misleading, as there may be a hidden assumption that you start at potential \$u_0\$ for a long enough time and then increase \$u\$ suddenly (which is how action potentials normally are generated). In that case your understanding of differential equations should make it clear that \$\tau_r\$ and \$\tau_s\$ are the time constants for the responses of the \$r\$ and \$s\$ processes to a change in \$u\$. The process with the shorter time constant responds more quickly, so activation precedes inactivation (on average). The different powers associated with the \$r\$ and \$s\$ processes (\$n_1, n_2\$) might confuse things a bit, but try running this model and you should be convinced.
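For intuition, here is a minimal simulation of that setup (my own sketch, not from the course): two first-order processes relaxing toward new steady states after a voltage step, with made-up time constants \$\tau_r < \tau_s\$ and exponents \$n_1 = 3\$, \$n_2 = 1\$. The conducting probability \$r^{n_1} s^{n_2}\$ rises quickly and then decays as the slower inactivation catches up:

```python
import numpy as np

# Hypothetical time constants (ms): activation faster than inactivation.
tau_r, tau_s = 1.0, 5.0
n1, n2 = 3, 1          # exponents (illustrative values)
dt, T = 0.01, 30.0
t = np.arange(0.0, T, dt)

# Before the voltage step: activation off (r = 0), no inactivation (s = 1).
# After the step the steady states flip: r -> 1, s -> 0.
r, s = 0.0, 1.0
p_open = []
for _ in t:
    r += dt * (1.0 - r) / tau_r   # dr/dt = (r_inf - r)/tau_r with r_inf = 1
    s += dt * (0.0 - s) / tau_s   # ds/dt = (s_inf - s)/tau_s with s_inf = 0
    p_open.append(r**n1 * s**n2)  # probability the channel is conducting

peak_time = t[int(np.argmax(p_open))]
print(f"conducting probability peaks at t = {peak_time:.2f} ms")
```

The transient peak in the conducting probability is exactly the shape of the sodium conductance during an action potential: activation wins early, inactivation wins late.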

This might seem convoluted at first, but Hodgkin and Huxley worked out this model based solely on their electrophysiological studies (with help from Bernard Katz). It's amazing how this model's postulates of multiple single channels and separate activation and inactivation processes have been so well verified at the molecular level over the past decades.

Preface
Part I. Foundations of Neuronal Dynamics:
1. Introduction
2. The Hodgkin–Huxley model
3. Dendrites and synapses
4. Dimensionality reduction and phase plane analysis
Part II. Generalized Integrate-and-Fire Neurons:
5. Nonlinear integrate-and-fire models
6. Adaptation and firing patterns
7. Variability of spike trains and neural codes
8. Noisy input models: barrage of spike arrivals
9. Noisy output: escape rate and soft threshold
10. Estimating models
11. Encoding and decoding with stochastic neuron models
Part III. Networks of Neurons and Population Activity:
12. Neuronal populations
13. Continuity equation and the Fokker–Planck approach
14. The integral-equation approach
15. Fast transients and rate models
Part IV. Dynamics of Cognition:
16. Competing populations and decision making
17. Memory and attractor dynamics
18. Cortical field models for perception
19. Synaptic plasticity and learning
20. Outlook: dynamics in plastic networks
Bibliography
Index

## Hodgkin-Huxley model for a single neuron - Biology

In 1952, Hodgkin and Huxley wrote a series of five papers describing the experiments they conducted to determine the laws that govern the movement of ions in a nerve cell during an action potential. The first paper examined the function of the neuron membrane under normal conditions and outlined the basic experimental method used in each of their subsequent studies. The second examined the effects of changes in sodium concentration on the action potential and resolved the ionic current into sodium and potassium currents. The third examined the effect of sudden potential changes on the action potential, including their effect on the ionic conductances. The fourth outlined how the inactivation process reduces sodium permeability. The final paper put together all of the information from the previous papers and turned it into a mathematical model.

A. L. Hodgkin and A. F. Huxley developed their mathematical model of nerve-cell behavior in the squid giant axon in 1952. The model, developed well before the advent of electron microscopy or computer simulation, gave scientists a basic understanding of how nerve cells work without a detailed picture of what the nerve-cell membrane looks like. Hodgkin and Huxley worked with squid giant axons because they are large enough to manipulate and to use their specially built glass electrodes on. From their experiments on the squid axon, they were able to create a circuit model that matched how the axon carries an action potential.

Current flowing through the membrane can be carried either by the charging and discharging of a capacitor or by ions flowing through variable resistances in parallel with the capacitor. Each resistance corresponds to charge carried by a different component: in the nerve cell these components are sodium ions, potassium ions, and a small leakage current associated with the movement of other ions, including calcium. Each current (I_Na, I_K, and I_L) is determined by a driving force, represented by a voltage difference, and a permeability coefficient, represented by a conductance in the circuit diagram (conductance is the inverse of resistance). These equations can easily be derived using Ohm's law (V = IR).

g_Na and g_K are both functions of time and membrane potential. E_Na, E_K, E_L, C_m, and g_L are all constants determined by experiment.
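As a sketch of how these relations are used in practice, each branch of the circuit is one line of Ohm's law. The function name below is mine; the numbers are the classic Hodgkin-Huxley fit values, in the convention where the resting potential is 0 mV:

```python
# Each ionic current follows Ohm's law: I_x = g_x * (V - E_x).
# Conductances (mS/cm^2) and reversal potentials (mV, relative to rest)
# are the classic HH fit values; treat them as illustrative numbers.
g_Na_max, g_K_max, g_L = 120.0, 36.0, 0.3
E_Na, E_K, E_L = 115.0, -12.0, 10.6

def ionic_currents(V, m, h, n):
    """Return (I_Na, I_K, I_L) in uA/cm^2 for the given gating state."""
    I_Na = g_Na_max * m**3 * h * (V - E_Na)
    I_K = g_K_max * n**4 * (V - E_K)
    I_L = g_L * (V - E_L)
    return I_Na, I_K, I_L

# Near rest (V = 0) with approximate resting gating values:
print(ionic_currents(V=0.0, m=0.05, h=0.6, n=0.32))
```

At rest the inward sodium and leak currents roughly balance the outward potassium current, which is why the resting potential is stable.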

The influence of membrane potential on permeability was found to work as follows. Under depolarization there is a transient increase in sodium conductance and a slower but more sustained increase in potassium conductance; these changes reverse during repolarization. The nature of these permeability changes was not fully understood when Hodgkin and Huxley did their work: they did not know what the cell membrane looked like at the molecular scale, and they did not know of the existence of ion channels and ion pumps in the membrane. From their findings, however, they were able to conclude that the changes in permeability depend on membrane potential, not membrane current, and that molecules aligning or moving with the electric field cause the change in permeability. Originally they supposed that sodium ions crossed the membrane via negatively charged lipid carrier molecules, but what they observed proved that this was not the case. Instead, they proposed that sodium movement depends on the distribution of charged particles which do not act as carriers in the usual sense but rather allow sodium to pass through the membrane when they occupy particular sites in it. This turned out to be correct: these charged particles are ion channels. In the case of sodium permeability, the carrier molecules (as Hodgkin and Huxley referred to them) are inactivated when there is a high potential difference. Potassium permeability is similar, with some key differences: the activating carrier molecules have an affinity for potassium rather than sodium, they move more slowly, and they are not blocked or inactivated.

To build their mathematical model describing how the membrane current behaves during the voltage clamp experiment, they used the basic circuit equation

I = C_m dV/dt + I_i

where I is the total membrane current density (inward current positive), I_i is the ionic current density (inward current positive), V is the displacement of the membrane potential from its resting value (depolarization negative), C_m is the membrane capacitance, and t is time. They chose to model the capacity current and ionic current in parallel because the ionic current (obtained by setting the derivative to zero) and the capacity current (obtained by setting the ionic current to zero) could be measured independently. We can enrich this equation further by realizing that

I_i = I_Na + I_K + I_L

where I_Na is the sodium current, I_K is the potassium current, and I_L is the leakage current. We can further expand this model by adding the following relationships:

I_Na = g_Na (V − V_Na)
I_K = g_K (V − V_K)
I_L = g_L (V − V_L)

where V_Na = E_Na − E_R, V_K = E_K − E_R, and V_L = E_L − E_R.

Here E_R is the resting potential. When examining the time course of the potassium conductance, you can see that the initial rise is well described by a third- or fourth-order equation, but the decay at the end appears to be first order. To capture this in the conductance formula, we let

g_K = ḡ_K n^4

where ḡ_K is a constant (the maximum potassium conductance) and n is a dimensionless variable that varies from 0 to 1: the proportion of ion channel gates that are open. To further understand where n comes from, we can write the equation

dn/dt = α_n (1 − n) − β_n n

where α_n is the rate at which closed gates open and β_n the rate at which open gates close; together they give the total rate of change of n during an action potential. The sodium conductance is described by the equation

g_Na = ḡ_Na m^3 h

where ḡ_Na is a constant, m is the proportion of activating carrier molecules (activation gates) in the permissive state, and h is the proportion of inactivating carrier molecules (inactivation gates) not blocking the channel. m and h are described by

dm/dt = α_m (1 − m) − β_m m
dh/dt = α_h (1 − h) − β_h h

where the α and β are again voltage-dependent rate constants analogous to those for the potassium conductance.
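Putting the pieces together, the model can be integrated numerically. The following sketch uses the rate functions from the original 1952 fit (voltages in mV relative to rest, depolarization positive) and a simple forward-Euler step; the stimulus amplitude is an arbitrary illustrative choice:

```python
import numpy as np

def vtrap(x, y):
    # x / (exp(x/y) - 1), with the removable singularity at x = 0 handled
    return y if abs(x) < 1e-7 else x / (np.exp(x / y) - 1.0)

# Classic HH rate functions (1/ms); V in mV, rest = 0, depolarization positive.
def a_n(V): return 0.01 * vtrap(10.0 - V, 10.0)
def b_n(V): return 0.125 * np.exp(-V / 80.0)
def a_m(V): return 0.1 * vtrap(25.0 - V, 10.0)
def b_m(V): return 4.0 * np.exp(-V / 18.0)
def a_h(V): return 0.07 * np.exp(-V / 20.0)
def b_h(V): return 1.0 / (np.exp((30.0 - V) / 10.0) + 1.0)

C_m = 1.0                              # uF/cm^2
g_Na, g_K, g_L = 120.0, 36.0, 0.3      # mS/cm^2
E_Na, E_K, E_L = 115.0, -12.0, 10.6    # mV

dt, T = 0.01, 20.0                     # ms
steps = int(T / dt)
V = 0.0
# Start each gate at its resting steady state x_inf = alpha / (alpha + beta).
n = a_n(V) / (a_n(V) + b_n(V))
m = a_m(V) / (a_m(V) + b_m(V))
h = a_h(V) / (a_h(V) + b_h(V))

I_ext = 10.0                           # constant stimulus, uA/cm^2 (assumed)
Vs = np.empty(steps)
for i in range(steps):
    I_ion = (g_Na * m**3 * h * (V - E_Na)
             + g_K * n**4 * (V - E_K)
             + g_L * (V - E_L))
    V += dt * (I_ext - I_ion) / C_m    # forward Euler on C_m dV/dt = I - I_ion
    n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
    m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
    h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
    Vs[i] = V

print(f"peak depolarization: {Vs.max():.1f} mV")
```

With a suprathreshold stimulus the trace shows the familiar action potential: a rapid sodium-driven upstroke toward E_Na, followed by repolarization as h falls and n rises.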

Graph used to determine the values for the potassium conductance rate constants alpha and beta

Graph used to determine the values for the sodium activation conductance rate constants (m), alpha and beta

Graph used to determine values for the sodium inactivation conductance rate constants (h), alpha and beta

Ironically, it can be hard to find mathematical modelling in biology that does not involve differential equations. But here are some examples.

The Hodgkin-Huxley model (or other biological neuron models) of the cellular dynamics of neurons. The Hindmarsh-Rose model is another simple model that exhibits bursting.

Mathematical models of oncological tumor growth.

Among predator-prey models, there are quite a few DE models beyond Lotka-Volterra. You can also extend to spatial distributions using PDEs. Fisher's equation is a related model of gene propagation.

Turing's model of developmental morphogenesis.

In pharmacology, you can model ADME kinetics via DEs called PBPK models.

Rate equations play a large role in biochemistry. Of course, this is related to the Michaelis-Menten equation.
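For instance, the Michaelis-Menten rate law itself is a one-line function:

```python
def michaelis_menten(s, v_max, k_m):
    """Reaction velocity v = v_max * s / (k_m + s) for substrate concentration s."""
    return v_max * s / (k_m + s)

# At s = k_m the rate is half-maximal; for s >> k_m it saturates at v_max.
print(michaelis_menten(2.0, v_max=10.0, k_m=2.0))   # half-maximal: 5.0
```

The saturating form is what makes it a differential-equation building block: plugging it into ds/dt gives the standard enzyme-kinetics ODE.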

## BIOELECTRIC PHENOMENA

### 11.1 INTRODUCTION

Chapter 3 briefly described the nervous system and the concept of a neuron. Here the description of a neuron is extended by examining its properties at rest and during excitation. The concepts introduced here are basic and allow further investigation of more sophisticated models of the neuron or groups of neurons by using GENESIS (a general neural simulation program—see suggested reading by J.M. Bower and D. Beeman) or extensions of the Hodgkin–Huxley model by using more accurate ion channel descriptions and neural networks. The models introduced here are an important first step in understanding the nervous system and how it functions.

Models of the neuron presented in this chapter have a rich history of development. This history continues today as new discoveries unfold that supplant existing theories and models. Much of the physiological interest in models of a neuron involves the neuron's use in transferring and storing information, whereas much engineering interest involves the neuron's use as a template in computer architecture and neural networks. To fully appreciate the operation of a neuron, it is important to understand the properties of a membrane at rest by using standard biophysics, biochemistry, and electric circuit tools. In this way, signaling via the generation of the action potential can be better understood.

The Hodgkin and Huxley theory that was published in 1952 described a series of experiments that allowed the development of a model of the action potential. This work was awarded a Nobel Prize in 1963 (shared with John Eccles) and is covered in Section 11.6. It is reasonable to question the usefulness of covering the Hodgkin–Huxley model in a textbook today given all of the advances since 1952. One simple answer is that this model is one of the few timeless classics and should be covered. Another is that all current, and perhaps future, models have their roots in this model.

Section 11.2 describes a brief history of bioelectricity and can be easily omitted on first reading of the chapter. Following this, Section 11.3 describes the structure and provides a qualitative description of a neuron. Biophysics and biochemical tools useful in understanding the properties of a neuron at rest are presented in Section 11.4. An equivalent circuit model of a cell membrane at rest consisting of resistors, capacitors, and voltage sources is described in Section 11.5. Finally, Section 11.6 describes the Hodgkin–Huxley model of a neuron and includes a brief description of their experiments and the mathematical model describing an action potential.

## Author Contributions

JS designed and implemented the core of the DynaSim Toolbox and Graphical User Interface, wrote the paper, and created the online user documentation. AS was the first alpha tester, helped promote the package, ran benchmark simulations, and contributed to the Benchmarks section. SA was the second alpha tester, added MEX compilation via the MATLAB Coder, and maintained GNU Octave compatibility. DS added parallel processing via the MATLAB Parallel Computing Toolbox. ER helped establish a core team of developers and a development workflow with version control. BP-P helped add support for parallel analysis and plotting of large numbers of simulated datasets. NK supervised the project and encouraged laboratory members to implement models in DynaSim. All authors reviewed the paper.


## Understanding the Action Potential Generation in the Hodgkin-Huxley model

### Dibya Thapa

PG Scholar, Galgotias University

Abstract: The generation and propagation of electrical signals in the neuron, the basic unit of the nervous system, has been a topic of interest to physiologists and researchers for centuries. In the effort to map the functions of the brain, the work of Hodgkin and Huxley stands as a remarkable legacy. Sixty years after they published their papers, the model continues to inspire research in physiology and systems biology. In this paper I explain how information is transmitted in the brain based on the Hodgkin-Huxley model equations.

Keywords: action potential, brain, Hodgkin-Huxley, neuron, systems biology.

Neurons are capable of processing as well as transmitting information in the form of "action potentials", involving the flow of sodium (Na) and potassium (K) ions. The potential inside the cell membrane under resting conditions is about −70 mV, known as the resting membrane potential (RMP), while the surroundings are at 0 mV. Sodium is more concentrated outside the cell, whereas potassium is more concentrated inside. Voltage and concentration gradients drive ions into and out of the cell. "Hyperpolarization", produced by current flowing out of the cell, makes the membrane potential more negative; the opposite is called depolarization. When the cell is depolarized sufficiently, the membrane potential rises above a certain threshold, leading to the generation of an action potential. Once generated, it is passed along the axon, activating synapses in the course of propagation. Action potentials take the form of spikes and can propagate over large distances.

Hodgkin and Huxley in 1939 provided concrete proof that the membrane potential transiently overshoots zero during an action potential. Their experiments led to a series of five seminal papers, all published in The Journal of Physiology in 1952, and won them the 1963 Nobel Prize in Physiology or Medicine. For their experiments they chose the squid giant axon, which was large enough to see and to stick wires into. They threaded a silver wire inside the axon, which let them measure the electrical potential inside and deliver enough current to hold a particular voltage.

This technique of holding the voltage constant is known as the voltage clamp. Repeating the experiment in potassium-free and sodium-free solutions under different voltage conditions let them determine how the sodium and potassium currents change with membrane potential. The collected data were then used to construct the parallel conductance model, which represents the electrical properties of a segment of nerve. As shown in Fig. 1, it consists of four parallel conducting pathways connecting the inside and outside of the membrane. The membrane capacitance and a fixed membrane conductance are on the left side. Each resistor is connected in series with a battery; the batteries are denoted by two parallel lines of different lengths, the longer line indicating the positive pole.

Fig 1: Parallel Conductance Model

The passive conductance in this model is called the leak conductance; since the channels carrying this current are not sensitive to voltage, the leak conductance remains the same at all voltages, providing a constant "leakiness" for current. The potential of the battery in series with g_leak is E_leak.

### International Journal of Emerging Technology and Advanced Engineering

Website: www.ijetae.com (ISSN 2250-2459, ISO 9001:2008 Certified Journal, Volume 4, Issue 5, May 2014)

If either the sodium or the potassium conductance is turned on, the associated battery will dominate the membrane potential; if both are turned on at the same time, the membrane potential settles between the two battery potentials, weighted by the conductances [2,5,6].

Fig. 1 depicts the resistances, which relate to charge carried by sodium and potassium ions plus a leakage current. The current flowing through the membrane is carried either by the capacitor or by ions. Using Ohm's law (V = IR), the following equations were derived [7,8].

Sodium conductance (gNa) and potassium conductance (gK) depend on time and membrane potential. The sodium potential (VNa), potassium potential (VK), and leakage potential (Vl) are constants. To build their mathematical model they used the basic circuit equation

I = Cm dV/dt + Ii

where I = the total membrane current density, Ii = the ionic current density, V = displacement of the membrane potential from its resting value, Cm = the membrane capacity per unit area (assumed constant), and t = time. The ionic current was again divided into a sodium current, a potassium current and a leakage current:

Ii = INa + IK + Il (5)

where INa, IK, and Il denote the sodium, potassium, and leakage currents respectively. The model was further expanded by adding (1), (2), (3):

INa = gNa (V − VNa) (6)
IK = gK (V − VK) (7)
Il = gl (V − Vl) (8)

where VR denotes the resting potential. The potassium conductance is described by the equation

gK = ḡK n^4 (10)

where ḡK is a constant (the maximum potassium conductance), and n varies from 0 to 1 and is the proportion of channel gates that are open. The variable n obeys the equation

dn/dt = αn (1 − n) − βn n (11)

where αn and βn are voltage-dependent rate constants for the opening and closing of the gates. The sodium conductance is described by the equation

gNa = ḡNa m^3 h (12)

where ḡNa is a constant, the variable m governs the activation of the sodium channels, and h governs their inactivation. They can be calculated from

dm/dt = αm (1 − m) − βm m (13)
dh/dt = αh (1 − h) − βh h (14)

where α (alpha) and β (beta) are again rate constants analogous to those for the potassium conductance. The resting value of each gating variable is its steady state, for example

n0 = αn0 / (αn0 + βn0)


Therefore the final equation is:

I = Cm dV/dt + ḡK n^4 (V − VK) + ḡNa m^3 h (V − VNa) + gl (V − Vl)

A. HH model with constant input

To mimic the behavior of the membrane potential we simulate the model equations. Fig. 2 shows the evolution of the membrane potential and the phenomenon of spike generation in the HH model. The time interval [0, T] was divided into equal steps of size dt = 0.01 sec, and the Euler-Maruyama method was applied. Whenever the membrane potential reaches a certain threshold voltage Vth (in this case −70 mV), a spike is generated and the membrane potential is immediately reset to its resting value. The membrane potential then evolves again until it reaches the threshold and generates another spike. The time interval between two consecutive spikes gives the inter-spike interval (ISI).
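The simulate/detect/reset loop described above can be sketched as follows. This is an illustrative stand-in: a leaky integrator replaces the full HH right-hand side, and the time constant, threshold, reset, and input strength are made-up values rather than the paper's parameters:

```python
import numpy as np

# Sketch of the simulate/detect/reset loop. A leaky integrator stands in
# for the full HH right-hand side; tau, V_rest, V_th, and R_I are
# illustrative values, not the paper's.
dt, T = 0.01, 200.0            # ms
tau, V_rest, V_th = 10.0, -70.0, -54.0
R_I = 20.0                     # constant input drive, in mV

V = V_rest
spike_times = []
for i in range(int(T / dt)):
    V += dt * (-(V - V_rest) + R_I) / tau   # forward Euler step
    if V >= V_th:              # threshold crossed: record a spike...
        spike_times.append(i * dt)
        V = V_rest             # ...and reset to the resting potential

isi = np.diff(spike_times)     # inter-spike intervals (ISI)
print(f"{len(spike_times)} spikes, mean ISI = {isi.mean():.2f} ms")
```

With a constant suprathreshold drive the ISIs are all equal; adding noise to the update (as in the next section) is what makes the ISI a random variable with a distribution.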

Fig 2: Spike generation with Vth = −70 mV

TABLE 1 Parameters Used

B. HH model with noise

All biological systems contain noise and statistical fluctuations. Understanding their role in cellular dynamics is a central challenge across computational biology.


With channel noise added, the voltage equation becomes

I = Cm dV/dt + ḡK (n^4 + ξK(t)) (V − VK) + ḡNa (m^3 h + ξNa(t)) (V − VNa) + gl (V − Vl)

Fig 3 depicts the results of solving the HH model after the introduction of noise in the model equation.

Fig 3: Change in the membrane potential with respect to time

C. Inter Spike Interval

The membrane potential of the neuron rises due to the electrochemical processes inside it [2,9]. The time epoch at which the membrane potential first reaches a certain threshold value is called the first passage time (FPT). A spike is generated, and the membrane potential then decreases back to the resting potential. The first passage time is random in nature.

Fig 4: Probability distribution function of the simulated ISI distribution

The inter-spike interval (ISI) is the difference between the time epochs of two consecutive spikes. A collection of ISIs yields their probability distribution.
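Concretely, given the recorded spike epochs, computing the ISIs and estimating their distribution is a two-step operation (the spike times below are made up for illustration):

```python
import numpy as np

# ISIs are the consecutive differences of the spike time epochs, and a
# normalized histogram estimates their probability distribution.
spike_times = np.array([2.1, 15.8, 31.0, 44.9, 60.3, 73.2, 89.5])  # ms, illustrative
isi = np.diff(spike_times)
counts, edges = np.histogram(isi, bins=3, density=True)

# density=True normalizes the histogram so it integrates to 1.
print(isi, counts @ np.diff(edges))
```

The same two lines applied to simulated HH spike trains produce the empirical ISI distribution shown in Fig. 4.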

In our expedition to understand brain function, it is best to concentrate on models that put forward new and promising lines of experimentation, and the Hodgkin-Huxley model fits well into this category. It has not only helped us understand how the action potential is generated, but has also laid down the framework for much further research. Our main motive was to understand how brains compute, and to this end we used computers to simulate, drawing on information from mathematics, physics, and other fields.

All of the above study was carried out with the aid of MATLAB/Simulink; the simulation results are the output of that software.

- Jamie I. Vandenberg and Stephen G. Waxman. 2012. Hodgkin and Huxley and the basis for electrical signaling: a remarkable legacy still going strong. J Physiol.
- W. Gerstner and W. M. Kistler. 2002. Spiking Neuron Models: Single Neurons, Populations, Plasticity. Cambridge University Press.
- M. N. Shadlen and W. T. Newsome. 1994. Noise, neural codes and cortical organization. Current Opinion in Neurobiology.
- A. L. Hodgkin and A. F. Huxley. 1952. A quantitative description of membrane current and its application to conduction and excitation in nerve. J. Physiol.
- Erik Skaugen and Lars Walløe. 1979. Firing behaviour in a stochastic nerve membrane model based upon the Hodgkin-Huxley equations. Acta Physiol Scand.
- Ji-Huan He. 2005. A modified Hodgkin-Huxley model. Elsevier.
- Richard E. Plant. 1976. The geometry of the Hodgkin-Huxley model. Computer Programs in Biomedicine.
- C. Koch. 1999. Biophysics of Computation. Oxford University Press.
- A. V. Holden. 1976. The response of excitable membrane models to a cyclic input. Biological Cybernetics.
- H. C. Tuckwell and J. Feng. 2004. A theoretical overview. In: Computational Neuroscience: A Comprehensive Approach, ed. J. Feng. CRC Press.
- G. Gerstein and B. Mandelbrot. 1964. Random walk models for the spike activity of a single neuron. Biophysical Journal.
- Joshua H. Goldwyn and Eric Shea-Brown. 2011. The what and where of adding channel noise to the Hodgkin-Huxley equations. PLoS Computational Biology.

## 4. Discussion

Figure 5 explains the relationship among AG, , and f. When designing an experiment or building a neural network from particular neuron models, we can check this universal –AG plot to see whether the design meets our requirements. In Table 3, three different scenarios are listed as examples of possible applications of this plot. In the first case, if we only allow the angle of phase shift beneath the upper bound of , then the AG score must be >12 when the frequency of current injection is at Hz; if we substitute an input current with a higher frequency, say Hz, then the AG score needs to be >1150; and this requirement rises further to >11,440 when we apply an input current with 1000 Hz frequency (Fig. 5b). FIG. 5. (a) The AG scores corresponding to the phase-shift degree under different frequencies of current injection. This plot can be applied to any situation regardless of model type; we therefore call it a universal plot. In this study we mainly consider the balanced LIF neuron model, but the AG score functions of Hodgkin–Huxley models and Connor–Stevens models can also use this plot. (b–d) The AG values should be larger than the thresholds (red dashed lines) to guarantee that the angles of phase shift stay within acceptable ranges. The three scenarios correspond to the examples mentioned in Section 4: (b) is the case when the phase shift is limited within , while the phase shift is acceptable only at or below 0.1% in (c), and in (d) the phase shift is limited at .

Table 3. Examples of Preferable AG Scores When Building up a Neuron Model

Alternatively, the limitation on the angle of phase shift can be described in other forms, such as a relative proportion within a single cycle, as given in Table 3. In that situation we can still translate the limitation back into the corresponding degree unit. In Figure 5c, when the level of phase shift is constrained within 0.1% per cycle, which equivalently means that , the desired AG score should be at least >160 when the frequency of current injection is Hz, while the AG score needs to be raised to at least 7960 at Hz, and only when the AG score is >15,920 can our criterion be guaranteed at Hz. Now, if Hz and our tolerance for phase shift is restricted to , then, as shown in Figure 5d, the AG score should be >57,315; but when the frequency of current injection changes to Hz, the requirement on the AG score is relaxed and drops to 2874 (Table 3).

The response of a neuron model to an external periodic stimulus is shaped by two factors: the active role f and the passive role AG. In this study, we present the agility function AG for a balanced LIF model with Poisson-distributed background noise in the form of Eq. (20). In Section 3.3 we also illustrated how the agility function can be applied to Hodgkin–Huxley models and Connor–Stevens models, making it possible to compare them with other types of neuron models. Although the values of the gating variables change in response to the membrane potential and can therefore be regarded as functions of time, we can still take the averaged values of these gating variables instead of measuring their real-time values, because the purpose of the agility score is to categorize neurons under different environments, and these models need not be limited to a short time period.

Previous studies reported similar relationships (Brunel et al., 2001; Burkitt et al., 2003) between background noise and phase shift but lacked quantitative descriptions. The novelty of this study is that we present an explicit function as a tool to directly calculate the exact degree of phase shift, and the derivation of the equations presented here is relatively easy and straightforward for further applications. The AG score allows us to normalize various neuron models from different studies and makes them comparable with one another under the same conditions. This result also offers an explanation of how large-scale computational neuronal networks can overcome the input–output delay problem.