Daniel Gardner, Ph.D.
- Professor of Physiology and Biophysics in Neuroscience (secondary appointment)
- Professor of Physiology and Biophysics in Neurology (secondary appointment)
- Head, Laboratory of Neuroinformatics
1300 York Avenue, Room D-404
New York, NY 10065
- Electrophysiology (Patch-clamp; whole cell; multi-electrode arrays)
- Machine Learning, AI, Deep Learning
- Mathematical Modeling and Simulations
Neuromorphic Neural Networks
One of the most exciting unsolved problems of biomedical science is how the cellular and network properties of individual neurons, their connectivity, and the information they convey give rise to the complex behavior of the brain. In 1993, The Neurobiology of Neural Networks (Gardner, D., MIT Press) presented parallels between artificial neural network (ANN) models and neuronal circuitry and offered neuromorphic implementations or extensions. In the 30 years since, advances in both neurobiology and ANNs have extended these parallels and possibilities, and my lab is now revisiting and extending our earlier work. Current computationally capable ANNs include not only classic perceptron-derived feed-forward parvo-layered nets, but also magno-layered convolutional ‘deep learning’ ANNs and recurrent nets including LSTMs. ANNs evolve function by tuning synaptic weights across layers, including hidden layers; we find parallel, hidden-layer-like responses in several real nervous systems. We are also assembling, from contemporary synaptic physiology, multiple mechanisms for plastic alteration of synaptic strength compatible with useful ANN designs. Most recently, I was invited to submit a Perspective for the Journal of Computational Neuroscience, in which I pose these related questions: What do brains compute, and how do they do it? And, has evolution conserved a small canonical set of computations performed by neural circuitry? The Perspective further presents the arguments for (and against) canonical computation, urges neurophysiologists and neuroanatomists to add the question to ongoing research, and proposes community efforts to widen the scope. I further establish Encubed: the Neuromorphic Neural Networks proposal as one such effort.
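The weight tuning described above can be sketched in a few lines. This is a generic toy network, not a model from the lab: layer sizes, the learning rate, and the XOR task are illustrative assumptions only, chosen to show hidden-layer weights evolving function.

```python
import numpy as np

# Minimal two-layer feed-forward net (one hidden layer) trained by
# gradient descent on XOR. Illustrative only: sizes, learning rate,
# and task are arbitrary choices, not drawn from any published model.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 4))   # input -> hidden synaptic weights
b1 = np.zeros(4)
W2 = rng.normal(0, 1, (4, 1))   # hidden -> output synaptic weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    h = sigmoid(X @ W1 + b1)      # hidden-layer responses
    out = sigmoid(h @ W2 + b2)    # network output
    # Backpropagate the error to tune weights in *both* layers,
    # including the hidden layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= h.T @ d_out
    b2 -= d_out.sum(axis=0)
    W1 -= X.T @ d_h
    b1 -= d_h.sum(axis=0)

print(np.round(out.ravel(), 2))   # typically approaches [0, 1, 1, 0]
```

The point of the sketch is the middle of the loop: the error signal reaches the hidden layer only through the output weights, which is the property that makes hidden-layer-like responses in real circuits interesting to compare.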
This proposed communication and collaboration by neuroscientists and ANN developers has great potential both for new and neuromorphic ANN designs and also for analyses of the computations performed by neural networks formed by real neurons.
Neural coding—the representation and processing of information with spike trains—is fundamental to sensation, perception, decision, and action, and can become dysfunctional in disease states. However, the features of neuronal activity that convey and manipulate information are not yet known. Our Spike Train Analysis Toolkit (STAToolkit) provides a suite of information-theoretic analytic algorithms that enable asking specific questions about how real spike train data—recorded from any of a large range of preparations and protocols—represent or code specific features of a stimulus or motor action. The implementation behaves like a typical Matlab toolbox, but the underlying computations are coded in C and optimized for efficiency.
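As a rough illustration of the kind of question such information-theoretic analysis answers—though not STAToolkit's actual API, algorithms, or bias-correction machinery—a minimal plug-in estimate of the stimulus information carried by spike counts might look like this; the data are synthetic and the rates are invented:

```python
import numpy as np
from collections import Counter

def plugin_entropy(samples):
    """Plug-in (maximum-likelihood) entropy estimate, in bits."""
    counts = np.array(list(Counter(samples).values()), dtype=float)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

rng = np.random.default_rng(1)
n_trials = 2000
stimulus = rng.integers(0, 2, n_trials)   # two stimulus classes
# Synthetic responses: spike count in a window, with a rate that
# depends on the stimulus (2 vs. 8 expected spikes -- invented values).
spike_counts = rng.poisson(np.where(stimulus == 0, 2.0, 8.0))

h_response = plugin_entropy(spike_counts.tolist())
h_conditional = sum(
    (stimulus == s).mean() * plugin_entropy(spike_counts[stimulus == s].tolist())
    for s in (0, 1)
)
info_bits = h_response - h_conditional    # I(S; R) = H(R) - H(R|S)
print(f"~{info_bits:.2f} bits of stimulus information per trial")
```

Real spike-train analysis must also correct the bias of this plug-in estimator and consider richer code words than a single count (e.g., binned spike timing), which is precisely what the toolkit's suite of estimators provides.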
Because I believe strongly that informational, computational, or theoretical biology should never be divorced from experimental work, this thrust also continues my laboratory’s long-standing interest in neural networks, their neuronal and synaptic components, and their emergent properties. Using techniques I developed and introduced for simultaneous voltage-clamping of multiple interconnected neurons, we will analyze the information-carrying and -processing capabilities of a narrow channel / hidden layer consisting of two neurons (B4 and B5) with identical structural connectivity but with different neuromodulatory co-transmitters, signals, and potential for differential programmability. This network also offers evidence for postsynaptic, and therefore retrosynaptic, specification of transmitter release. This model system and testbed bridges the gap between the single neurons characteristic of invertebrates and the massively parallel columns and modules found in mammalian brains. Related experiments may test aspects of the fire-together, wire-together hypothesis. This work descends as well from studies of interneuronal organization begun 55 years ago in the laboratory of Eric R. Kandel.
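The fire-together, wire-together hypothesis mentioned above can be sketched as a rate-based Hebbian update with weight decay. The toy circuit, firing probabilities, and parameters here are illustrative assumptions only, not values from the B4/B5 experiments:

```python
import numpy as np

# Minimal Hebbian plasticity sketch: synapses whose presynaptic input
# co-fires with the postsynaptic cell are strengthened; weight decay
# keeps weights bounded. All parameters are illustrative assumptions.
rng = np.random.default_rng(2)
n_pre, eta, decay = 5, 0.1, 0.01
w = np.zeros(n_pre)

for _ in range(200):
    # Inputs 0-1 fire often; inputs 2-4 rarely (invented probabilities).
    pre = rng.random(n_pre) < np.array([0.9, 0.9, 0.1, 0.1, 0.1])
    post = pre[:2].any()      # postsynaptic cell driven by inputs 0-1
    # Correlated pre/post activity strengthens a synapse; decay erodes it.
    w += eta * pre * post - decay * w

print(np.round(w, 2))
```

After training, the synapses from the inputs that reliably co-fire with the cell (0 and 1) dominate, while the weakly correlated inputs decay toward a small steady state: correlation, not mere activity, sets synaptic strength.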
- Gardner, D. Where are the cores in this thing?…and what are they computing? (with apologies to Larry Abbott). J. Computational Neurosci. 50: 133-138, 2022. https://doi.org/10.1007/s10827-021-00809-1
- Gardner, E.P. and Gardner, D. Sensory Coding. In: Principles of Neural Science, 6th edition. Edited by E.R. Kandel, J.D. Koester, S.H. Mack, and S.A. Siegelbaum. New York: McGraw-Hill, 2021, pp. 385-407.
- Gardner, D. and Gardner, E.P. Conserved canonical circuits & computations, continued. Soc. Neurosci. Abstr. 46: 2021.
- Gardner D. Do biological neural networks perform identifiable and verifiable canonical computations? Soc. Neurosci. Abstr. 45: Program No. 728.01, 2019.
- Gardner, D. Neuromorphic neural networks: converging globally, acting locally. Soc. Neurosci. Abstr. 44: Program No. 339.01, 2018.
- Gardner, D. and Gardner, E.P. Neural network hidden layer units may model cortical neuron responses. Soc. Neurosci. Abstr. 42: Program No. 756.04, 2016.
- Gardner, D., Victor, J.D., and Gardner, E.P. Neuroanalysis.org: Information-theoretic open-source methods to analyze somatosensory coding. HHMI Janelia Farm conference, Mammalian Circuits Underlying Touch Sensation. September 22-25, 2013.
- Goldberg, D.H., Victor, J.D., Gardner, E.P., and Gardner, D. Spike Train Analysis Toolkit: Enabling wider application of information-theoretic techniques to neurophysiology. Neuroinformatics 7: 165-178, 2009; PMC2818590.
- Gardner, D., Akil, H., Ascoli, G.A., Bowden, D.M., Bug, W., Donohue, D.E., Goldberg, D.H., Grafstein, B., Grethe, J.S., Gupta, A., Halavi, M., Kennedy, D.N., Marenco, L., Martone, M.E., Miller, P.L., Müller, H.-M., Robert, A., Shepherd, G.M., Sternberg, P.W., Van Essen, D.C., and Williams, R.W. The Neuroscience Information Framework: A data and knowledge environment for neuroscience. Neuroinformatics 6(3): 149–160, 2008; PMC2661130.
- Gardner, D., Goldberg, D.H., Grafstein, B., Robert, A., and Gardner, E.P. Terminology for neuroscience data discovery: Multi-tree syntax and investigator-derived semantics. Neuroinformatics 6(3): 161–174, 2008; PMC2663521.
- Petilla Interneuron Nomenclature Group (39 authors including D. Gardner). Petilla terminology: nomenclature of features of GABAergic interneurons of the cerebral cortex. Nature Rev. Neurosci. 9: 557-568, 2008.
- Victor, J.D., Goldberg, D., and Gardner, D. Dynamic programming algorithms for comparing multineuronal spike trains via cost-based metrics and alignments. J. Neurosci. Meth. 161: 351–360, 2007; PMC1995551.
- Gardner, D., Abato, M., Knuth, K.H., and Robert, A. Neuroinformatics for neurophysiology: The role, design and use of databases. In: Databasing the Brain: From Data to Knowledge (Neuroinformatics). edited by S.H. Koslow and S. Subramaniam. New York: John Wiley & Sons, Inc., pp 47-67, 2005.
- Gardner, D. Neurodatabase.org: networking the microelectrode. Nature Neurosci. 7(5): 486-487, 2004.
- Gardner, D., Toga, A.W., Ascoli, G.A., Beatty, J., Brinkley, J.F., Dale, A.M., Fox, P.T., Gardner, E.P., George, J.S., Goddard, N., Harris, K.M., Herskovits, E.H., Hines, M., Jacobs, G.A., Jacobs, R.E., Jones, E.G., Kennedy, D.N., Kimberg, D.Y., Mazziotta, J.C., Miller, P., Mori, S., Mountain, D.C., Reiss, A.L., Rosen, G.D., Rottenberg, D.A., Shepherd, G.M., Smalheiser, N.R., Smith, K.P., Strachan, T., Van Essen, D.C., Williams, R.W., and Wong, S.T.C. Towards effective and rewarding data sharing. Neuroinformatics1: 289-295, 2003.
- Gardner, D., Abato, M., Knuth, K.H., DeBellis, R., and Erde, S.M. Dynamic publication model for neurophysiology databases. Phil. Trans. R. Soc. Lond. B 356: 1229–1247, 2001; PMC1088512.
- Gardner, D. (editor) The Neurobiology of Neural Networks. Cambridge: MIT Press/Bradford Books, 1993.
- Gardner, D. Presynaptic transmitter release is specified by postsynaptic neurons of Aplysia buccal ganglia. J. Neurophysiol. 66: 2150-2154, 1991.
- Gardner, D. Towards neural neural networks. In: The Neurobiology of Neural Networks. edited by D. Gardner. Cambridge: MIT Press/Bradford Books, pp. 1 – 11, 1993.
- Gardner, D. and Stevens, C.F. Rate-limiting step of inhibitory post-synaptic current decay in Aplysia buccal ganglia. J. Physiol. (London) 304: 145-164, 1980; PMC1282922.
- Gardner, D. and Kandel, E.R. Diphasic post-synaptic potential: a chemical synapse capable of mediating conjoint excitation and inhibition. Science 176: 675-678, 1972.
- Gardner, D. Bilateral symmetry and interneuronal organization in the buccal ganglia of Aplysia. Science 173: 550-553, 1971.