Author: John T. Schmidt
Publisher: Academic Press
Release Date: 2019-10-15
ISBN 10: 0128185805
Pages: 472
Self-organizing Neural Maps: From Retina to Tectum describes the underlying processes that determine how retinal fibers self-organize into an orderly visual map. The formation of neural maps is a fundamental organizing concept in neurodevelopment that can shed light on developmental mechanisms and the functions of genes elsewhere. The book presents a summary of research in the retinotectal field, with the ultimate goal of synthesizing how the underlying mechanisms of neural development harmoniously come together. A broad spectrum of neuroscientists and biomedical scientists with differing backgrounds and varied expertise will find this book useful.

- Describes the mechanisms relating to the developmental wiring of the retinotectal system
- Brings together state-of-the-art research on axon guidance and neuronal activity mechanisms in map formation
- Focuses on topographic maps and includes multiple animal models, from fish to mammals
- Explores the molecular guidance and activity-dependent cues involved in neurodevelopment
"In this book, Peter Robin Hiesinger explores historical and contemporary attempts to understand the information needed to make biological and artificial neural networks. Developmental neurobiologists and computer scientists with an interest in artificial intelligence - driven by the promise and resources of biomedical research on the one hand, and by the promise and advances of computer technology on the other - are trying to understand the fundamental principles that guide the generation of an intelligent system. Yet, though researchers in these disciplines share a common interest, their perspectives and approaches are often quite different. The book makes the case that "the information problem" underlies both fields, driving the questions that push their frontiers forward, and aims to encourage cross-disciplinary communication and understanding to help both fields make progress. The questions that challenge researchers in these fields include the following. How does genetic information unfold during the years-long process of human brain development, and can this be a shortcut to creating human-level artificial intelligence? Is the biological brain just messy hardware that can be improved upon by running learning algorithms in computers? Can artificial intelligence bypass the evolutionary programming of "grown" networks? These questions are tightly linked, and answering them requires an understanding of how information unfolds algorithmically to generate functional neural networks. Via a series of closely linked "discussions" (fictional dialogues between researchers in different disciplines) and pedagogical "seminars," the author explores the different challenges facing researchers working on neural networks, their differing perspectives and approaches, and the common ground to be found among those sharing an interest in the development of biological brains and artificially intelligent systems"--
Vols. for 1942- include proceedings of the American Physiological Society.
Publishes original critical reviews of the significant literature and current developments in psychology.
One of the most challenging and fascinating problems in the theory of neural nets is that of asymptotic behavior: how a system behaves as time proceeds. This is of particular relevance to many practical applications. Here we focus on association, generalization, and representation, turning to the last topic first. The introductory chapter, "Global Analysis of Recurrent Neural Networks," by Andreas Herz presents an in-depth analysis of how to construct a Lyapunov function for various types of dynamics and neural coding. It includes a review of recent work with John Hopfield on integrate-and-fire neurons with local interactions. The chapter "Receptive Fields and Maps in the Visual Cortex: Models of Ocular Dominance and Orientation Columns," by Ken Miller, explains how the primary visual cortex may asymptotically gain its specific structure through a self-organization process based on Hebbian learning. His argument has since been shown to generalize readily.
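The Lyapunov-function idea mentioned above can be illustrated with the classic Hopfield energy E = -1/2 Σᵢⱼ wᵢⱼ sᵢ sⱼ: with symmetric weights and zero self-connections, asynchronous sign updates never increase E, so the dynamics must settle into a fixed point. A minimal sketch in Python, using random illustrative weights (an assumption for demonstration, not a model from the book):

```python
import numpy as np

rng = np.random.default_rng(0)

# Symmetric weights with zero diagonal -- the conditions under which
# the Hopfield energy is a Lyapunov function for asynchronous updates.
n = 20
w = rng.normal(size=(n, n))
w = (w + w.T) / 2.0
np.fill_diagonal(w, 0.0)

s = rng.choice([-1, 1], size=n)  # random initial +/-1 state


def energy(state):
    """Hopfield energy E = -1/2 * s^T W s."""
    return -0.5 * state @ w @ state


# Asynchronous updates: pick one unit, set it to the sign of its
# local field. Each such update can only lower (or keep) the energy.
energies = [energy(s)]
for _ in range(200):
    i = rng.integers(n)
    s[i] = 1 if w[i] @ s >= 0 else -1
    energies.append(energy(s))
```

After the loop, `energies` is a non-increasing sequence, which is exactly what makes E a Lyapunov function: asymptotic behavior is confined to the energy minima.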
Hebb's postulate provided a crucial framework for understanding the synaptic alterations underlying learning and memory. Hebb's theory proposed that neurons that fire together also wire together, which provided the logical framework for the strengthening of synapses. Weakening of synapses, however, was addressed only as "not being strengthened"; the active decrease of synaptic strength was introduced later, through the discovery of long-term depression caused by low-frequency stimulation of the presynaptic neuron. In 1994, it was found that the precise relative timing of pre- and postsynaptic spikes determines not only the magnitude but also the direction of synaptic alterations when two neurons are active together. Neurons that fire together may therefore not necessarily wire together if the precise timing of the spikes involved is not tightly correlated. In the subsequent 15 years, spike-timing-dependent plasticity (STDP) has been found in multiple brain regions and in many different species. The size and shape of the time windows in which positive and negative changes can be made vary across brain regions, but the core principle of spike-timing-dependent change remains. A large number of theoretical studies have also been conducted during this period that explore the computational function of this driving principle, and STDP algorithms have become the main learning algorithm when modeling neural networks. This Research Topic will bring together all the key experimental and theoretical research on STDP.
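The timing dependence described above is commonly modeled with an exponential STDP window: pre-before-post spike pairs potentiate the synapse, post-before-pre pairs depress it, and the effect decays with the interval between spikes. A minimal sketch, where the amplitudes and the ~20 ms time constants are illustrative assumptions rather than values from the text:

```python
import math


def stdp_dw(delta_t, a_plus=0.010, a_minus=0.012,
            tau_plus=20.0, tau_minus=20.0):
    """Weight change for a single pre/post spike pair.

    delta_t = t_post - t_pre in milliseconds.
    Positive delta_t (pre fires before post) -> potentiation;
    negative delta_t (post fires before pre) -> depression.
    Amplitudes and time constants are hypothetical placeholders.
    """
    if delta_t > 0:
        return a_plus * math.exp(-delta_t / tau_plus)
    if delta_t < 0:
        return -a_minus * math.exp(delta_t / tau_minus)
    return 0.0
```

For example, a pair 10 ms apart produces a larger change than one 50 ms apart, capturing the "time window" idea: weakly correlated spikes (large |delta_t|) barely move the synapse at all.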
Author: Edward L. Keller, David S. Zee
Release Date: 1986
Pages: 496
The human brain, with its hundred billion or more neurons, is both one of the most complex systems known to man and one of the most important. The last decade has seen an explosion of experimental research on the brain, but little theory of neural networks beyond the study of electrical properties of membranes and small neural circuits. Nonetheless, a number of workers in Japan, the United States, and elsewhere have begun to contribute to a theory which provides techniques of mathematical analysis and computer simulation to explore properties of neural systems containing immense numbers of neurons. Recently, it has been gradually recognized that rather independent studies of the dynamics of pattern recognition, pattern formation, motor control, self-organization, etc., in neural systems do in fact make use of common methods. We find that a "competition and cooperation" type of interaction plays a fundamental role in parallel information processing in the brain. The present volume brings together 23 papers presented at a U.S.-Japan Joint Seminar on "Competition and Cooperation in Neural Nets," which was designed to catalyze better integration of theory and experiment in these areas. It was held in Kyoto, Japan, February 15-19, 1982, under the joint sponsorship of the U.S. National Science Foundation and the Japan Society for the Promotion of Science. Participants included brain theorists, neurophysiologists, mathematicians, computer scientists, and physicists. There are seven papers from the U.S.