Computational Models for Neuroscience: Human Cortical Information Processing

Full Product Details

Author: Robert Hecht-Nielsen, Thomas McKenna
Publisher: Springer London Ltd
Imprint: Springer London Ltd
Edition: Softcover reprint of the original 1st ed. 2003
Dimensions: Width: 15.50cm, Height: 1.70cm, Length: 23.50cm
Weight: 0.498kg
ISBN 13: 9781447111115
ISBN 10: 1447111117
Pages: 299
Publication Date: 10 November 2013
Audience: Professional and scholarly, Professional & Vocational
Format: Paperback
Publisher's Status: Active
Availability: Manufactured on demand

Table of Contents

1 The Neurointeractive Paradigm: Dynamical Mechanics and the Emergence of Higher Cortical Function.- 1.1 Abstract.- 1.2 Introduction.- 1.3 Principles of Cortical Neurointeractivity.- 1.4 Dynamical Mechanics.- 1.5 The Neurointeractive Cycle.- 1.6 Developmental Emergence.- 1.7 Explaining Emergence.- 1.8 References.

2 The Cortical Pyramidal Cell as a Set of Interacting Error Backpropagating Dendrites: Mechanism for Discovering Nature's Order.- 2.1 Abstract.- 2.2 Introduction.- 2.2.1 Defining the Problem.- 2.2.2 How Does the Brain Discover Orderly Relations?.- 2.3 Implementation of the Proposal.- 2.3.1 How Might Error Backpropagation Learning Be Implemented in Dendrites?.- 2.3.2 How Can Dendrites Be Set Up to Teach Each Other?.- 2.3.3 How to Divide Connections Among the Dendrites?.- 2.4 Cortical Minicolumnar Organization and SINBAD Neurons.- 2.5 Associationism.- 2.5.1 SINBAD as an Associationist Theory.- 2.5.2 Countering Nativist Arguments.- 2.6 Acknowledgements.- References.

3 Performance of Intelligent Systems Governed by Internally Generated Goals.- 3.1 Abstract.- 3.2 Introduction.- 3.3 Perception as an Active Process.- 3.4 Nonlinear Dynamics of the Olfactory System.- 3.5 Chaotic Oscillations During Learning Novel Stimuli.- 3.6 Generalization and Consolidation of New Perceptions with Context.- 3.7 The Central Role of the Limbic System.- 3.8 Conclusions.- 3.9 Acknowledgements.- References.

4 A Theory of Thalamocortex.- 4.1 Abstract.- 4.2 Active Neurons.- 4.3 Neuronal Connections within Thalamocortex.- 4.4 Cortical Regions.- 4.5 Feature Attractor Associative Memory Neural Network.- 4.6 Antecedent Support Associative Memory Neural Network.- 4.7 Hierarchical Abstractor Associative Memory Neural Network.- 4.8 Consensus Building.- 4.9 Brain Command Loop.- 4.10 Testing this Theory.- 4.11 Acknowledgements.- Appendix A: Sketch of an Analysis of the Simplified Feature Attractor Associative Memory Neural Network.- Appendix B: Experiments with a Simplified Antecedent Support Associative Memory Neural Network.- Appendix C: An Experiment with Consensus Building.- References.

5 Elementary Principles of Nonlinear Synaptic Transmission.- 5.1 Abstract.- 5.2 Introduction.- 5.3 Frequency-dependent Synaptic Transmission.- 5.4 Nonlinear Synapses Enable Temporal Integration.- 5.5 Temporal Information.- 5.6 Packaging Temporal Information.- 5.7 Size of Temporal Information Packages.- 5.8 Classes of Temporal Information Packages.- 5.9 Emergence of the Population Signal.- 5.10 Recurrent Neural Networks.- 5.11 Combining Temporal Information in Recurrent Networks.- 5.12 Organization of Synaptic Parameters.- 5.13 Learning Dynamics, Learning to Predict.- 5.14 Redistribution of Synaptic Efficacy.- 5.15 Optimizing Synaptic Prediction.- 5.16 A Nested Learning Algorithm.- 5.17 Retrieving Memories from Nonlinear Synapses.- 5.18 Conclusion.- 5.19 Acknowledgements.- Appendix A: Sherrington's Leap.- Appendix B: Functional Significance.- Appendix C: Visual Patch Recordings.- Appendix D: Biophysical Basis of Parameters.- Appendix E: Single Connection, Many Synapses.- Appendix F: The Model.- Appendix G: Synaptic Classes.- Appendix H: Paired Pulses.- Appendix I: Digitization of Synaptic Parameters.- Appendix J: Steady State.- Appendix K: Inhibitory Synapses.- Appendix L: Lack of Boundaries.- Appendix M: Speed of RI Accumulation.- Appendix N: Network Efficiency.- Appendix O: The Binding Problem of the Binding Problem.- References.

6 The Development of Cortical Models to Enable Neural-based Cognitive Architectures.- 6.1 Introduction.- 6.1.1 Computational Neuroscience Paradigms and Predictions.- 6.2 The Challenge of Cognitive Architectures.- 6.2.1 General Cognitive Skills.- 6.2.2 A Survey of Current Cognitive Architectures.- 6.2.3 Assumptions and Limitations of Current Cognitive Architectures.- 6.3 The Prospects for a Neural-based Cognitive Architecture.- 6.3.1 Limitations of Artificial Neural Networks.- 6.3.2 Biological Networks Emerging from Computational Neuroscience: Sensory and Motor Modules.- 6.3.3 Forebrain Systems Supporting Cortical Function.- 6.4 Elements of a General Cortical Model.- 6.4.1 Single Neuron Models or Processor Elements.- 6.4.2 Microcircuitry.- 6.4.3 Dynamic Synaptic Connectivity.- 6.4.4 Ensemble Dynamics and Coding.- 6.4.5 Transient Coherent Structures and Cognitive Dynamics.- 6.5 Promising Models and their Capabilities.- 6.5.1 Biologically Based Cortical Systems.- 6.5.2 A Cortical System Based on Neurobiology, Biological Principles and Mathematical Analysis: Cortronics.- 6.5.3 Connectionist Architectures with Biological Principles: The Convergence of Cognitive Science and Computational Neuroscience.- 6.6 The Challenges of Demonstrating Cognitive Ability.- 6.6.1 Robotics and Autonomous Systems.- 6.7 Co-development Strategies for Automated Systems and Human Performers.- 6.8 Acknowledgements.- References.

7 The Behaving Human Neocortex as a Dynamic Network of Networks.- 7.1 Abstract.- 7.2 Neural Organization Across Scales.- 7.3 Network of Networks (NoN) Model.- 7.3.1 Architecture.- 7.3.2 Model Formulation.- 7.3.3 NoN Properties.- 7.3.4 NoN Contributions.- 7.4 Neurobiological Predictability and Falsifiability.- 7.5 Implications for Neuroengineering.- 7.6 Concluding Remarks.- 7.7 Acknowledgements.- References.

8 Towards Global Principles of Brain Processing.- 8.1 Abstract.- 8.2 Introduction.- 8.3 What Could Brain Principles Look Like?.- 8.4 Structural Modeling.- 8.5 Static Activation Study Results.- 8.6 The Motion After-Effect (MAE).- 8.7 The Three-Stage Model of Consciousness.- 8.8 The CODAM Model of Consciousness.- 8.9 Principles of the Global Brain.- 8.10 The Thinking Brain.- 8.11 Discussion.- 8.12 Acknowledgement.- References.

9 The Neural Networks for Language in the Brain: Creating LAD.- 9.1 Abstract.- 9.2 Introduction.- 9.3 The ACTION Net Model of TSSG.- 9.4 Phrase Structure Analyzers.- 9.5 Generativity of the Adjectival Phrase Analyzer.- 9.6 Complexity of Phrase Structure Analysis.- 9.7 Future Directions in the Construction of LAD.- 9.8 Conclusions.- References.

10 Cortical Belief Networks.- 10.1 Abstract.- 10.2 Introduction.- 10.3 An Example.- 10.4 Representing Distributions in Populations.- 10.5 Basis Function Representations.- 10.6 Generative Representations.- 10.7 Standard Bayesian Approach.- 10.8 Distributional Population Coding.- 10.9 Applying Distributional Population Coding.- 10.9.1 Population Analysis.- 10.9.2 Decoding Transparent Motion.- 10.9.3 Decision Noise.- 10.9.4 Lateral Interactions.- 10.10 Cortical Belief Network.- 10.11 Discussion.- 10.12 Acknowledgements.- References.
