#Summary comments
#Play with the [time, mind]-bending perspective yourself
#Ratio of actual to semi-log detrended data : [advantages, disadvantages]
#Future related work
#Comparison of [TradingView, Yahoo finance] data
#[data, software] cart [description, links]
#Summary comments
#Some of Wilson's key themes
#What do I think of the book?
#The core theme : Quantum Psychology
#Rethinking neural networks
#Section I. The dendritic microprocess
#Section II. Quantum neurodynamics
#Section III. Nanotechnology
#Section IV. Perceptual processing
#Other themes
#Delightful finds : ideas that were [new, different] to me
#[Ideas, comments] that echo some of my own feelings
#Strangely absent ideas
#Stripping away all intelligence
#Hypocrisy
#Old dogs and new tricks
#Weaker points
#I can't comment, as I have no knowledge
#Nowhere near as radical as I
#Far beyond the bounds of Wilson's book
#Human [psychology, sociology] - better concepts from technical market analysis?
#Astrology? You must be joking!!
#Only details matter, for the rest I yap
#Summary
#Key files
#References
./0.0 31Mar2022 full presentation.mp3
#Initial setup
#So, what does the user have to do to [get, adjust] output?
#What CAN the user change?
#Initial observations
#Initial questions
#Adapt my PineScript program to other [symbol, application]s
#PineScript, SP500USD chart - multi-fractal on-chart comments
#Special comments
#83year detrended SP500
#Oncolytics - possible bull wedge formation?
#Regular [1,6] month market views
#International market indexes [SP500, NASDAQ, SHCOMP, 10y T-bill]
#Interest rates, currency [DXY,CNYUSD]
#crypto [BTC,ETH,COIN], 10y T-bill
#USOIL, Canadian [XEG ETF,CNQ,IMO,SU], 10y T-bill
#pharma [CRSP,NTLA,BEAM,BLUE,EDIT,ONC]
#Key [results, comments]
#How can the Great Pricing Waves be correlated with
#Ratio of actual to semi-log detrended data : [advantages, disadvantages]
#Future related work
#Comparison of [TradingView, Yahoo finance] data
#[data, software] cart [description, links]
#Key [results, comments]
#Play with the [time, mind]-bending perspective yourself
#Colton, Bromley Jul2018 mixed oxide thorium based fuels
#Bromley, Alexander Dec2018 thorium and depleted uranium in sub-critical GC-PTR
#Colton, Bromley Mar2021 PT Heavy Water Reactor to Destroy Americium and Curium
#Wojtaszek, Bromley Jul2022 Uranium-Based Oxy-Carbide in Compact HTGC Reactors
#Wojtaszek, Bromley Aug2023 Plutonium-Thorium Fuels with 7LiH Moderator
#Introduction
#Detailed [description, specification]
#what is currently ignored by callerID-SNNs?
#mRNA program code causes neurons to fire?
#random thoughts
#Is there any biological plausibility?
#Future objectives
#tie-in with Grossberg's 2021 'Conscious Mind, Resonant Brain'
#fractal [dendrite, axon]s
#Links to my related work
#Introduction
#Definitions of consciousness
#Sentience
#Definitions of sentience
#Philosophy and sentience
#Alleged sentience of artificial intelligence
#'Multiple Conflicting Hypotheses' for consciousness:
#Stephen Grossberg 2021 Conscious Mind, Resonant Brain
#John Taylor 2006 The Mind: A User's Manual
#Brain regions, neural networks, resonance
#Llinas 1998 Recurrent thalamo-cortical resonance
#Min 2010 Thalamic reticular networking
#Modern [philosophical, logical] models of consciousness
#Dehaene–Changeux 1986 global neuronal workspace model
#Bernard Baars 1988 global workspace model
#Crick-Koch 1990 Towards a neurobiological theory of consciousness
#Neural correlates of consciousness
#Giulio Tononi 2004 Integrated information theory
#Multiple drafts
#Functionalism
#Undefined neuron structures, Universal Function Approximators, Magic science
#Electromagnetic theories of consciousness
#Quantum consciousness
#Questions: Grossberg's c-ART, Transformer NNs, and consciousness?
#Grossberg: why ART is relevant to consciousness in Transformer NNs
#A workable [definition, context, model] for consciousness
#non-[Grossberg, TrNN] topics
#Introduction
#Grossberg 2021: cellular evolution and top-down-bottom-up mechanisms
#Howell 2011: the need for machine consciousness
#Introduction
#Blake Lemoine: Is LaMDA Sentient?
#Is LaMDA Sentient? — an Interview
#We're All Different and That's Okay
#What is LaMDA and What Does it Want?
#What is sentience and why does it matter?
#Terry Sejnowski: Is machine intelligence debatable?
#Gary Marcus: Current LLMs do NOT possess 'Artificial General Intelligence' (AGI)
#Do these comments have anything to do with consciousness?
#Tae Kim: AI Chatbots Keep Making Up Facts. Nvidia Has an Answer
#Proceedings Table of Contents
#Eccles keynote, Pribram post-conference viewpoint
#Section I. The dendritic microprocess
#Section II. Quantum neurodynamics
#Section III. Nanoneurology
#Section IV. Perceptual Processing
#Afterword, list of authors
#Quotes
#Historical thinking about quantum [neurophysiology, consciousness]
#Howell's questions about 1993 conference proceedings
#Introduction: what does quantum physics add to our understanding of consciousness?
#Historical thinking about quantum [neurophysiology, consciousness]
#Quantum [concept, approach, context]s
#Quantum concepts
#Quantum approaches to applications
#Quantum contexts for consciousness
#Introduction
#Links for doing the work
#Going further - themes, videos, presentations, courses
#Captioned images - remaining problems
#Introduction
#principles, architecture, function, process
#Principles, Principia
#equations of the [brain, mind]
#modules & modal architectures ([micro, macro]-circuits)
#CogEM Cognitive-Emotional-Motor model
#[intra, inter]-cellular process
#informational noise suppression
#[stable, robust, adaptive] learning
#on-center off-surround
#top-down bottom-up
#cooperative-competitive
#complementary computing
#laminar computing
#computing with cellular patterns
#family of ART ... base, linguistic
#ART - Adaptive Resonance Theory
#LAMINART vision, speech, cognition
#[, n]START learning & memory consolidation
#ARTMAP associate learned categories across ART networks
#[, c]ARTWORD word perception cycle
#LIST PARSE [linguistic, spatial, motor] working memory
#family of ART ... visual, auditory
#[, d, p]ARTSCAN attentional shroud, binocular rivalry
#ARTSCENE classification of scenic properties
#SMART synchronous matching ART, mismatch triggering
#ARTPHONE [gain control, working] memory
#ARTSTREAM auditory streaming, SPINET sound spectra
#consciousness
#What is consciousness?
#conscious vs non-conscious
#Grossberg: other consciousness theories
#random fun themes
#art (painting etc)
#biology, evolution, paleontology
#brain disorders and disease
#hippocampus IS a cognitive map!
#auditory continuity illusion
#see-reach to hear-speak
#neurotransmitter
#learning and development
#Why are there hexagonal grid cell receptive fields?
#Strange themes
#AI, machine intelligence, etc
#Brain is NOT Bayesian?
#brain rhythms & Schumann resonances
#Explainable AI
#logic vs connectionist
#Successful [datamodelling, applications] showing effectiveness of cART etc
#[bio, neuro, psycho]logy data
#[software, engineering, other] applications
#Navigation: [menu, link, directory]s
#Theme webPage generation by bash script
#Notation for [chapter, section, figure, table, index, note]s
#incorporate reader questions into theme webPages
#Home TrNN&ART Status:
#Captioned images [image, link] problems :
#no permission clause
#caption cut off :
#image-caption is way too tall :
#Go through image number sequence for missing images
#missing 'primary links'
#Questions: Grossberg's c-ART, Transformer NNs, and consciousness?
#ART assess theories of consciousness
#ART augmentation of other research
#[definitions, models] of consciousness
#For whom the bell tolls
#Grossberg part of webSite
#Grossberg's ART - Adaptive Resonance Theory
#Grossberg's cellular patterns computing
#Grossberg's complementary computing
#Grossberg's Consciousness: neural [architecture, function, process, percept, learn, etc]
#Grossberg's cooperative-competitive
#Grossberg's [core, fun, strange] concepts
#Grossberg's equations of the mind
#Grossberg's laminar
#Grossberg's list of [chapter, section]s
#Grossberg's list of [figure, table]s
#Grossberg's list of index
#Grossberg's modal architectures
#Grossberg's modules (microcircuits)
#Grossberg's overview
#Grossberg's paleontology
#Grossberg's quoted text
#Grossberg's what is consciousness
#TrNNs have incipient consciousness
#Introduction
#Let the machines speak
#machine consciousness, the need
#non-conscious themes notes
#opinions - Blake Lemoine, others
#Pribram 1993 quantum fields and consciousness proceedings
#references - Grossberg
#references - non-Grossberg
#reader Howell notes
#Taylor's consciousness
#TrNN controls need consciousness
#TrNNs&ART theme
#TrNNs augment by cART
#TrNNs have incipient consciousness
#[use, modification]s of c-ART
#What is consciousness: from historical to Grossberg
#why is cART unknown
#Chapter 10 - Laminar computing by cerebral cortex
#Conscious mind, resonant brain: Table of Contents
#Conscious mind, resonant brain: sub-section list
#Chapter 6 - Conscious seeing and invariant recognition
#Grossberg's comments for some well-known consciousness theories
#Chapter 12 - From seeing and reaching to hearing and speaking
#Introduction
#Conscious mind, resonant brain: Table of Contents
#Conscious mind, resonant brain: sub-section list
#Instructions
#Preface
#Chapter 1 - Overview
#Chapter 2 - How a brain makes a mind
#Chapter 3 - How a brain sees: Constructing reality
#Chapter 4 - How a brain sees: Neural mechanisms
#Chapter 5 - Learning to attend, recognize, and predict the world
#Chapter 6 - Conscious seeing and invariant recognition
#Chapter 7 - How do we see a changing world?
#Chapter 8 - How we see and recognize object motion
#Chapter 9 - Target tracking, navigation, and decision-making
#Chapter 10 - Laminar computing by cerebral cortex
#Chapter 11 - How we see the world in depth
#Chapter 12 - From seeing and reaching to hearing and speaking
#Chapter 13 - From knowing to feeling
#Chapter 14 - How prefrontal cortex works
#Chapter 15 - Adaptively timed learning
#Chapter 16 - Learning maps to navigate space
#Chapter 17 - A universal development code
#Introduction
#Grossberg: why ART is relevant to consciousness in Transformer NNs
#Grossberg's [non-linear DEs, CogEM, CLEARS, ART, LAMINART, cART] models
#1958-59 ?among the world's first? systems of non-linear differential equations for NNs
#?date? CogEM Cognitive-Emotional model
#?date? CLEARS [Cognition, Learning, Expectation, Attention, Resonance, Synchrony]
#1976 ART Adaptive Resonance Theory
#?date? LAMINART Laminar computing ART
#?date? cART consciousness ART
#The underlying basis in [bio, psycho]logical data
#Credibility from non-[bio, psycho]logical applications of Grossberg's ART
#Key [results, comments]
#Play with the [time, mind]-bending perspective yourself
#Grossberg OR[anticipated, predicted, unified] the [experimental result, model]s
#Grossberg's other comments
#Grossberg's comments for some well-known consciousness theories
#Preface
#[Turing, von Neumann] machines of gene mechanisms?
#[intra, extra]-cellular processes, [neuron, astrocyte]s
#Cellular mechanisms for [protein, information]
#[DNA, ribosome, etc] addresses
#[2, 4]-value logic for [protein, information]
#ribosomes
#microtubules: platforms for [transport, info processing]?
#same mRNA code, different mechanism: ergo diff [program, protein]?
#missing sub-sections for genetic machinery
#DNA transcription to mRNA
#Computations with multiple RNA strands
#NOT: simple function of a single RNA input strand, one output RNA
#AND: simple function of two input RNA strands, one output RNA
#GoTo
#IF: control branching to an [operation, address]
#MindCode [learn, evolve]: Grossberg's 'Conscious Mind, Resonant Brain'
#Consciousness: Grossberg's tie-in of [, non] conscious processes
#Biological context
#[pro, eu]karyotes
#Lamarckian versus Mendelian heredity, spiking MindCode as special case
#Astrocytes
#Ontogeny: the growth of bioFirmWare
#Biological similes
#Endocrine system
#bone [shell, fibre]s
#Building MindCode from [bio, psycho]logical data
#[proteomic, neuroinfo, MindCode]*[protein, program]*bioFirmWare
#MindCode [identify, library, model, predict, change, heal]
#MindCode applications to keep in mind
#Autonomous systems, networks, robots
#Missing concepts - among hundreds
#Shapiro & Benenson: example stream from early MindCode 2005 on...
#Links to my related work
#[lists, outline, concept]s to keep in mind from 2020
#Computations with multiple RNA strands
#[DNA, ribosome, etc] addresses
/home/bill/web/Sun Charvatova/Radioisotopes/Howell Aug08 - Charvatova's hypothesis & Isotopic solar proxies.pdf
/home/bill/web/Sun Charvatova/Charvatova related files/Howell - solar inertial motion - NASA-JPL versus Charvatova.pdf
#cheat
#climate
#generation
#nuclear
Thoughts/Favourite sayings & Crazy Thoughts.odt
#Introduction
#Andrew Hall's work
#Surface Conductive Faults Thunderblog
#Arc Blast - Part One Thunderblog
#Arc Blast - Part Two Thunderblog
#Arc Blast - Part Three Thunderblog
#The Monocline Thunderblog
#The Maars of Pinacate, Part One Thunderblog
#The Maars of Pinacate, Part Two Thunderblog
#Nature's Electrode Thunderblog
#The Summer Thermopile Thunderblog
#Tornado - The Electric Model Thunderblog
#Lightning-Scarred Earth, Part 1 Thunderblog
#Lightning-Scarred Earth, Part 2 Thunderblog
#Arches National Monument, Sputtering Canyons Part 1 Thunderblog
#Colorado Plateau, Sputtering Canyons Part 2 Thunderblog
#Secondary effects from electrical deposition, Sputtering Canyons series Part 3 Thunderblog
#Eye of the Storm, Part 1 Thunderblog
#The Electric Winds of Jupiter, Eye of the Storm, Part 2 Thunderblog
#Some storms suck and others blow, Eye of the Storm, Part 3 Thunderblog
#Wind Map, Eye of the Storm, Part 4 Thunderblog
#Large Scale Wind Structures, Eye of the Storm, Part 5 Thunderblog
#Jupiter's Great Red Spot, Eye of the Storm, Part 6 Thunderblog
#Electric Earth & the Cosmic Dragon, Eye of the Storm, Part 7, 1 of 2 Thunderblog and video
#Electric Earth & the Cosmic Dragon, Eye of the Storm, Part 7, 2 of 2 Thunderblog and video
#Proving the Passage of the Dragon, Eye of the Storm, Part 8 Thunderblog and video
#San Andreas Fault - A Dragon in Action? Eye of the Storm, Part 9, 1 of 2 Thunderblog and video
#Ground Currents and Subsurface Birkeland Currents - How the Earth Thinks? Eye of the Storm, Part 9, 2 of 2 Thunderblog and video
#Reverse Engineering the Earth, Eye of the Storm, Part 10 Thunderblog and video
#Easter Egg Hunt video
#The Cross from the Laramie Mountains, Part 2
#The Shocking Truth, Thunderblog and video
#Cracks in Theory
#Electricity in Ancient Egypt, video 26Aug2023
#Other electric geology concepts
#1 ukrinform.net news log.html
news items except [ukrinform, kyivindependent].html
#07Mar2022 18:05 kyivindependent.com: Blinken - Ukraine's using defense support 'effectively' against Russian aggression
#02Mar2022 Roosevelt treason - atomic bomb [top-secret designs, material, etc] direct to Russia 1943
#20Jan2022 emto Geoff Cowper - my thoughts on WWI & Ukraine now
#20Jan2022 Dan's Ukraine question, and Korotayev prediction of possible state collapse in Saudi Arabia
#Howell's TradingView chart - USOIL snakes, ladders, Tchaikovsky
#Maps of Ukraine [war [forecast, battles], losses, oil, gas pipelines, minerals]
#Stray [thought, quote, history]s
#15Mar2022 Howell: Quick summary, still more questions
#05Mar2022 Scenarios by Others
#04Mar2022 Howell: Russia preserves some infrastructure? No Ukraine scorched Earth?
#02Mar2022 Howell: more questions
#28Feb2022 Howell: new OPEC++, aligned with [China, Russia] economic priorities?
#28Feb2022 Russian [plan, action]s, my naive reflections
#26Feb2022 Howell - AM I A MORON OR WHAT?
#Action Plan: What would I do if I was Putin?
#Action Plan: What would I do if I was the Ukraine?
#Summary #Introduction #Russia #Ukraine #Rhetoric not war #Immediate borderlands : Poland, Romania, Bulgaria #Scandanavia, Baltic #Turkey, Germany, France, Italy, rest of EU #USA #Players, puppets, rhetoric #Vladimir Putin #[Perspectives, connections] #Strategies for hunting lions #Strategies for preying on democracies and weak [dictatorship, monarchy]s #Always-shifting alliances #World War II (WWII) versus 2022 context #Calculus of War #General [limitation, constraint]s #personal contacts #References #05Mar2022 Scenarios by Others #Maps of Ukraine [war [forecast, battles], losses, oil, gas pipelines, minerals] ukrinform.net news log.html Mearsheimer 2015 Eastward expansion of NATO stages.png #Russia #Conrad Black: American destruction of the British Empire #Germany's hesitance : Petroleum then, natural gas now #[Nuclear, wind, solar] - energy and war #Calculus of War #MV Bodnarescu - hard-nosed engineer (Romanian German) #Andreas Mayer (Swiss German) and Joe Stachulak (Polish) - post-war generation #Neil Howell (my father) and I, 'the two fools who rushed in' #Roger Barlizan - Ukrainian and British royalist #Ibn Khaldun #Arnold Toynbee - Challege and response #US ages : heroic (revoln->Korea), [creative, effeminate, arrogant], machine #Ray Dalio: Changing world order ukrinform.net news log.html news items except [ukrinform, kyivindependent].html #1 ukrinform.net news log.html news items except [ukrinform, kyivindependent].html #07Mar2022 MW/AP: War in Ukraine: Zelensky government accuses Putin of resorting to ‘medieval siege’ tactics in Russia’s ongoing invasion #07Mar2022 MW: How China’s Currency Could Come Out a Big Winner in the Ukraine War #07Mar2022 Rodney's Take: 1920 Jones Act, shipped US goods too expensive for Americans #07Mar2022 PSI, opindia.com: Ukraine - Russian govt had raised 'bioweapons' alarm #04Mar2022 MktWatch, Opinion: Russia’s invasion of Ukraine: 4 ways this war could end #04Mar2022 MW/AP: Civilian drone hobbyists in Ukraine join the fight 
against Russia #04Mar2022 Rodney Johnson: Saudia Arabia's dilemma #02Mar2022 Rodney Johnson's Take - Guerillas in the Machine #26Feb2022 nationalreview.com: Why the Russians Are Struggling #02Mar2022 msn.com: Will Russia invade Moldova? #27Feb2022 Hugo talks - Skeptical view of war and media posing #26Feb2022 Dobler: 29% of the West’s Wheat Supply is gone – Ice Age Farmer #23Feb2022 Dobler: Russia’s Ballistic Missile deployments along Ukraine’s Eastern Border #07Feb2022 Russia has 70% of military capacity in place for full-scale invasion of Ukraine #24Jan2022 MW: Fiona Hill - Putin wants to evict the United States from Europe #28Jan2022 MW/AP: U.S. has put some 8,500 troops on higher alert for potential deployment #28Jan2022 MW/AP: Russia says it won’t start a war as Ukraine tensions mount #25Jan2022 TradingView: Bitcoin and the Ukraine, Russia ukrinform.net news log.html news items except [ukrinform, kyivindependent].html /home/bill/web /home/bill/web /home/bill/web /home/bill/web/Sun Charvatova/Charvátová, Hejda Aug08 - A possible role of the solar inertial motion in climatic changes.pdf /home/bill/web /home/bill/web /home/bill/web /home/bill/web/Sun Charvatova/Charvatova - list of publications.doc /home/bill/web /home/bill/web /home/bill/web /home/bill/web/Sun Charvatova/Radioisotopes/Howell Aug08 - Charvatova's hypothesis & Isotopic solar proxies.pdf /home/bill/web /home/bill/web /home/bill/web /home/bill/web/Sun Charvatova/Radioisotopes/Howell Aug08 - Charvatova's hypothesis & Isotopic solar proxies - graphs of time foldeding & bending.pdf /home/bill/web /home/bill/web /home/bill/web /home/bill/web/Sun Charvatova/Radioisotopes/Howell - history timelines and radioisotopes.jpg /home/bill/web /home/bill/web /home/bill/web /home/bill/web/Sun Charvatova/Radioisotopes/ /home/bill/web /home/bill/web /home/bill/web /home/bill/web/Sun Charvatova/Charvatova related files/Howell - solar inertial motion - NASA-JPL versus Charvatova.pdf /home/bill/web /home/bill/web 
/home/bill/web /home/bill/web/Sun Charvatova/Verification/ /home/bill/web /home/bill/web /home/bill/web /home/bill/web/Sun Charvatova/Howell - Solar presentations 13Oct06.pdf #Key [results, comments] #Play with the [time, mind]-bending perspective yourself #Ratio of actual to semi-log detrended data : [advantages, disadvantages] #Future related work #Comparison of [TradingView, Yahoo finance] data #[data, software] cart [description, links] #Key [results, comments] #Play with the [time, mind]-bending perspective yourself #Ratio of actual to semi-log detrended data : [advantages, disadvantages] #Future related work #Comparison of [TradingView, Yahoo finance] data #[data, software] cart [description, links] Howell's page #Table of Contents #21Feb2022 Do [influenza, covax] deaths account for MOST of the official reports of covid deaths? #21Feb2022 Do covax deaths account for ~50% of the official reports of covid deaths? #19Feb2022 Are many covid [case, hospitalization, death]s due to influenza and NOT covid? #14Jan2022 If I was Fauci - how might I cover my ass? Fun list to watch for #20Dec2020 update, Youyang Gu's comments on closing his forecasting activity #New corona virus cases/day/population for selected countries #Daily cases charts for countries, by region #Spreadsheet for generating the charts #Jumping off the cliff and into conclusions #COVID-19 data and models #Corona virus models #Cosmic/Galactic rays at historical high in summer 2019 #Is the cure worse than the disease? #Ben Davidson 07Apr2020 Space Weather & Pandemics #Ben Davidson 23Apr2020 Millions Are Being Murdered, The Killer Cure #David Spielhalter Risk of dying if you get coronavirus vs normal annual risk #Quentin Fottrell, MarketWatch - deaths of despair #Howell - FAR MORE Americans will die from the recession than corona virus #Questions, Successes, Failures #Questions #Why are European-decended countries particularly hard hit? #Was the virus made in a lab in China? 
#Apparent successes of the medical, scientific] experts? #Apparent failures of essentially ALL [medical, scientific] experts, and the mass media? #The population in general #Howells blog posts to MarketWatch etc #Howell comments : Kyle Beattie's Bayesian analysis of covax - ~30% increases in [case, death]s #Introduction #21Feb2022 Do covax deaths account for ~50% of the official reports of covid deaths? #Images : covax drives covid [case, death]s, plus it's own adverse effects #Howell : comments on selected [paper, presentation]s #Howell comments : Jessica Rose's analysis of VAERS Data, increase in Deaths Following covax Shots #Howell comments : Kyle Beattie's Bayesian analysis of covax - ~30% increases in [case, death]s #Howell comments : Pardekooper's videos are handy to get started with database usage. #Howell comments : Covax 'how might I cover my ass?' #Howell comments : Kyle Beattie's Bayesian analysis of covax - ~30% increases in [case, death]s #21Jan2022 Fentanyl versus covid versus covax, and the infamous downtown eastside of Vancouver #14Jan2022 Re: If I was Fauci - how might I cover my ass? Fun list to watch for #11Jan2022 RE: Jessica Rose - mRNA covaxes; Lamarckian versus Mendellian heredity #10Jan2022 Yet another end of the world - This time, we really are going to die!! 
#09Jan2022 Steve Kirsch - vax negative impacts on covid deaths, OurWorldInData excess deaths #09Jan2022 https://stevekirsch.substack.com/p/new-big-data-study-of-145-countries #09Jan2022 Excess deaths from all causes compared to projection based on previous years #08Jan2022 Covax versus excess deaths #08Jan2022 Re: 220107 Steve - UK excess deaths #07Jan2022 Total deaths all causes less historical 5 year average, less deaths by covid #07Jan2022 Robert Malone covax interview #16Dec2021 Re: Pilots and atmospheric phenomena #05Jan2022 Re: Covid-19 adverse #08Dec2021 Re: Covid vaccine Adverse Effects #25Nov2021 Re: Some of Sacha Dobler's recent stuff 220307 PSI, Igor Chudov: Unvax versus boosted case rates per 100k.png 220307 PSI, Igor Chudov: Unvax versus boosted death rates per 100k.png Moderna vaccine - Howells personal health problems.html #Covid-19 vaccine shots #Symptoms within one week or so after getting the vaccine #Symptoms within two weeks or so after getting the vaccine #Symptoms within a month or so after getting the vaccine #Comparison of [TradingView, Yahoo finance] data #[data, software] cart [description, links] #Table of Contents #Cosmic/Galactic rays at historical high in summer 2019 #14May2020 Howell - COVID-19 : [incomplete, random, scattered] cart [information, analysis, questions] #06Oct2016 Eileen Mckusick - The Sun?s Influence on Consciousness #08Dec2015 Howell - The Greatest of cycles : Human implications? 
#Table of Contents #Introduction #Howell - USA influenza [cases, deaths] alongside [sunspots, Kp index, zero Kp bins] #Question - influenza rates : extremely low ?1978?-2013, surging 2013-2020 #Other questions pertaining to the graph #Definitions and data [problem, limitation]s #Notes concerning [graphs, analysis, background material] #Haunting implications of a possible relation between flu and the Kp index #References: USA influenza [cases, deaths] alongside [sunspots, Kp index, zero Kp bins] #US Center for Disease Control (CDC) - annual flu seasons 2010-2017 #Is the effectiveness of vaccines over-rated? #Old [doubts, questions] #Gems from my recent reading, ~2015-2020 #The overall decline in influenza-attributed mortality cannot be due to vaccines #1917-18 Spanish flu deaths may have been largely due to secondary infections #Amazon Customer - Concerned about vaccines? #Anne Rooney - Dangerous and inaccurate nonsense #Pandemics are a very tiny theme that fits into a 'Universal Wave Series'? #Vaccines: so what should we conclude? #References: Is the effectiveness of vaccines over-rated, or sometimes problematic? 
#Quite apart from the issue of the benefits of vaccines #Influenza pandemics - Tapping, Mathias, and Surkan (TMS) theory #References: Tapping, Mathias, and Surkan (TMS) theory #Rebuttals of the [astronomical, disease] correlation #Table of Contents #Introduction #Some historical comments #Thunderbolts.info - the Electric Universe and Health #Thunderbolts.info 04Oct2016 - The Sun's Influence on Consciousness #Thunderbolts.info 31Mar2020 - Geomagnetic Effects on Earth's Biology, Electricity of Life #Jerry Tennant 16Jun2020 - Voltage and Regeneration, Electricity of Life #Eileen Mckusick - Mastering the Human Biofield with Tuning Forks #Ben Davidson & Suspicious Observers - Space Weather and health #Ben Davidson 23Feb2018 - What To Do With Space Weather Health Information #Adrian D'Amico 28Feb2018 - Space Weather and Human Health #The first quantitative sociology, and the second universal wave series? #Robert Prechter - Socionomics, the first quantitative sociology? #Stephen Puetz - Universal Wave Series #Other health effects of the [sun, astronomy] #Non-health effects of the [sun, astronomy] #Cosmic/Galactic rays at historical high in summer 2019 #USA Center for Disease Control - Leading Causes of Death 2017 #Historical pandemics https://thefederalist.com/2020/04/18/10-deadliest-pandemics-in-history-were-much-worse-than-coronavirus-so-far/ #Introduction #SAM: Structured Atom Model of Edo Kaal #SAFIRE, Aureon.ca #Deactivate: comparison QM vs SAM #Deactivate [U,Pu, etc] -> [Pb, Au], is alchemy back? #??? 
#Nomenclature, acronyms #Definitions: nuclear [material, process, deactivate]s #References #Introduction #SAM: Structured Atom Model of Edo Kaal #video transcript: Edo Kaal 03Jul2021 The Structured Atom Model | EU2017 #Conference webPage & description #video link #full video transcript #Table of Contents for the presentation #video transcript: Gareth Samuel 03Jul2021 The Structured Atom Model | Thunderbolts #00:20 nuclear decay processes #00:33 decay process of U235 #00:46 alpha particle emission, problems with the standard explanations #02:12 SAM basics: nucleus structure, electrostatic, no [neutron, strong force] #03:42 SAM periodic table #04:06 SAM transmutations confirmed in SAFIRE laboratory #04:40 Inner nuclear structure dictates chemical properties #05:11 Formation of elements doesn't require stars #06:06 Other SAM [assumptions, advantages] #Howell: questions about SAM #SAFIRE, Aureon.ca #Nomenclature, acronyms #Definitions: nuclear [material, process, deactivate]s #References #Gods and plants - summary #Sun #Mercury #Venus #Mars #Jupiter #Saturn #Mesopot #Hindu #Egypt #Hebrew #Chinese #Greek #Roman #Japan #Maya #Inca #civiliser #tragedy #knowledge, letters #love #bad guy #war #destroyer #Mesopot #Hindu #Egypt #Hebrew #Chinese #Greek #Roman #Japan #Maya #Inca #Introduction #Basis of concept #Table of cycles: Puetz's Universal Wave Series (UWS) #prime dimensionless ratios (Poirier) #intra-Birkeland current, radial to axis of current (Donald Scott) #string (Fourier series) #drum (Bessel functions) #sphere (?spherical functions?) #Hypothesized causes of the UWS (Puetz, Borchardt, Condie) #Glenn Borchardt #Life in the plane of a Z-pinch #Benoit Mandelbrot fractals: The misbehavior of markets #Fractional Order Calculus (FOC) #References #[1] #[9] #[4] #[3] #[4] #[8] #[1] #[?] 
#Puetz - Universal Wave Series (UWS) #Description of the Universal Wave Series (UWS) #Overall conjectures (guesses) #System dynamics of the UWS #Harmonics of [,D]UWS #Frequency beats # # # # # # # # # # #Summary #What does my TradingView PineScript code do? #So, what does the user have to do to [get, adjust] output? #What CAN the user change? #Initial observations #Initial questions #PineScript comments #PuetzUWS comments #Puetz's UWS as a highly-[constrained, specific] Fourier wave series-like analysis? #Was 2007-2009 the modern equivalent to the 1929 crash? Are we going there yet? #References #Inspirations for this webPage / #Quick introduction to Puetz 'Universal Wave Series' temporal nested cycles #Formulae for Puetz UWS #[Fibonacci, Fourier, Elliot, Puetz] series comparisons ../../ProjMini/PuetzUWS//home/bill/web/ProjMini/PuetzUWS/Puetz finance - 88 year cycle and harmonics 13Jan2019.png ../../ProjMini/PuetzUWS//home/bill/web/ProjMini/PuetzUWS/Puetz finance - 1.17, 3.5, 10.5 y cycles across countries & indexes 11Sep2019.png #MindCode programming code ../index.html ../Sun civilisations/Howell - Mega-Life, Mega-Death and the Sun II, towards a quasi-predictive model of the rise and fall of civilisations.pdf Glaciation model 005 Laskar etal model for solar insolation in QNial programming language Holocene 002 Holocene 003 globavg JPL Ephemeris JPL Ephemeris 5 day distances Laskar etal model for solar insolation in QNial programming language #Key [results, comments] #Questions: Grossberg's c-ART, Transformer NNs, and consciousness? #Grossberg: why ART is relevant to consciousness in Transformer NNs #A workable [definition, context, model] for consciousness #non-[Grossberg, TrNN] topics ??? 
mailto:kozmoklimate@gmail.com ./ #Introduction #Links for doing the work #Going further - themes, videos, presentations, courses #Captioned images - remaining problems #Navigation: [menu, link, directory]s #Theme webPage generation by bash script #Notation for [chapter, section, figure, table, index, note]s #incorporate reader questions into theme webPages #Home TrNN&ART Status: #Captioned images [image, link] problems : #no permission clause #caption cut off : #image-caption is way too tall : #Go through image number sequence for missing images #missing 'primary links' #Questions: Grossberg's c-ART, Transformer NNs, and consciousness? #ART assess theories of consciousness #ART augmentation of other research #[definitions, models] of consciousness #For whom the bell tolls #Grossberg part of webSite #Grossbergs ART- Adaptive Resonance Theory #Grossbergs cellular patterns computing #Grossbergs complementary computing #Grossbergs Consciousness: neural [architecture, function, process, percept, learn, etc] #Grossbergs cooperative-competitive #Grossbergs [core, fun, strange] concepts #Grossbergs equations of the mind #Grossbergs laminar #Grossbergs list of [chapter, section]s #Grossbergs list of [figure, table]s #Grossbergs list of index #Grossbergs modal architectures #Grossbergs modules (microcircuits) #Grossbergs overview #Grossbergs paleontology #Grossbergs quoted text #Grossbergs what is consciousness #TrNNs have incipient consciousness #Introduction #Let the machines speak #machine consciousness, the need #non-conscious themes notes #opinions- Blake Lemoine, others #Pribram 1993 quantum fields and consciousness proceedings #references- Grossberg #references- non-Grossberg #reader Howell notes #Taylors consciousness #TrNN controls need consciousness #TrNNS&ART theme #TrNNs augment by cART #TrNNs have incipient consciousness #[use, modfication]s of c-ART #What is consciousness: from historical to Grossberg #why is cART unknown #Introduction #principles, 
architecture, function, process #Principles, Principia #equations of the [brain, mind] #modules & modal architectures ([micro, macro]-circuits) #CogEM Cognitive-Emotional-Motor model #[intra, inter]-cellular process #informational noise suppression #[stable, robust, adaptive] learning #on-center off-surround #top-down bottom-up #cooperative-competitive #complementary computing #laminar computing #computing with cellular patterns #family of ART.....base, linguistic #ART - Adaptive Resonance Theory #LAMINART vison, speech, cognition #[, n]START learning & memory consolidation #ARTMAP associate learned categories across ART networks #[, c]ARTWORD word perception cycle #LIST PARSE [linguistic, spatial, motor] working memory #family of ART ....visual, auditory #[, d, p]ARTSCAN attentional shroud, binocular rivalry #ARTSCENE classification of scenic properties #SMART synchronous matching ART, mismatch triggering #ARTPHONE [gain control, working] memory #ARTSTREAM auditory streaming, SPINET sound spectra #consciousness #What is consciousness? #conscious vs non-conscious #Grossberg: other consciousness theories #random fun themes #art (painting etc) #biology, evolution, paleontology #brain disorders and disease #hippocampus IS a cognitive map! #auditory continuity illusion #see-reach to hear-speak #neurotransmitter #learning and development #Why are there hexagonal grid cell receptive fields? #Strange themes #AI, machine intelligence, etc #Brain is NOT Bayesian? 
#brain rythms & Schuman resonances #Explainable AI #logic vs connectionist #Successful [datamodelling, applications] showing effectiveness of cART etc #[bio, neuro, psycho]logy data #[software, engineering, other] applications #Introduction #principles, architecture, function, process #Principles, Principia #equations of the [brain, mind] #modules & modal architectures ([micro, macro]-circuits) #CogEM Cognitive-Emotional-Motor model #[intra, inter]-cellular process #informational noise suppression #[stable, robust, adaptive] learning #on-center off-surround #top-down bottom-up #cooperative-competitive #complementary computing #laminar computing #computing with cellular patterns #family of ART.....base, linguistic #ART - Adaptive Resonance Theory #LAMINART vison, speech, cognition #[, n]START learning & memory consolidation #ARTMAP associate learned categories across ART networks #[, c]ARTWORD word perception cycle #LIST PARSE [linguistic, spatial, motor] working memory #family of ART ....visual, auditory #[, d, p]ARTSCAN attentional shroud, binocular rivalry #ARTSCENE classification of scenic properties #SMART synchronous matching ART, mismatch triggering #ARTPHONE [gain control, working] memory #ARTSTREAM auditory streaming, SPINET sound spectra #consciousness #What is consciousness? #conscious vs non-conscious #Grossberg: other consciousness theories #random fun themes #art (painting etc) #biology, evolution, paleontology #brain disorders and disease #hippocampus IS a cognitive map! #auditory continuity illusion #see-reach to hear-speak #neurotransmitter #learning and development #Why are there hexagonal grid cell receptive fields? #Strange themes #AI, machine intelligence, etc #Brain is NOT Bayesian? 
#brain rythms & Schuman resonances #Explainable AI #logic vs connectionist #Successful [datamodelling, applications] showing effectiveness of cART etc #[bio, neuro, psycho]logy data #[software, engineering, other] applications #Introduction #Conscious mind, resonant brain: Table of Contents #Conscious mind, resonant brain: sub-section list #Instructions #Preface #Chapter 1 - Overview #Chapter 2 - How a brain makes a mind #Chapter 3 - How a brain sees: Constructing reality #Chapter 4 - How a brain sees: Neural mechanisms #Chapter 5 - Learning to attend, recognize, and predict the world #Chapter 6 - Conscious seeing and invariant recognition #Chapter 7 - How do we see a changing world? #Chapter 8 - How we see and recognize object motion #Chapter 9 - Target tracking, navigation, and decision-making #Chapter 10 - Laminar computing by cerebral cortex #Chapter 11 - How we see the world in depth #Chapter 12 - From seeing and reaching to hearing and speaking #Chapter 13 - From knowing to feeling #Chapter 14 - How prefrontal cortex works #Chapter 15 - Adaptively timed learning #Chapter 16 - Learning maps to navigate space #Chapter 17 - A universal development code #Introduction #Grossberg: why ART is relevant to consciousness in Transformer NNs #Grossberg's [non-linear DEs, CogEm, CLEARS, ART, LAMINART, cART] models #1958-59 ?among the world's first? systems of non-linear differential equations for NNs #?date? CogEm Cognitive-Emotional model #?date? CLEARS [Cognition, Learning, Expectation, Attention, Resonance, Synchrony] #1976 ART Adaptive Resonance Theory #?date? LAMINART Laminar computing ART #?date? cART conciousness ART #The underlying basis in [bio, psycho]logical data #Credibility from non-[bio, psycho]logical applications of Grossberg's ART #Key [results, comments] #Play with the [time, mind]-bending perspective yourself #??? #??? 
#Grossberg OR[anticipated, predicted, unified] the [experimental result, model]s #Grossberg's other comments #Grossberg's comments for some well-known consciousness theories #??? #??? #home #neural #projmini #computer #market #videos #myBlogs #reviews #hosted #professional #personal #home #neural #projmini #projmajr #projmini #computer #market #videos #myBlogs #reviews #hosted #professional #personal #home #neural #projmini #computer #market #videos #myBlogs #reviews #hosted #professional #personal #home #neural #projmini #projmajr #projmini #computer #market #videos #myBlogs #reviews #hosted #professional #personal directory status & updates copyrights
  • Summary comments
  • Play with the [time, mind]-bending perspective yourself
  • Ratio of actual to semi-log detrended data : [advantages, disadvantages]
  • Future potential work
  • Comparison of [TradingView, Yahoo finance] data
  • [data, software] cart [description, links]
  • At present, the full video (540 Mbytes) is too slow (dragging, deep voices, slow video), and it is too cumbersome to jump from one time to another. So until I convert to a different video [codec, container] format (perhaps the H.264 codec & .MKV container?) or find a video viewer better suited to large files, the videos for each scene are posted instead (see the listing below), giving better throughput and ease of going from one scene to another by separate loading. Microsoft Windows (and hopefully Macintosh?) users can view these by downloading the VLC media player. "... VLC is a free and open source cross-platform multimedia player and framework that plays most multimedia files, and various streaming protocols. ..." At present, the full video cannot be moved forward and back within the video, something I will fix when I get the time, as the ability to go back over material and skip sections is particularly important with this video. In the meantime, the separate "Scenes" listed below can be moved back and forward through.
  • The QNial programming language was used to [direct, sequence, conduct, whatever] the video production, together with a LibreOffice Calc spreadsheet that acts as a great front-end for preparing code specific to the video sequencing. These can be found in the Programming code directory listing, and will be handy for anyone interested in the details of how I produced the video. I like to describe the QNial programming language of Queen's University.
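For what it's worth, the codec & container conversion mentioned above can be scripted. The sketch below (Python building an ffmpeg command line; ffmpeg itself and the scene file names are assumptions, not part of this site's toolset) only constructs and prints the command, so it can be inspected before running.

```python
# Hypothetical sketch of the .ogv -> H.264/.MKV conversion mentioned above.
# Assumes the ffmpeg command-line tool is installed; file names are made-up examples.
from pathlib import Path

def mkv_target(src: str) -> str:
    """Derive the .mkv output name from an .ogv source name."""
    return str(Path(src).with_suffix(".mkv"))

def ffmpeg_convert_cmd(src: str, dst: str) -> list[str]:
    """Build an ffmpeg command: H.264 video, AAC audio, in an MKV container."""
    return ["ffmpeg", "-i", src,
            "-c:v", "libx264", "-crf", "20",  # H.264 video, reasonable quality
            "-c:a", "aac",                    # widely supported audio codec
            dst]

# Print rather than run, so the command can be checked first:
cmd = ffmpeg_convert_cmd("scene01.ogv", mkv_target("scene01.ogv"))
print(" ".join(cmd))
```

Running the printed command per scene file (or looping over `Path(".").glob("*.ogv")`) would produce .mkv files that seek much more responsively in most players.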
  • Summary - my commentary as part of Perry Kincaid's webinar
  • Key files - to [view, hear] my commentary
  • References - unfortunately, the list is very incomplete, but does provide some links.
    Perry Kincaid, founder of KEI Networks, organised a PUBLIC webinar "Alberta is high on hydrogen : Introducing hydrogen to Alberta".
  • Slide show - open source presentation file format (.odp). Microsoft PowerPoint will probably complain, but should be able to load it.
  • Voice narrative - in mp3 audio file format.
  • Adobe pdf file format.
  • Voice script - text file with script for the voice commentary. Also included are notes for some of the slides that were not commented on (marked by "(X)"). Click to view most files related to this presentation
    Ben Davidson of Suspicious Observers posted 3 brilliant videos on nearby stellar flaring, as further support for a potential "micro-flare" or other solar disruption to explain the 12,000 year [mythological observations, paleontology, geology, planetary] quasi-periodicity of disruptive events on Earth, which by appearances may be "imminent".
  • 24Dec2019 DISASTER CYCLE | Signs in the Sky Now
  • 26Dec2019 Galactic Sheet Impact | Timing the Arrival
  • 27Dec2019 Nearby Superflares | What Do They Mean
    If we take an "Electric Universe" perspective, in particular Wal Thornhill's,
    Note that Donald Scott ...
    ALL videos are provided in ogv file format, which is of higher quality and easier and more natural to me in a Linux environment. Microsoft Windows (and hopefully Macintosh?) users can view these by downloading the VLC media player. "... VLC is a free and open source cross-platform multimedia player and framework that plays most multimedia files, and various streaming protocols. ..."
  • But can stellar [apparent birth, brighten, dim, apparent death] also provide further potential evidence? Naturally we view stars
  • Toolsets can be browsed via: Past and Future Worlds directory. Perhaps these may be of [interest, help] to others putting together a film from Linux-based free software.
  • Toolsets can be browsed via: Big Data, Deep Learning, and Safety directory. Perhaps these may be of [interest, help] to others putting together a film from Linux-based free software.
  • Toolsets can be browsed via: Icebreaker unchained directory. Perhaps these may be of [interest, help] to others putting together a film from Linux-based free software.
  • Howell - TradingView PineScript [description, problem, debug].html
  • Howell - TradingView PineScript of priceTimeFractals.html
  • 0_PineScript notes.txt - details of software [code, bug, blogSolutions]
  • 0_PineScript errors.txt - [error, solution]s that keep coming back
  • Howell - References related to Puetz [H]UWS.html
  • Kivanc Ozbilgics Turtle Trade PineScript - documention.txt
  • Kivanc Ozbilgics Turtle Trade PineScript, plus 8-year detrended SP500.txt
  • RicardoSantos, Function Polynomial Regression.txt
  • sickojacko maximum [,relative] drawdown calculating functions.txt
  • TradingView auto fib extension.txt
  • Perhaps more importantly are lessons that can be learned from my own failures, and some of the techniques I used. This section also appears in my webPage for users, and also applies to programmers. Users only have to set up the basic chart and symbols in TradingView based on my chart "PuetzUWS [time, price] multiFractal mirrors, SPX 1872-2020". To do so you must be a TradingView subscriber. After that, copy over my PineScript coding, which you can find on my TradingView page - click on "SCRIPTS", and select my script "PuetzUWS [time, price] multifractal of detrended SPX 1871-2020". Further setup details are given below.
    Download symbol data (like [TVC:[USOIL, GOLD], NASDAQ: NDX]) from [TradingView, yahoo finance, etc]. My own data for SPX is in my LibreCalc spreadsheet "SP500 1872-2020 TradingView, 1928-2020 yahoo finance.ods". Users can simply follow standard TradingView guide instructions to install the Pine Script program that super-imposes fractal [time, price] grids on their charts. For details, see "Howell - TradingView PineScript [description, problem, debug].html".
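As a rough illustration of what the PineScript overlay produces, the sketch below generates nested time-grid line positions. The factor-of-3 spacing between successive grid levels follows my reading of Puetz's UWS, and the 1929 anchor and 88-year base period are only example values; this is not the PineScript code itself.

```python
# Illustrative sketch (not the actual PineScript): nested "fractal" time grids.
# Assumption: successive grid levels divide the period by 3, Puetz UWS-style.
import math

def grid_lines(anchor: float, period: float, t0: float, t1: float) -> list[float]:
    """Positions of vertical grid lines for one cycle length within [t0, t1]."""
    k0 = math.ceil((t0 - anchor) / period)
    k1 = math.floor((t1 - anchor) / period)
    return [anchor + k * period for k in range(k0, k1 + 1)]

def nested_grids(anchor, base_period, t0, t1, levels=3):
    """Each finer grid level divides the base period by 3."""
    return {lvl: grid_lines(anchor, base_period / 3 ** lvl, t0, t1)
            for lvl in range(levels)}

# Example: an 88-year base cycle anchored at 1929, charted over 1900-2020.
print(nested_grids(1929.0, 88.0, 1900.0, 2020.0))
```

The same divide-by-3 idea applies on the price axis of a detrended chart, which is what makes the overlay "multi-fractal" in [time, price].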
  • Special comments
  • Regular [1,6] month market views
  • https://tr.tradingview.com/script/pB5nv16J/?utm_source=notification_email&utm_medium=email&utm_campaign=notification_pubscript_update
    https://www.tradingview.com/script/12M8Jqu6-Function-Polynomial-Regression/
  • Key [results, comments]
  • How can the Great Pricing Waves be correlated with a "dead" system?
  • Ratio of actual to semi-log detrended data : [advantages, disadvantages]
  • Future potential work
  • Comparison of [TradingView, Yahoo finance] data
  • [data, software] cart [description, links]
    NOTE : The model here is DEFINITELY NOT suitable for application to [trade, invest]ing! I typically use LibreOffice Calc spreadsheets to [collect, rearrange, simply transform] data. For this project : 1_Fischer 1200-2020.ods
    This is susceptible to serious bias in selecting the [start, end] dates for each segment. See the spreadsheet 1_Fischer 1200-2020.ods.
    The year ~1926 was taken as the [start, end] point for my 1872-2020 detrend (StockMkt Indices 1871-2022 PuetzUWS2011 [start, end] point), so I use it here as well. (23Feb2023 - original text said 1940; perhaps it is still like that?)
    This is easy with the spreadsheet - one column of regression results per segment. I use 10 year intervals per segment, but you only really need the [start, end] dates [-,+] 20 years. The extra 20 years extends the segments at both ends for visual clarity. For an example, see the spreadsheet 1_Fischer 1200-2020.ods, sheet "Fig 0.01 SLRsegments".
    Save the "SLRsegments" to a data file that can be used by GNUplot. Example : Fig 0.01 line segments for GNUplots.dat. Notice that column titles can use free-format text, except for the comma, which separates columns.
  • Save data of 1_Fischer 1200-2020.ods to a data file, for example Fig 0.01 linear regression raw data.dat
  • For each curve, Fischer linear regressions.ndf (23Feb2023 no longer exists?) - a special operator (procedure) is created to select segments
  • text data file : Fig 0.01 Price of consumables in England 1201-1993.dat
  • gnuplot script : Fig 0.01 Price of consumables in England 1201-1993.plt
  • graph output : Fig 0.01 Price of consumables in England 1201-1993.png
  • Fig 0.01 Price of consumables in England 1201-1993 detrended.plt - This covers the medieval to modern era, and is used to collect curves for different data. The restricted time-frame provides a more accurate view of that period.
  • 1850 BC to 2020 AD prices detrended.plt - Obviously this covers a variety of regions and time-frames. What I really need is data going back 7,500 years (~3 cycles of the 2,400 year Hallstatt cycle), corresponding to a 2006 project on the rise and fall of civilisations (_Civilisations and the sun); if I find [time to do it, data] this would be nice. https://www.digitizeit.xyz/ https://www.gimp.org http://www.gnuplot.info/
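The detrending steps above (fit a straight line to the semi-log data for one segment, then take the ratio of actual to trend) can also be sketched outside the spreadsheet. The following is a minimal Python re-implementation for a single segment; it is not my spreadsheet/gnuplot workflow, and the numbers are invented rather than Fischer's data.

```python
# Minimal sketch: semi-log linear regression for one segment, then the
# ratio of actual price to fitted trend. Data values are invented examples.
import math

def semilog_detrend(years, prices):
    """Least-squares fit of log10(price) vs year; return actual/trend ratios."""
    logs = [math.log10(p) for p in prices]
    n = len(years)
    mx = sum(years) / n
    my = sum(logs) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(years, logs)) \
            / sum((x - mx) ** 2 for x in years)
    intercept = my - slope * mx
    trend = [10 ** (intercept + slope * x) for x in years]
    return [p / t for p, t in zip(prices, trend)]

# Perfectly exponential example data, so every ratio should be ~1:
print(semilog_detrend([1201, 1301, 1401, 1501], [10.0, 20.0, 40.0, 80.0]))
```

On real data the ratios swing above and below 1, which is exactly the "Ratio of actual to semi-log detrended data" view discussed above.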
  • directory status & updates copyrights directory status & updates copyrights
  • Key [results, comments]
  • Play with the [time, mind]-bending perspective yourself
  • Ratio of actual to semi-log detrended data : [advantages, disadvantages]
  • Future potential work
  • Comparison of [TradingView, Yahoo finance] data
  • [data, software] cart [description, links]
    Wow! Even knowing that the [eyes, mind] often see patterns that aren't there ... While you probably don't ...
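    The "ratio of actual to semi-log detrended data" itself is a one-line transform: divide each price by the trend price 10^(a + b*year). A minimal sketch, in the same awk/gnuplot-prep style as the rest of this page - the coefficients a and b and the price values below are invented for illustration, not the actual SP500 fit:

```shell
# Sketch: ratio of actual price to the semi-log trend 10^(a + b*year).
# Ratios > 1 are above trend, < 1 below; a, b and sp500.dat are made up.
cat > sp500.dat <<'EOF'
1900 120
1910 200
1920 360
EOF
awk -v a=-55.1957 -v b=0.0301030 '
  { trend = 10^(a + b*$1)                # trend price on the semi-log line
    printf "%d %.3f\n", $1, $2/trend }   # detrended ratio
' sp500.dat > sp500_ratio.dat
cat sp500_ratio.dat
```

    Plotting the ratio column instead of the raw price is what flattens the long-term exponential growth out of the chart.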
  • 7,500 years of history - This is the same challenge that I had with a [lunatic, scattered, naive] model of history by my father and me, where it was necessary to cut ?150? years out of a 7,500 year time series to "kind of make it fit". Steven Yaskall recognized us as the "two fools who rushed in" in his book "Grand Phases on the Sun". We were justifiably proud of that.
  • Smooth sinusoidal curves and regular periodicities - It seems that mathematicians and scientists still [think, apply] models assuming ideal waveforms, even when [their tools, reality] do not. Stephen Puetz ...
    While most results are provided in sections above, links to data [spreadsheets, text files] and software [???, source code] are listed below along with brief comments. A full listing of files (including other SP500 web-pages) can be seen via this Directory
  • TradingView data text file and spreadsheet - I had to upgrade my TradingView subscription to Pro+ to download the data for years prior to 1928, as I couldn't ...
  • Yahoo finance data (23Feb2023 the text file has been lost, but the data is in the linked spreadsheet with TradingView data). I was happy to have another "somewhat independent" data source, even if they are both from the same S&P or other source. This really helps as a check on my data treatment (see the section above "Comparison of [TradingView, Yahoo finance] data").
  • TradingView Pine language - You are probably wondering why I didn't ...
  • gnuplot - I ...
  • gimp (GNU image manipulation program) is what I used for the SP500 time-section transparencies. For more details, see the section above "Play with the [time, mind]-bending perspective yourself".
  • gnuplot.sh is the tiny bash script used to select gnuplot scripts. My other bash scripts can be found here.
  • QNial programming language - Queen's University's Nial (Nested Interactive Array Language) ...
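    Since the contents of gnuplot.sh are not shown here, the following is only a hypothetical reconstruction of that kind of tiny selector script (the plots/ directory and keyword are invented): it matches a keyword against the available .plt scripts and prints the gnuplot command it would run.

```shell
# Hypothetical sketch of a gnuplot-script selector like gnuplot.sh.
# The real script is not listed on this page; names below are invented.
mkdir -p plots
touch "plots/Fig 0.01 Price of consumables in England 1201-1993.plt"
touch "plots/1850 BC to 2020 AD prices detrended.plt"
keyword="consumables"
for f in plots/*.plt; do
  case "$f" in
    *"$keyword"*) echo "gnuplot \"$f\"" ;;   # pipe this output to sh to run
  esac
done > gnuplot_cmd.txt
cat gnuplot_cmd.txt
```

    Quoting "$f" throughout matters because these .plt file names contain spaces.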
  • multpl.com
  • Qiang Zhang 30Jan2021 Price Earning Ratio model - This is similar to, but better than, my own model below. His github has several other interesting investment-related postings, including Black-Scholes derivative pricing. See Howell - SP500 PE Shiller ratios versus 10 year Treasury bond yields, with earnings growth & discount factors.ods
  • time-varying [SP500_growFuture, etc] - there is little chance of growth rates lasting more than a year or two, especially when their magnitude exceeds 20%. Frankly, they are constantly changing year-to-year in a big way. The time series approach mentioned below is a simple basis for anticipating this in a statistical manner as a start. Other approaches get more into predictions based on some concept or another.
  • SP500 index, variable [dividends, internal investment & stock buybacks, earnings] - I won't ...
  • Elliot Wave Theory, notably Robert Prechter (including Socionomics). Among many, many fun topics, the arguments presented about how the Fed FOLLOWS interest rates, only giving the impression of leading, are especially relevant to this web-page.
  • Harry S. Dent Jr - demographics, with astounding successes in the past (at least twice on a decade-or-longer-out basis, perhaps a bit muffled over the last decade).
  • Stephen Puetz - Universal Wave Series - stunning results across a huge swath of subject areas!! Reminds me of the system of 20+ Mayan calendars.
  • Brian Frank of Frank funds - "Slaughterhouse-Five (Hundred), Passive Investing and its Effects on the U.S. Stock Market" - Index fund [distortion, eventual destabilization] of the markets. This was a recent fascinating read for me. (MarketWatch 10Apr2020)
    I will change this every six months or year, just to profile my different projects past and ongoing. See also past home page highlights, Howell ...

    04Jul202 Edo Kaal periodic table of the elements


    Icebreaker Unchained : we should have lost WWII

    I have not yet made a webPage for this project (so many years after it was shelved in Aug2015!), but [documentation, information, unfinished scripts] are provided in the Stalin supported Hitler (video production) directory and Icebreaker directory (which should be combined into one). Two very simple animations took sooooo loooong to produce. They total only ~1 minute for both "A year of stunning victories" map scan-zooms (of Poland, the false war, the lowlands, France, and Dunkirk). Worse, the unfinished part 1 of 6 videos (~1 hour length) wasn't ...

    25May2021 Here are two example graphs of TSLA options that I have been working on. I am far from getting into options trading, I just want to learn more about the market. For more details (but no webPage yet), see QNial software coding for options data processing (also "winURL yahoo finance news download.ndf" in the same directory for yahoo finance news downloads), and several graphs of Tesla options.

    1872-2020 SP500 index, ratio of opening price to semi-log detrended price


    David Fischer - The Great Pricing Waves 1200-1990 AD


    "Mega-Life, Mega-Death, and the invisible hand of the Sun: Towards a quasi-predictive model for the rise and fall of civilisations". Click to see a full-sized image of the chart in your browser. (~3.5 feet squared on my kitchen wall. My printed-out version includes hand-annotated comparisons to the Mayan calendar and other references.)


    12Sep2020: 1872-2020 SP500 index, ratio of opening price to semi-log detrended price


  • help identify program coding, as distinct from, or hybridized with, protein coding within [DNA, mRNA]. While this is mostly an issue for my MindCode project, callerID-SNNs fit nicely into, and may pragmatically help, that context.
  • extra-neuron [Turing, von Neumann]-like computations based on the local neural network [structure, connection]s. This was a focus of my previous MindCode and earlier work (eg. Genetic specification of recurrent neural networks, a draft version of a WCCI2006 conference paper), but isn't ...
  • intra-neuron [Turing, von Neumann]-like computations based on the "focus" neuron ...
    A mid-term objective is to tie caller-IDs to the work of Stephen Grossberg as described in my webPage Overview - Stephen Grossberg. For now, I can ...
    10Nov2023 Maybe I can use a prime number basis for [time, synapse] fractals, as a contrast to Stephen Puetz ...
  • Howell 2006 "Genetic specification of recurrent neural networks" (draft version of my WCCI2006 conference paper)
  • MindCode 2023 description
  • MindCode 2023 program coding (QNial programming language) this is a simple one-line listing of each operator for each file
  • callerID-SNNs Introduction (this webPage)
  • callerID-SNNs program coding (QNial programming language)
  • bash library: file operations used extensively, sometimes hybridized with the QNial programming language
  • Genetic

  • Junk

  • "... Consciousness, at its simplest, is sentience and awareness of internal and external existence.[1] However, its nature has led to millennia of analyses, explanations and debates by philosophers, theologians, linguists, and scientists. Opinions differ about what exactly needs to be studied or even considered consciousness. ..."(Wiki2023)
  • Only a very small number of theories of consciousness are listed on this webPage, compared to the vast number of [paper, book]s on the subject coming out all of the time. "Popular theories" as listed on Wikipedia, are shown, assuming that this will be important for non-experts. But the only ones that really count for this webSite are the "Priority model of consciousness".
    Readers will have completely different [interest, priority]s than I, so they would normally have a different "Priority model of consciousness", and rankings of the consciousness theories. To understand my selections and rankings, see Introduction to this webSite.
  • this webSite - I like the description in Wikipedia (Wiki2023):
    The following additional definitions are also quoted from (Wiki2023) :
    ..." (Wiki2023)
    Grossberg 16Jul2023 - I am currently lacking a coherent overall webPage for Grossberg. The following listing is taken from What is consciousness: from historical to Grossberg, and repeats some of the points in this section above : conscious ART (cART), etc
  • A surprisingly small number of neural architectures can simulate [extensive, diverse] [neuro, psycho]logical data at BOTH the [sub, ]conscious levels, and for [perception, action] of [sight, auditory, touch, language, cognition, emotion, etc]. This is similar to what we see in physics.
  • simple grepStr search results : ..."(Wiki2023)
    Byoung-Kyong Min 2010 "A Thalamic reticular networking model of consciousness"
    (Wiki2023)
    Wikipedia: Models of consciousness, retrieved Apr2023 (Wiki2023)
    "... The Neural correlates of consciousness (NCC) formalism is used as a major step towards explaining consciousness. The NCC are defined to constitute the minimal set of neuronal events and mechanisms sufficient for a specific conscious percept, and consequently sufficient for consciousness. In this formalism, consciousness is viewed as a state-dependent property of some undefined complex, adaptive, and highly interconnected biological system.[3][4][5] ..." (Wiki2023, full article: Wiki2023 - Neural_correlates_of_consciousness, also cited by Grossberg 2021)
    Another idea that has drawn attention for several decades is that consciousness is associated with high-frequency (gamma band) oscillations in brain activity. This idea arose from proposals in the 1980s, by Christof von der Malsburg and Wolf Singer, that gamma oscillations could solve the so-called binding problem, by linking information represented in different parts of the brain into a unified experience.[80] Rodolfo Llinás, for example, proposed that consciousness results from recurrent thalamo-cortical resonance where the specific thalamocortical systems (content) and the non-specific (centromedial thalamus) thalamocortical systems (context) interact in the gamma band frequency via synchronous oscillations.[81] ..." (Wiki2023 - Consciousness#Neural_correlates)
    Howell 19Jul2023 Note that Grossberg ...
    "... Integrated Information Theory (IIT) offers an explanation for the nature and source of consciousness. Initially proposed by Giulio Tononi in 2004, it claims that consciousness is identical to a certain kind of information, the realization of which requires physical, not merely functional, integration, and which can be measured mathematically according to the phi metric. ..." (UTM - Integrated information theory)
    "... Integrated information theory (IIT) attempts to provide a framework capable of explaining why some physical systems (such as human brains) are conscious,[1] why they feel the particular way they do in particular states (e.g. why our visual field appears extended when we gaze out at the night sky),[2] and what it would take for other physical systems to be conscious (Are other animals conscious? Might the whole Universe be?).[3] ... In IIT, a system ..."
    Wikipedia lists numerous criticisms of IIT, but I have not yet quoted from that, other than to mention the authors : Wikipedia: Models of consciousness
    "... Sociology of human consciousness uses the theories and methodology of sociology to explain human consciousness. The theory and its models emphasize the importance of language, collective representations, self-conceptions, and self-reflectivity. It argues that the shape and feel of human consciousness is heavily social. ..." (Wiki2023, full webPage Wiki2023)
    "... Daniel Dennett proposed a physicalist, information processing based multiple drafts model of consciousness described more fully in his 1991 book, Consciousness Explained. ..." (Wiki2023, full webPage Wiki2023)
    "... Functionalism is a view in the theory of the mind. It states that mental states (beliefs, desires, being in pain, etc.) are constituted solely by their functional role – that is, they have causal relations to other mental states, numerous sensory inputs, and behavioral outputs. ..." (Wiki2023, full webPage Wiki2023)
    "... Electromagnetic theories of consciousness propose that consciousness can be understood as an electromagnetic phenomenon that occurs when a brain produces an electromagnetic field with specific characteristics.[7][8] Some electromagnetic theories are also quantum mind theories of consciousness.[9] ..." (Wiki2023)
    "... "No serious researcher I know believes in an electromagnetic theory of consciousness,"[16] Bernard Baars wrote in an e-mail.[better source needed] Baars is a neurobiologist and co-editor of Consciousness and Cognition, another scientific journal in the field. "It ..."
    Stuart Hameroff separately worked in cancer research and anesthesia, which gave him an interest in brain processes. Hameroff read Penrose ...
    rationalwiki.org presents a hard-nosed critique of various "quantum consciousness" theories, from which the following quote is taken :
  • "... Large Language Models (LLMs) have been transformative. They are pre-trained foundational models that are self-supervised and can be adapted with fine-tuning to a wide range of natural language tasks, each of which previously would have required a separate network model. This is one step closer to the extraordinary versatility of human language. GPT-3 and more recently LaMDA can carry on dialogs with humans on many topics after minimal priming with a few examples. However, there has been a wide range of reactions and debate on whether these LLMs understand what they are saying or exhibit signs of intelligence. This high variance is exhibited in three interviews with LLMs reaching wildly different conclusions. A new possibility was uncovered that could explain this divergence. What appears to be intelligence in LLMs may in fact be a mirror that reflects the intelligence of the interviewer, a remarkable twist that could be considered a Reverse Turing Test. If so, then by studying interviews we may be learning more about the intelligence and beliefs of the interviewer than the intelligence of the LLMs. As LLMs become more capable they may transform the way we interact with machines and how they interact with each other. Increasingly, LLMs are being coupled with sensorimotor devices. LLMs can talk the talk, but can they walk the walk? A road map for achieving artificial general autonomy is outlined with seven major improvements inspired by brain systems and how LLMs could in turn be used to uncover new insights into brain function. ..." (Sejnowski 2022)
    Sejnowski ...
  • [definitions, models] of consciousness.html -
  • What is consciousness: from historical to Grossberg -
  • data from [neuroscience, psychology] : quick list, more details
  • success in [definitions, models] of [consciousness, sentience]. However, for reasons given on that webpage, only Stephen Grossberg ...
    A few models of consciousness are summarized on my webPage A quick comparison of Consciousness Theories. Only a few concepts are listed, almost randomly selected except for [Grossberg, Taylor]. Stephen Grossberg may have the ONLY definition of consciousness that is directly tied to quantitative models for lower-level [neuron, general neurology, psychology] data. Foundational models, similar in nature to the small number of general theories in physics that describe a vast range of phenomena, were derived over a period of ?4-5? decades BEFORE they were found to apply to consciousness. That paralleled their use in very widespread ...
  • John Taylor
  • references - Grossberg and ...
  • see Grossberg 2021: the biological need for machine consciousness
    Howell 30Dec2011, page 39 "Part VI - Far beyond current toolsets"
    ..."
    (Blake Lemoine, 2022)
  • 11Jun2022 Is LaMDA Sentient? — an Interview

    22Jun2022 We’re All Different and That’s Okay

    11Jun2022 What is LaMDA and What Does it Want?

    14Aug2022 What is sentience and why does it matter?

    More detail following from Sejnowski
  • Historical thinking about consciousness.
  • Historical thinking about quantum [neurophysiology, consciousness]
  • WRONG!! It may help the reader to re-visit comments about the historical thinking about consciousness, which is not limited to quantum consciousness. This complements items below. Early era of [General Relativity, Quantum Mechanics]: I would be greatly surprised if there wasn't ...
    Pribram 1993 quantum fields and consciousness proceedings provides references back to 1960, and Jibu, Yasue comment that :
  • Howell's questions about 1993 conference proceedings
  • from the section
  • As per the second question from the section
  • As per the first question from the section
  • use a bash script, for example, to automatically play through a sequence of selected segments. Viewers may list their own comments in files (one or more files from different people, for example), to include in the Files listing [chapter, section, figure, table, selected Grossberg quotes, my comments]s. These files of lists are my basis for providing much more detailed information. While this is FAR LESS HELPFUL than the text of the book or its index alone, it can complement the book index, and it has the advantages that :
  • text extractions of simple searches or "themes" are greatly facilitated, so the reader can download the files, copy the bash scripts (or use another text extraction program), and set up their own "themes".
    Rather than just watch this video, you can follow it by reading the script and following its links, once I write it...
    What is consciousness? I will start with a simple definition concentrated on how our [awareness of [environment, situation, self, others], expectations, feeling about a situation] arise from essentially non-conscious cognitive, emotional, and motor processes, including muscle control. "Awareness", "Expectations", and "Emotions" lead to "Actions". "Actions" include muscle actions, language communications, striving towards a goal, reactions to the current situation, directing [perception, cognition], and other processes. "Learning" in a robust, stable, and flexible manner is an essential part of this, given that the environment forces us to learn and adapt to new situations and to modify our [conscious, sub-conscious] understanding where it is wrong or insufficient. Some other components of consciousness are provided in the remainder of this video, but there are many, many more in the literature. Of interest to philosophers such as David Chalmers are qualia and phenomenal experiences.
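    The theme-extraction idea above can be sketched in a few lines of bash. The file names and contents below are invented stand-ins for the downloadable listing files; any grep pattern works as a "theme":

```shell
# Sketch: extract a "theme" from the listing files with grep.
# listings/figures.txt and its contents are made-up examples.
mkdir -p listings
printf '%s\n' \
  "p038fig01.25 ART Matching Rule stabilizes real time learning" \
  "p600fig16.36 hippocampal place cells as spatial categories" \
  > listings/figures.txt
theme="ART Matching Rule"
grep -h "$theme" listings/*.txt > "theme extraction.txt"   # -h: no file-name prefix
cat "theme extraction.txt"
```

    Readers can swap in their own theme string, or chain several grep calls to intersect themes.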
  • image p038fig01.25 The ART Matching Rule stabilizes real time learning using a [top-down, modulatory on-center, off-surround] network. Object attention is realized by such a network. See text for additional discussion.
    || ART Matching Rule [volition, categories, features]. [one, two] against one.
  • image p483fig13.02 The object-value categories in the orbitofrontal cortex require converging specific inputs from the sensory cortex and nonspecific incentive motivational inputs from the amygdala in order to fire. When the orbitofrontal cortex fires, it can deliver top-down ART Matching Rule priming signals to the sensory cortical area by which it was activated, thereby helping to choose the active recognition categories there that have the most emotional support, while suppressing others, leading to attentional blocking of irrelevant cues.
    || Cognitive-Emotional-Motor (CogEM) model. Drive-> amygdala incentive motivational learning-> orbitofrontal cortex- need converging cue and incentive inputs to fire <-> sensory cortex- conditioned reinforcer learning-> amygdala. CS-> sensory cortex. Motivated attention closes the cognitive-emotional feedback loop, focuses on relevant cues, and causes blocking of irrelevant cues.
  • image p484fig13.04 The top-down feedback from the orbitofrontal cortex closes a feedback loop that supports a cognitive-emotional resonance. If this resonance can be sustained long enough, it enables us to have feelings at the same time that we experience the categories that caused them.
    || Cognitive-Emotional resonance. Basis of "core consciousness" and "the feeling of what happens". (Damasio 1999) derives heuristic version of CogEM model from his clinical data. Drive-> amygdala-> prefrontal cortex-> sensory cortex, resonance around the latter 3. How is this resonance maintained long enough to become conscious?
  • First, what is the "..."? The Internet Encyclopedia of Philosophy goes on to say:
  • image p541fig15.02 The neurotrophic Spectrally Timed Adaptive Resonance Theory, or nSTART, model of (Franklin, Grossberg 2017) includes hippocampus to enable adaptively timed learning that can bridge a trace conditioning gap, or other temporal gap between CS and US.
    || Hippocampus can sustain a Cognitive-Emotional resonance that can support "the feeling of what happens" and knowing what event caused that feeling. [CS, US] -> Sensory Cortex (SC) <- motivational attention <-> category learning -> Prefrontal Cortex (PFC). SC conditioned reinforcement learning-> Amygdala (cannot bridge the temporal gap) incentive motivational learning-> PFC. SC adaptively timed learning and BDNF-> Hippocampus (can bridge the temporal gap) BDNF-> PFC. PFC adaptively timed motor learning-> cerebellum.
  • p190 Howell: [neural microcircuits, modal architectures] used in ART -
    bottom-up filters | top-down expectations | purpose
    instar learning | outstar learning | p200fig05.13 Expectations focus attention: [in, out]star often used to learn the adaptive weights. top-down expectations to select, amplify, and synchronize expected patterns of critical features, while suppressing unexpected features
    LGN Lateral Geniculate Nucleus | V1 cortical area | p192fig05.06 focus attention on expected critical features
    EC-III entorhinal stripe cells | CA1 hippocampal place cells | p600fig16.36 entorhinal-hippocampal system has properties of an ART spatial category learning system, with hippocampal place cells as the spatial categories
    auditory orienting arousal | auditory category | p215fig05.28 How a mismatch between bottom-up and top-down patterns can trigger activation of the orienting system A and, with it, a burst of nonspecific arousal to the category level. ART Matching Rule: TD mismatch can suppress a part of F1 STM pattern, F2 is reset if degree of match < vigilance
    auditory stream with/without [silence, noise] gaps | perceived sound continues? | p419fig12.17 The auditory continuity illusion: Backwards in time - How does a future sound let past sound continue through noise? Resonance!
    visual perception, learning and recognition of visual object categories | motion perception, spatial representation and target tracking | p520fig14.02 pART predictive Adaptive Resonance Theory. Many adaptive synapses are bidirectional, thereby supporting synchronous resonant dynamics among multiple cortical regions. The output signals from the basal ganglia that regulate reinforcement learning and gating of multiple cortical areas are not shown. (red - cognitive-emotional dynamics; green - working memory dynamics; black - see [bottom-up, top-down] lists)
    EEG with mismatch, arousal, and STM reset events | expected [P120, N200, P300] event-related potentials (ERPs) | p211fig05.21 Sequences of P120, N200, and P300 event-related potentials (ERPs) occur during oddball learning EEG experiments under conditions that ART predicted should occur during sequences of mismatch, arousal, and STM reset events, respectively.
    Cognitive | Emotional | p541fig15.02 nSTART neurotrophic Spectrally Timed Adaptive Resonance Theory: Hippocampus can sustain a Cognitive-Emotional resonance that can support "the feeling of what happens" and knowing what event caused that feeling. Hippocampus enables adaptively timed learning that can bridge a trace conditioning gap, or other temporal gap between Conditioned Stimulus (CS) and Unconditioned Stimulus (US).

    background colours in the table signify : ...
  • image p038fig01.25 The ART Matching Rule stabilizes real time learning using a [top-down, modulatory on-center, off-surround] network. Object attention is realized by such a network. See text for additional discussion.
    || ART Matching Rule [volition, categories, features]. [one, two] against one. 025
  • image p192fig05.06 Bottom-up and top-down circuits between the LGN and cortical area V1. The top-down circuits obey the ART Matching Rule for matching with bottom-up input patterns and focussing attention on expected critical features.
    || Model V1-LGN circuits, version [1, 2]. retina -> LGN relay cells -> interneurons -> cortex [simple, endstopped] cells -> cortex complex cells 030
  • image p200fig05.13 Instar and outstar learning are often used to learn the adaptive weights in the bottom-up filters and top-down expectations that occur in ART. The ART Matching Rule for object attention enables top-down expectations to select, amplify, and synchronize expected patterns of critical features, while suppressing unexpected features.
    || Expectations focus attention: feature pattern (STM), Bottom-Up adaptive filter (LTM), Category (STM), competition, Top-Down expectation (LTM); ART Matching Rule: STM before top-down matching, STM after top-down matching (attention!) 100
  • image p520fig14.02 Macrocircuit of the main brain regions, and connections between them, that are modelled in the unified predictive Adaptive Resonance Theory (pART) of cognitive-emotional and working memory dynamics. Abbreviations in red denote brain regions used in cognitive-emotional dynamics. Those in green denote brain regions used in working memory dynamics. Black abbreviations denote brain regions that carry out visual perception, learning and recognition of visual object categories, and motion perception, spatial representation and target tracking. Arrows denote non-excitatory synapses. Hemidiscs denote adaptive excitatory synapses. Many adaptive synapses are bidirectional, thereby supporting synchronous resonant dynamics among multiple cortical regions. The output signals from the basal ganglia that regulate reinforcement learning and gating of multiple cortical areas are not shown. Also not shown are output signals from cortical areas to motor responses. V1: striate, or primary, visual cortex; V2 and V4: areas of prestriate visual cortex; MT: Middle Temporal cortex; MST: Medial Superior Temporal area; ITp: posterior InferoTemporal cortex; ITa: anterior InferoTemporal cortex; PPC: Posterior Parietal Cortex; LIP: Lateral InterParietal area; VPA: Ventral PreArcuate gyrus; FEF: Frontal Eye Fields; PHC: ParaHippocampal Cortex; DLPFC: DorsoLateral PreFrontal Cortex; HIPPO: hippocampus; LH: Lateral Hypothalamus; BG: Basal Ganglia; AMYG: AMYGdala; OFC: OrbitoFrontal Cortex; PRC: PeriRhinal Cortex; VPS: Ventral bank of the Principal Sulcus; VLPFC: VentroLateral PreFrontal Cortex. See the text for further details.
    || 240
  • image p240fig05.44 When an algebraic exemplar model is realized using only local computations, it starts looking like an ART prototype model.
    || How does the model know which exemplars are in category A? BU-TD learning. How does a NOVEL test item access category A? 325
  • image p207fig05.19 The ART hypothesis testing and learning cycle. See the text for details about how the attentional system and orienting system interact in order to incorporate learning of novel categories into the corpus of already learned categories without causing catastrophic forgetting.
    || 330
  • image p215fig05.28 How a mismatch between bottom-up and top-down input patterns can trigger activation of the orienting system A and, with it, a burst of nonspecific arousal to the category level.
    || Mismatch -> inhibition -> arousal -> reset. BU input orienting arousal, BU+TD mismatch arousal and reset. ART Matching Rule: TD mismatch can suppress a part of F1 STM pattern, F2 is reset if degree of match < vigilance 335
  • image p226fig05.35 I had shown in 1976 how a competitive learning or self-organizing map model could undergo catastrophic forgetting if the input environment was sufficiently dense and nonstationary, as illustrated by Figure 5.18. Later work with Gail Carpenter showed how, if the ART Matching Rule was shut off, repeating just four input patterns in the correct order could also cause catastrophic forgetting by causing superset recoding, as illustrated in Figure 5.36.
    || Code instability input sequences. D C A; B A; B C = ; |D|<|B|<|C|; where |E| is the number of features in the set E. Any set of input vectors that satisfy the above conditions will lead to unstable coding if they are periodically presented in the order ABCAD and the top-down ART Matching Rule is shut off. 340
  • image p226fig05.36 Column (a) shows catastrophic forgetting when the ART Matching Rule is not operative. It is due to superset recoding. Column (b) shows how category learning quickly stabilizes when the ART Matching Rule is restored.
    || Stable and unstable learning, superset recoding 345
  • image p241fig05.45 The 5-4 category structure is one example of how an ART network learns the same kinds of categories as human learners. See the text for details.
    || 5-4 Category structure. A1-A5: closer to the (1 1 1 1) prototype; B1-B4: closer to the (0 0 0 0) prototype 350
  • image p419fig12.17 The auditory continuity illusion illustrates the ART Matching Rule at the level of auditory streaming. Its "backwards in time" effect of future context on past conscious perception is a signature of resonance.
    || Auditory continuity illusion. input, percept. Backwards in time - How does a future sound let past sound continue through noise? Resonance! - It takes a while to kick in. After it starts, a future tone can maintain it much more quickly. Why does this not happen if there is no noise? - ART Matching Rule! TD harmonic filter is modulatory without BU input. It cannot create something out of nothing. 355
  • image p600fig16.36 The entorhinal-hippocampal system has properties of an ART spatial category learning system, with hippocampal place cells as the spatial categories. See the text for details.
    || Entorhinal-hippocampal interactions as an ART system. Hippocampal place cells as spatial categories. Angular head velocity-> head direction cells-> stripe cells- small scale 1D periodic code (ECIII) SOM-> grid cells- small scale 2D periodic code (ECII) SOM-> place cells- larger scale spatial map (DG/CA3)-> place cells (CA1)-> conjunctive-coding cells (EC V/VI)-> top-down feedback back to stripe cells- small scale 1D periodic code (ECIII). stripe cells- small scale 1D periodic code (ECIII)-> place cells (CA1). 800
  • image p211fig05.21 Sequences of P120, N200, and P300 event-related potentials occur during oddball learning EEG experiments under conditions that ART predicted should occur during sequences of mismatch, arousal, and STM reset events, respectively.
    || ERP support for mismatch-mediated reset: event-related potentials: human scalp potentials. ART predicted correlated sequences of P120-N200-P300 Event Related Potentials during oddball learning. P120 mismatch; N200 arousal/novelty; P300 STM reset. Confirmed in (Banquet and Grossberg 1987) 905
  • image p541fig15.02 The neurotrophic Spectrally Timed Adaptive Resonance Theory, or nSTART, model of (Franklin, Grossberg 2017) includes hippocampus to enable adaptively timed learning that can bridge a trace conditioning gap, or other temporal gap between CS and US.
    || Hippocampus can sustain a Cognitive-Emotional resonance: that can support "the feeling of what happens" and knowing what event caused that feeling. [CS, US] -> Sensory Cortex (SC) <- motivational attention <-> category learning -> Prefrontal Cortex (PFC). SC conditioned reinforcement learning-> Amygdala (cannot bridge the temporal gap) incentive motivational learning-> PFC. SC adaptively timed learning and BDNF-> Hippocampus (can bridge the temporal gap) BDNF-> PFC. PFC adaptively timed motor learning-> cerebellum.
  • image p163fig04.39 A schematic of the LAMINART model that explains key aspects of laminar visual cortical anatomy and dynamics. LGN -> V1 [6, 4, 2/3] -> V2 [6, 4, 2/3]
    || p163c1h0.6 "... The first article about laminar computing ... proposed how the laminar cortical model could process 2D pictures using bottom-up filtering and horizontal bipole grouping interactions (Grossberg, Mingolla, Ross 1997). In 1999, I was able to extend the model to also include top-down circuits for expectation and attention (Grossberg 1999)(right panel). Such a synthesis of laminar bottom-up, horizontal, and top-down circuits is characteristic of the cerebral cortex (left panel). I called it LAMINART because it began to show how properties of Adaptive Resonance Theory, or ART, notably the ART prediction about how top-down expectations and attention work, are realized by identical cortical cells and circuits. You can immediately see from the schematic laminar circuit diagram ... (right panel) that circuits in V2 seem to repeat circuits in V1, albeit with a larger spatial scale, despite the fact that V1 and V2 carry out different functions. How this anatomical similarity can coexist with functional diversity will be clarified in subsequent sections and chapters. It enables different kinds of biological intelligence to communicate seamlessly while carrying out their different psychological functions. ..."
  • Menu
  • Grossbergs list of [chapter, section]s.html - Note that the links on this webPage can be used to individually view all captioned images.
  • directory of captioned images - users can easily view all of the captioned images, especially if they are downloaded onto their computer. Many image viewers have [forward, backward] arrows to go through these sequentially, or right-click to open a link in a window.
  • core bash script for extracting captions from webPage listing, convert them to images, then vertically appending them to the figure.
  • my bash utility to [position, move] windows. This is normally used to start up 6 workspaces on my computer (Linux Mint Debian Edition), each with 5-10 apps in separate windows.
  • Prepared themes with links to the captioned images - there are a huge number of themes from the book to focus on. I have prepared a few as examples.
  • What is consciousness? - video example not ready as of 30Aug2023. I save videos as "ogv/ogg" files, an open standard format. The "VLC media viewer" is the program that I use to view them. I have found that although some of the standard video viewers complain, when pushed, ogv files can be viewed with them.
  • Navigation: [menu, link, directory]s
  • Theme webPage generation by bash script
  • Notation for [chapter, section, figure, table, index, note]s
  • incorporate reader questions into theme webPages
  • A very primitive bash script is used to generate the search results for ALL themes in the Themes webPage. Many readers will already have far better tools for this from the Computational Intelligence area etc.
    Because the theme webPage is automatically generated, and frequently re-generated as I update the list of themes and sources, I do NOT edit the file directly. The output format can be confusing, due to the special formatted [chapter, section] headings, and large tables which will keep the readers guessing whether they are still within the theme they want to peruse (as per the Table of Contents). Perhaps I can upgrade the searches in time to reduce the confusion, and to split themes in a better way.
  • list of [chapter, section]s
  • list of [figure, table]s
  • selected index items - I have NO intention of re-typing the entire index!
  • Grossberg quotes
  • reader Howell notes - this is an example of building your own webPage of [note, comment, thought]s when reading the book, which can then be added to the bash script for searches. These notes are in addition to [figure, table] captions, mostly comprised of text within the image, but also including quotes of text in the book. Rarely, it includes comments by Howell preceded by "Howell".
    The latter are distinct from "readers notes" (see, for example : Grossberg The reader may want to create their own file of comments based on this example, or augment this list with their [own, others More importantly, and as an easy first adaptation of Grossbergs [core, fun, strange] concepts.html thematic listings, you probably want to get rid of Howell
  • downloading the entire webDirectories below to some directory on your filesystem, say {yourDir} : TrNNs_ART , bin (hopefully I
  • adapt the bash script "thematic [search, collect]s.sh" to your own system, and run. This will require re-defining several environmental variables for you, such as :
  • thematic sub-lists appear in the webPage "Grossberg
  • 29Sep2023 Here is a list of various problems with the captioned images and their links on the webPage Grossbergs list of [figure, table]s.html :
    10Aug2023 I haven
  • How difficult would it be to augment "Transformer Neural Networks" (TrNNs) with Grossberg 10Aug2023 This webPage has not yet been worked on. It will touch on one of three questions of this webSite as mentioned in the Introduction :
  • How difficult would it be to augment "Transformer Neural Networks" (TrNNs) with Grossberg 10Aug2023 I haven
  • conscious ART (cART), etc
  • A surprisingly small number of neural architectures can simulate [extensive, diverse] [neuro, psycho]logical data at BOTH the [sub, ]conscious levels, and for [perception, action] of [sight, auditory, touch, language, cognition, emotion, etc]. This is similar to what we see in physics.
  • simple grepStr search results : Grossberg (Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin 2017)
  • Byoung-Kyong Min 2010 "A Thalamic reticular networking model of consciousness"
    "... The model suggests consciousness as a "mental state embodied through TRN-modulated synchronization of thalamocortical networks". In this model the thalamic reticular nucleus (TRN) is suggested as ideally suited for controlling the entire cerebral network, and responsible (via GABAergic networking) for synchronization of neural activity. ..." (Wiki2023)
  • Navigation: [menu, link, directory]s
  • Theme webPage generation by bash script
  • Notation for [chapter, section, figure, table, index, note]s
  • incorporate reader questions into theme webPages
  • Grossberg 2021 p081c2h0.66 sub-section: How does one evolve a computational brain?
    The above discussion illustrates that no single step of theoretical derivation can derive a whole brain. One needs a method for deriving a brain in stages, or cycles, much as evolution has incrementally discovered ever more complex brains over many thousands of years. The following theoretical method has been successfully applied many times since I first used it in 1957. It embodies a kind of conceptual evolutionary process for deriving a brain.

    Because "brain evolution needs to achieve behavioural success", we need to start with data that embody indices of behavioral success. That is why, as illustrated in Figure 2.37 Modelling method and cycle, one starts with Behavioral Data from scores or hundreds of psychological experiments. These data are analyzed as the result of an individual adapting autonomously in real time to a changing world. This is the Art of Modeling. It requires that one be able to infer from static data curves the dynamical processes that control individual behaviors occurring in real time. One of the hardest things that I teach my students to do is "how to think in real time" to be able to carry out this speculative leap.

    Properly carried out, this analysis leads to the discovery of new Design Principles that are embodied by these behavioral processes. The Design Principles highlight the functional meaning of the data, and clarify how individual behaviors occurring in real time give rise to these static data curves.

    These principles are then converted into the simplest Mathematical Model using a method of minimal anatomies, which is a form of Occam's Razor.
  • image p059fig02.05 Bowed serial position curve. This kind of data emphasizes the importance of modelling how our brains give rise to our minds using nonlinear systems of differential equations.
    || Effects of [inter, intra]trial intervals (Hovland 1938). # of errors vs list position. [w (sec), W (sec)] = (2 6) (4 6) (2 126) (4 126). Nonoccurrence of future items reduces the number of errors in response to past items. These data require a real-time theory for their explanation! that is, DIFFERENTIAL equations.
  • image p064fig02.10 The Shunting Model includes upper and lower bounds on neuronal activities. These bounds have the effect of multiplying additive terms by excitatory and inhibitory automatic gain terms that enable such models to preserve their sensitivity to inputs whose size may vary greatly through time, while also approximately normalizing their total activities.
    || STM: Shunting Model (Grossberg, PNAS 1967, 1968). Mass action in membrane equations. Bi/Ci -> xi(t) -> O -> -Fi/Ei. Bounded activations, automatic gain control. d[dt: xi(t)] = -Ai*xi + (Bi - Ci*xi)*sum[j=1 to n: fj(xj(t))*Dji*yji*zji + Ii] - (Ei*xi + Fi)*sum[j=1 to n: gj(xj)*Gji*Yji*Zji + Ji]. Includes the Additive Model.
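The bounded-activation property of the shunting STM equation is easy to check numerically. Below is a minimal sketch, not code from the book, of a single shunting cell driven by lumped excitatory and inhibitory inputs; all constants and parameter names are illustrative assumptions.

```python
# Single-cell sketch of the shunting STM equation in the caption above:
#   dx/dt = -A*x + (B - C*x)*E_in - (E*x + F)*I_in
# Activity x stays inside [-F/E, B/C] no matter how large the inputs grow.
# All constants here are illustrative, not taken from the book.

def simulate_shunting(A=1.0, B=1.0, C=1.0, E=1.0, F=0.5,
                      E_in=50.0, I_in=40.0, dt=0.001, steps=20000, x0=0.0):
    """Euler-integrate dx/dt and return the final (equilibrium) activity."""
    x = x0
    for _ in range(steps):
        x += dt * (-A*x + (B - C*x)*E_in - (E*x + F)*I_in)
    return x

x_eq = simulate_shunting()
# equilibrium is (B*E_in - F*I_in)/(A + C*E_in + E*I_in): a ratio-like
# quantity, which is the automatic gain control the caption mentions
```

Even with inputs of 50 and 40 the activity settles well inside the bounds [-F/E, B/C] = [-0.5, 1], illustrating why such cells never saturate.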
  • image p064fig02.11 Medium-Term Memory (MTM) and Long-Term Memory (LTM) equations complement the Additive and Shunting Models of STM. MTM is typically defined by a chemical transmitter that is released from the synaptic knobs of a neuron (Figure 2.03). Its release or inactivation in an activity-dependent way is also called habituation. LTM defines how associative learning occurs between a pair of neurons whose activities are approximately correlated through time. See the text for details.
    || Medium and Long Term memory.
    MTM habituative transmitter gate: d[dt: yki(t)] = H*(K - yki) - L*fk(xk)*yki
    LTM gated steepest descent learning: d[dt: zki(t)] = Mk*fk(xk)*(hi(xi) - zki)
  • image p068fig02.14 Hodgkin and Huxley developed a model to explain how spikes travel down the squid giant axon.
    || Neurophysiology (single cell): spike potentials in squid giant axon (Hodgkin, Huxley 1952, Nobel Prize). time -> (dendrites -> cell body -> axon).
    C*dp[dt: V] = α*dp^2[dX^2: V] + (V(+) - V)*g(+) + (V(-) - V)*g(-) + (V^p - V)*g^p
    g(+) = G(+)(m,h), g(-) = G(-)(n), g^p = const, [m, h, n] - ionic processes, V - voltage
    Precursor of Shunting network model (Rall 1962). (Howell: see p075fig02.24 Membrane equations of neurophysiology, Shunting equation)
  • image p074fig02.23 The equations for a shunting on-center off-surround network. Shunting terms lead to many beautiful and important properties of these networks, which are found ubiquitously, in one form or another, in all cellular tissues.
    || Shunting on-center off-surround network.
    Mass action: d[dt: xi] = -A*xi + (B - xi)*Ii - xi*sum[k≠i: Ik]
    (B - xi)*Ii turns on unexcited sites; -xi*sum[k≠i: Ik] turns off excited sites.
    At equilibrium:
    0 = d[dt: xi] = -(A + Ii + sum[k≠i: Ik])*xi + B*Ii = -(A + I)*xi + B*Ii
    xi = B*Ii/(A + I) = B*θi*I/(A + I) = θi*B*I/(A + I). No saturation!
    Infinite dynamical range
    Automatic gain control
    Compute ratio scale
    Weber law
    x = sum[k=1 to n: xk] = B*I/(A + I) ≤ B. Conserves total activity.
    NORMALIZATION
    Limited capacity
    Real-time probability
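The equilibrium law xi = B*Ii/(A + I) can be verified directly: relative activities form a ratio scale that survives large intensity changes, while total activity is normalized below B. A minimal numeric check with illustrative A, B, and input values:

```python
# Numeric check of the equilibrium law x_i = B*I_i/(A + I) from the
# shunting on-center off-surround network above. A and B are illustrative.

def equilibrium(inputs, A=1.0, B=1.0):
    I = sum(inputs)                      # total input across all cells
    return [B*Ii/(A + I) for Ii in inputs]

x_dim    = equilibrium([1.0, 2.0, 3.0])
x_bright = equilibrium([10.0, 20.0, 30.0])   # same pattern, 10x the intensity
# activity ratios are preserved across intensities (Weber law / ratio scale),
# and the total activity B*I/(A + I) never exceeds B (normalization)
```

This is the "no saturation" point in the figure: a tenfold brighter input pattern produces nearly the same relative activities, so the network keeps its sensitivity at any overall intensity.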
  • image p075fig02.24 The membrane equations of neurophysiology describe how cell voltages change in response to excitatory, inhibitory, and passive input channels. Each channel is described by a potential difference multiplied by a conductance. With the special choices shown in the lower right-hand corner, this equation defines a feedforward shunting on-center off-surround network.
    || Membrane equations of neurophysiology.
    C*dp[dt: V] = (V(+) - V)*g(+) + (V(-) - V)*g(-) + (V(p) - V)*g(p)
    Shunting equation (not additive)
    V Voltage
    V(+), V(-), V(p) Saturating voltages
    g(+), g(-), g(p) Conductances
    V(+) = B, C = 1; V(-) = V(p) = 0; g(+) = Ii; g(-) = sum[k≠i: Ik];
    lower V: V(+) = V(p) Silent inhibition, upper V: V(+). (Howell: see p068fig02.14 Grossberg
  • image p079fig02.32 Matching amplifies the matched pattern due to automatic gain control. See terms I and J in the equation.
    || Substrate of resonance. Match (in phase) of BU and TD input patterns AMPLIFIES matched pattern due to automatic gain control by shunting terms. J = sum[i: Ji], I = sum[i: Ii], θi = (Ii + Ji)/(I + J)
    xi = (B + C)*(I + J)/(A + I + J)*[θi -C/(B + C)]
    Need top-down expectations to be MODULATORY.
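The match-amplification formula can be checked numerically: when a top-down pattern J is in phase with the bottom-up pattern I, the total input grows and the shunting gain term (I+J)/(A+I+J) amplifies the matched features. A minimal sketch with illustrative parameter values (not the book's code):

```python
# Numeric check of the match-amplification formula in the caption:
#   x_i = (B + C)*(I + J)/(A + I + J) * (theta_i - C/(B + C)),
#   theta_i = (I_i + J_i)/(I + J).
# Parameter values A, B, C and the input patterns are illustrative.

def activities(bu, td, A=1.0, B=1.0, C=0.2):
    I, J = sum(bu), sum(td)
    total = I + J
    return [(B + C)*total/(A + total) * ((Ii + Ji)/total - C/(B + C))
            for Ii, Ji in zip(bu, td)]

bu = [0.8, 0.1, 0.1]
x_bu_only = activities(bu, [0.0, 0.0, 0.0])   # bottom-up input alone
x_matched = activities(bu, bu)                # top-down expectation matches
# the favored feature's activity grows when BU and TD are in phase
```

With these numbers the dominant feature rises from 0.38 (bottom-up alone) to about 0.51 when the matched top-down expectation arrives, which is the resonance-supporting amplification the caption describes.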
  • image p202fig05.17 This figure summarizes the simplest equations whereby the adaptive weights of a winning category learn the input pattern that drove it to win, or more generally a time-average of all the input patterns that succeeded in doing so.
    || Geometry of choice and learning, learning trains the closest LTM vector
  • image p359fig10.07 The two bottom-up pathways from LGN to layer 4 of V1 can together activate layer 4 and contrast-normalize layer 4 responses.
    || Bottom-up contrast normalization (Grossberg 1968, 1973; Sperling, Sondhi 1968; Heeger 1992; Douglas etal 1995; Shapley etal 2004). Together, direct LGN-to-4 path and 6-to-4 on-center off-surround provide contrast normalization if cells obey shunting or membrane equation dynamics.
  • image p501fig13.26 A simple differential equation describes the processes of transmitter accumulation and release that do their best, at a finite rate, to carry out unbiased transduction.
    || Transmitter accumulation and release. Transmitter y cannot be restored at an infinite rate: T = S*y, y ≈ B. Differential equation: d[dt: y] = A*(B - y) - S*y = accumulate - release. Transmitter y tries to recover to ensure unbiased transduction. What if it falls behind? Evolution has exploited the good properties that happen then.
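The "falls behind" behavior is exactly habituation: after a step increase in the signal S, the gated output T = S*y overshoots and then decays as y is depleted toward a lower equilibrium. A minimal sketch with illustrative constants:

```python
# Sketch of the transmitter law dy/dt = A*(B - y) - S*y from the caption.
# Constants A, B and the signal values are illustrative assumptions.

def run(S, y0, A=0.1, B=1.0, dt=0.01, steps=20000):
    y = y0
    for _ in range(steps):
        y += dt * (A*(B - y) - S*y)   # accumulate toward B, deplete at rate S*y
    return y

y_rest  = run(S=0.0, y0=0.0)      # no signal: y accumulates toward B = 1
y_habit = run(S=1.0, y0=y_rest)   # signal on: y depleted toward A*B/(A + S)
# output just after signal onset (S*y_rest) exceeds the habituated
# steady-state output (S*y_habit): an overshoot, then habituation
```

The habituated equilibrium A*B/(A + S) = 1/11 is far below the rested level, and it is this depleted gate that sets the stage for the antagonistic rebounds of the next figure.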
  • image p505fig13.33 An unexpected event can disconfirm ongoing processing by triggering a burst of nonspecific arousal that causes antagonistic rebounds in currently active gated dipoles, whether cognitive or affective.
    || Novelty reset: rebound to arousal onset. 1. Equilibrate to I and J: S1 = f(I+J); y1 = A*B/(A+S1); S2 = f(I); y2 = A*B/(A+S2). 2. Keep phasic input J fixed; increase arousal I to I* = I + ∆I: (a) OFF reaction if T1 < T2; OFF = T2 - T1 = f(I*)*y2 - f(I*+J)*y1 = A*B*{A*(f(I*) - f(I*+J)) + f(I*)*f(I+J) - f(I)*f(I*+J)} / (A + f(I)) / (A + f(I+J)). 3. How to interpret this complicated equation?
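One way to interpret the rebound equation is to evaluate it numerically with a linear signal function f(w) = w (an illustrative assumption). In the standard gated-dipole arrangement the ON channel receives arousal I plus phasic input J, the OFF channel receives I alone; the gates equilibrate, then arousal jumps by ∆I before the gates can re-equilibrate.

```python
# Numeric sketch of the gated-dipole OFF rebound, with an illustrative
# linear signal function f(w) = w and illustrative constants A, B.

def off_rebound(I, J, dI, A=1.0, B=1.0):
    f = lambda w: w
    y_on  = A*B / (A + f(I + J))   # habituated ON-channel transmitter gate
    y_off = A*B / (A + f(I))       # habituated OFF-channel transmitter gate
    I_star = I + dI                # sudden arousal burst; gates still at old values
    return f(I_star)*y_off - f(I_star + J)*y_on   # OFF minus ON output

no_rebound = off_rebound(I=1.0, J=1.0, dI=0.5)   # dI < A: no rebound
yes_rebound = off_rebound(I=1.0, J=1.0, dI=2.0)  # dI > A: antagonistic rebound
```

With linear f the expression simplifies to A*B*J*(∆I - A) / ((A+f(I))*(A+f(I+J))), so the OFF rebound is positive exactly when the arousal increment ∆I exceeds A: a small novelty burst does nothing, a large one resets the dipole.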
  • image p580fig16.05 Macrocircuit of the GridPlaceMap model, which can learn both 2D grid cells and place cells in response to realistic trajectories of navigating rats using a hierarchy of SOMs with identical equations.
    || GridPlaceMap model: rate-based and spiking (Pilly, Grossberg 2012). Pre-wired 1D stripe cells, learns both 2D grid and place cells! Same laws for both; both select most frequent and energetic inputs. Place cells emerge gradually in response to developing grid cells. [place-> grid-> stripe] cells-> path integration-> vestibular signals
  • image p586fig16.16 In the place cell learning model of (Gorchetnikov, Grossberg 2007), three populations of five cells each of entorhinal grid cells (only two are shown) with different spatial periods input to the model
  • image p593fig16.25 Spectral Spacing Model STM, MTM, and LTM equations. The rate spectrum that determines the dorsoventral gradient of multiple grid cell properties is defined by μm.
    || Spectral Spacing Model equations. [STM, MTM, LTM]. μm = rate spectrum.
  • Grossberg 2021 p081c2h0.66 sub-section: How does one evolve a computational brain?
    The above discussion illustrates that no single step of theoretical derivation can derive a whole brain. One needs a method for deriving a brain in stages, or cycles, much as evolution has incrementally discovered ever more complex brains over many thousands of years. The following theoretical method has been successfully applied many times since I first used it in 1957. It embodies a kind of conceptual evolutionary process for deriving a brain.

    Because "brain evolution needs to achieve behavioural success", we need to start with data that embody indices of behavioral success. That is why, as illustrated in Figure 2.37 Modelling method and cycle, one starts with Behavioral Data from scores or hundreds of psychological experiments. These data are analyzed as the result of an individual adapting autonomously in real time to a changing world. This is the Art of Modeling. It requires that one be able to infer from static data curves the dynamical processes that control individual behaviors occurring in real time. One of the hardest things that I teach my students to do is "how to think in real time" to be able to carry out this speculative leap.

    Properly carried out, this analysis leads to the discovery of new Design Principles that are embodied by these behavioral processes. The Design Principles highlight the functional meaning of the data, and clarify how individual behaviors occurring in real time give rise to these static data curves.

    These principles are then converted into the simplest Mathematical Model using a method of minimal anatomies, which is a form of Occam's Razor.
  • image pxvifig00.01 Macrocircuit of the visual system
  • image p087fig03.01 A macrocircuit of key visual processes (in green) and the cortical areas in which they primarily occur (in red), from the retina to the Prefrontal Cortex (PFC), including both the What and Where cortical streams. The [bottom-up, horizontal, and top-down] interactions help each of these processes to overcome computationally complementary processing deficiencies that they would experience without them, and also to read-out top-down expectations that help to stabilize learning while they focus attention on salient objects and positions.
    || Emerging unified theory of visual intelligence. [What, Where] streams. Bottom-up and top-down interactions overcome COMPLEMENTARY processing deficiencies.
  • image p168fig04.44 Macrocircuit of the main boundary and surface formation stages that take place from the lateral geniculate nucleus, or LGN, through cortical areas [V1, V2, V4]. See the text for details.
  • image p228fig05.37 A macrocircuit of the neurotrophic Spectrally Timed ART, or nSTART, model. I developed nSTART with my PhD student Daniel Franklin. It proposes how adaptively timed learning in the hippocampus, bolstered by Brain Derived Neurotrophic Factor, or BDNF, helps to ensure normal memory consolidation.
    || habituative gates, CS, US, Thalamus (sensory cortex, category learning, conditioned reinforcer learning, adaptively timed learning and BDNF), Amygdala (incentive motivation learning), Hippocampus (BDNF), Prefrontal Cortex (attention), Pontine nuclei, Cerebellum (adaptively timed motor learning)
  • image p246fig05.48 Microcircuits of the LAMINART model that I developed with Rajeev Raizada. See the text for details of how they integrate bottom-up adaptive filtering, horizontal bipole grouping, and top-down attentional matching that satisfied the ART Matching Rule.
    ||
  • image p346fig09.16 A macrocircuit of some of the main brain regions that are used to move the eyes. Black boxes denote areas belonging to the saccadic eye movement system (SAC), white boxes the smooth pursuit eye movement system (SPEM), and gray boxes, both systems. The abbreviations for the different brain regions are: LIP - Lateral Intra-Parietal area; FPA - Frontal Pursuit Area; MST - Middle Superior Temporal area; MT - Middle Temporal area; FEF - Frontal Eye Fields; NRPT - Nucleus Reticularis Tegmenti Pontis; DLPN - Dorso-Lateral Pontine Nuclei; SC - Superior Colliculus; CBM - CereBelluM; MVN/rLVN - Medial and Rostro-Lateral Vestibular Nuclei; PPRF - a Peri-Pontine Reticular Formation; TN - Tonic Neurons
    ||
  • image p436fig12.30 The conscious ARTWORD, or cARTWORD, laminar cortical speech model simulates how future context can disambiguate noisy past speech sounds in such a way that the completed percept is consciously heard to proceed from past to future as a feature-item-list resonant wave propagates through time.
    || cARTWORD: Laminar cortical model macrocircuit (Grossberg, Kazerounian 2011). Simulates PHONEMIC RESTORATION: Cognitive Working Memory (processed item sequences) - [Excitatory-> inhibitory-> habituative-> adaptive filter-> adaptive filter-> adaptive filter with depletable synapse]-> Acoustic [item, feature]
  • image p474fig12.70 The kind of model macrocircuit that was used in (Grossberg, Stone 1986) to explain lexical decision task data.
    || inputs-> A1 <-> A2 iconic sensory features <-> A3 item and order in sensory STM <-> A4 list parsing in STM (masking field) <-> A5 semantic network (self-feedback). [A4, A5] <-> V* visual object recognition system. M1-> [outputs, A1]. M1 <-> M2 iconic motor features <-> M3 item and order in motor STM. A2-> M2. A3-> M3.
  • image p481fig13.01 Macrocircuit of the functional stages and anatomical interpretations of the Cognitive-Emotional-Motor, or CogEM, model.
    || Drive-> hypothalamus value categories <-> amygdala incentive motivational learning-> Orbitofrontal cortex- object-value categories <-> sensory cortex- invariant object categories- conditioned reinforcer learning-> amygdala-> hypothalamus.
  • image p520fig14.02 Macrocircuit of the main brain regions, and connections between them, that are modelled in the unified predictive Adaptive Resonance Theory (pART) of cognitive-emotional and working memory dynamics. Abbreviations in red denote brain regions used in cognitive-emotional dynamics. Those in green denote brain regions used in working memory dynamics. Black abbreviations denote brain regions that carry out visual perception, learning and recognition of visual object categories, and motion perception, spatial representation and target tracking. Arrows denote non-excitatory synapses. Hemidiscs denote adaptive excitatory synapses. Many adaptive synapses are bidirectional, thereby supporting synchronous resonant dynamics among multiple cortical regions. The output signals from the basal ganglia that regulate reinforcement learning and gating of multiple cortical areas are not shown. Also not shown are output signals from cortical areas to motor responses. V1: striate, or primary, visual cortex; V2 and V4: areas of prestriate visual cortex; MT: Middle Temporal cortex; MST: Medial Superior Temporal area; ITp: posterior InferoTemporal cortex; ITa: anterior InferoTemporal cortex; PPC: Posterior Parietal Cortex; LIP: Lateral InterParietal area; VPA: Ventral PreArcuate gyrus; FEF: Frontal Eye Fields; PHC: ParaHippocampal Cortex; DLPFC: DorsoLateral PreFrontal Cortex; HIPPO: hippocampus; LH: Lateral Hypothalamus; BG: Basal Ganglia; AMYG: AMYGdala; OFC: OrbitoFrontal Cortex; PRC: PeriRhinal Cortex; VPS: Ventral bank of the Principal Sulcus; VLPFC: VentroLateral PreFrontal Cortex. See the text for further details.
    ||
  • image p532fig14.08 Macrocircuit of the ARTSCENE Search neural model for learning to search for desired objects by using the sequences of already experienced objects and their locations to predict what and where the desired object is. V1 = First visual area or primary visual cortex; V2 = Second visual area; V4 = Fourth visual area; PPC = Posterior Parietal Cortex; ITp = posterior InferoTemporal cortex; ITa = anterior InferoTemporal cortex; MTL = Medial Temporal Lobe; PHC = ParaHippoCampal cortex; PRC = PeriRhinal Cortex; PFC = PreFrontal Cortex; DLPFC = DorsoLateral PreFrontal Cortex; VPFC = Ventral PFC; SC = Superior Colliculus.
    ||
  • image p580fig16.05 Macrocircuit of the GridPlaceMap model, which can learn both 2D grid cells and place cells in response to realistic trajectories of navigating rats using a hierarchy of SOMs with identical equations.
    || GridPlaceMap model: rate-based and spiking (Pilly, Grossberg 2012). Pre-wired 1D stripe cells, learns both 2D grid and place cells! Same laws for both; both select most frequent and energetic inputs. Place cells emerge gradually in response to developing grid cells. [place-> grid-> stripe] cells-> path integration-> vestibular signals
  • image p599fig16.35 Data (a) and simulations (b,c) about anatomically overlapping grid cell modules. (a) shows the anatomical distribution of grid cells belonging to different modules in one animal. DV location (mm) vs postrhinal border. (b) shows the simulated distribution of learned grid cell spacings from two stripe cell scales. frequency (%) vs grid spacing (cm). mu = [1, 0.6]. (c) shows what happens when half the cells respond with one rate and half another rate. (d) shows the same with three rates. (e-g) show spatial maps and autocorrelograms of grid cells that arise from the different rates in (d). [rate map, autocorrelogram] vs [score [1.07, 0.5, 0.67], spacing (cm) [23.58, 41, 63.64]].
    ||
  • image p612fig16.42 Macrocircuit of the main SOVEREIGN subsystems.
    || [reward input, drive input, drive representation (DR), visual working memory and planning system (VWMPS), visual form and motion system (VFMS), motor approach and orienting system (MAOS), visual input (VisIn), motor working memory and planning system (MWMPS), motor approach and orienting system (MAOS), motor plant (MotP), Proprioceptive Input (PropIn), Vestibular Input (VesIn), Environmental feedback (EnvFB). DR [incentive motivational learning-> [VWMPS, MWMPS], -> VFMS, -> MAOS], VWMPS [conditioned reinforcer learning-> DR, MAOS], VFMS [visual object categories-> VWMPS, reactive movement commands-> MAOS], MWMPS [conditioned reinforcer learning-> DR, planned movement commands-> MAOS], MAOS [motor map positions-> MWMPS, motor outflow-> MotP], VisIn-> VFMS, VesIn-> MAOS, EnvFB-> [VisIn, MotP, VesIn].
  • p190 Howell: [neural microcircuits, modal architectures] used in ART -
    bottom-up filters | top-down expectations | purpose
    instar learning | outstar learning | p200fig05.13 Expectations focus attention: [in, out]star often used to learn the adaptive weights. top-down expectations to select, amplify, and synchronize expected patterns of critical features, while suppressing unexpected features
    LGN Lateral Geniculate Nucleus | V1 cortical area | p192fig05.06 focus attention on expected critical features
    EC-III entorhinal stripe cells | CA1 hippocampal place cells | p600fig16.36 entorhinal-hippocampal system has properties of an ART spatial category learning system, with hippocampal place cells as the spatial categories
    auditory orienting arousal | auditory category | p215fig05.28 How a mismatch between bottom-up and top-down patterns can trigger activation of the orienting system A and, with it, a burst of nonspecific arousal to the category level. ART Matching Rule: TD mismatch can suppress a part of F1 STM pattern, F2 is reset if degree of match < vigilance
    auditory stream with/without [silence, noise] gaps | perceived sound continues? | p419fig12.17 The auditory continuity illusion: Backwards in time - How does a future sound let past sound continue through noise? Resonance!
    visual perception, learning and recognition of visual object categories | motion perception, spatial representation and target tracking | p520fig14.02 pART predictive Adaptive Resonance Theory. Many adaptive synapses are bidirectional, thereby supporting synchronous resonant dynamics among multiple cortical regions. The output signals from the basal ganglia that regulate reinforcement learning and gating of multiple cortical areas are not shown.
    red - cognitive-emotional dynamics
    green - working memory dynamics
    black - see [bottom-up, top-down] lists
    EEG with mismatch, arousal, and STM reset events | expected [P120, N200, P300] event-related potentials (ERPs) | p211fig05.21 Sequences of P120, N200, and P300 event-related potentials (ERPs) occur during oddball learning EEG experiments under conditions that ART predicted should occur during sequences of mismatch, arousal, and STM reset events, respectively.
    Cognitive | Emotional | p541fig15.02 nSTART neurotrophic Spectrally Timed Adaptive Resonance Theory: Hippocampus can sustain a Cognitive-Emotional resonance: that can support "the feeling of what happens" and knowing what event caused that feeling. Hippocampus enables adaptively timed learning that can bridge a trace conditioning gap, or other temporal gap between Conditioned Stimulus (CS) and Unconditioned Stimulus (US).

    background colours in the table signify :
  • image p025fig01.16 (left panel) The main processing stages of the Cognitive-Emotional-Motor (CogEM) model have anatomical interpretations in terms of sensory cortex, amygdala, and prefrontal cortex. Chapter 13 will describe in greater detail how CS cues activate invariant object categories in the sensory cortex, value categories in the amygdala, and object-value categories in the prefrontal cortex, notably the orbitofrontal cortex. The amygdala is also modulated by internal drive inputs like hunger and satiety. (right panel) Anatomical data support this circuit, as do many neurophysiological data.
    || drive -> amygdala -> prefrontal cortex <-> sensory cortex -> amygdala. [visual, somatosensory, auditory, gustatory, olfactory] cortex -> [amygdala, Orbital Prefrontal Cortex]. amygdala -> Lateral Prefrontal Cortex
  • image p481fig13.01 Macrocircuit of the functional stages and anatomical interpretations of the Cognitive-Emotional-Motor, or CogEM, model.
    || Drive-> hypothalamus value categories <-> amygdala incentive motivational learning-> Orbitofrontal cortex- object-value categories <-> sensory cortex- invariant object categories- conditioned reinforcer learning-> amygdala-> hypothalamus.
  • image p483fig13.02 The object-value categories in the orbitofrontal cortex require converging specific inputs from the sensory cortex and nonspecific incentive motivational inputs from the amygdala in order to fire. When the orbitofrontal cortex fires, it can deliver top-down ART Matching Rule priming signals to the sensory cortical area by which it was activated, thereby helping to choose the active recognition categories there that have the most emotional support, while suppressing others, leading to attentional blocking of irrelevant cues.
    || Cognitive-Emotional-Motor (CogEM) model. Drive-> amygdala incentive motivational learning-> orbitofrontal cortex- need converging cue and incentive inputs to fire <-> sensory cortex- conditioned reinforcer learning-> amygdala. CS-> sensory cortex. Motivated attention closes the cognitive-emotional feedback loop, focuses on relevant cues, and causes blocking of irrelevant cues.
  • image p483fig13.03 The predicted processing stages of CogEM have been supported by anatomical studies of connections between sensory cortices, amygdala, and orbitofrontal cortex.
    || Adapted from (Barbas 1995). sensory cortices = [visual, somatosensory, auditory, gustatory, olfactory]. sensory cortices-> amygdala-> orbital prefrontal cortex. sensory cortices-> orbital prefrontal cortex. [visual cortex, amygdala]-> lateral prefrontal cortex.
  • image p484fig13.04 The top-down feedback from the orbitofrontal cortex closes a feedback loop that supports a cognitive-emotional resonance. If this resonance can be sustained long enough, it enables us to have feelings at the same time that we experience the categories that caused them.
    || Cognitive-Emotional resonance. Basis of "core consciousness" and "the feeling of what happens". (Damasio 1999) derives heuristic version of CogEM model from his clinical data. Drive-> amygdala-> prefrontal cortex-> sensory cortex, resonance around the latter 3. How is this resonance maintained long enough to become conscious?
  • image p487fig13.11 The three main properties of CogEM that help to explain how attentional blocking occurs.
    || CogEM explanation of attentional blocking. Internal drive input <-> Conditioned reinforcer learning (self-recurrent) <-> Competition for STM <- Motor learning. 1. Sensory representations compete for limited capacity STM. 2. Previously reinforced cues amplify their STM via positive feedback. 3. Other cues lose STM via competition.
  • image p489fig13.13 (top row) If a positive ISI separates onset of a CS and US, then the CS can sample the consequences of the US during the time interval before it is inhibited by it. (bottom row) A CogEM simulation of the inverted-U in conditioning as a function of the ISI between CS and US.
    || Positive ISI and conditioning.
  • image p490fig13.15 The CogEM circuit is an ancient design that is found even in mollusks like Aplysia. See the text for details.
    || Aplysia (Buonomano, Baxter, Byrne, Neural Networks 1990; Grossberg, Behavioral and Brain Sciences 1983). Facilitator neuron ~ drive representation.
  • image p494fig13.19 (left column, top row) Secondary conditioning of both arousal and a specific response are now possible. (bottom row) The CogEM circuit may be naturally extended to include multiple drive representations and inputs. (right column, top row) The incentive motivational pathway is also conditionable in order to enable motivational sets to be learned.
    || Secondary conditioning. Homology: conditionable incentive motivation. Multiple drive representations and inputs.
  • image p514fig13.44 Analog of the CogEM model in Figure 6.1 of (Damasio 1999).
    || (a) map of object X-> map of proto-self at inaugural instant-> [, map of proto-self modified]-> assembly of second-order map. (b) map of object X enhanced-> second-order map imaged.
  • image p523fig14.03 (a) The MOTIVATOR neural model generalizes CogEM by also including the basal ganglia. It can hereby explain and simulate complementary functions of the amygdala and basal ganglia (SNc) during conditioning and learned performance. The basal ganglia generate Now Print signals in response to unexpected rewards. These signals modulate learning of new associations in many brain regions. The amygdala supports motivated attention to trigger actions that are expected to occur in response to conditioned or unconditioned stimuli. Object Categories represent visual or gustatory inputs in anterior inferotemporal (ITA) and rhinal (RHIN) cortices, respectively. Value Categories represent the value of anticipated outcomes on the basis of hunger and satiety inputs, in amygdala (AMYG) and lateral hypothalamus (LH). Object-Value Categories resolve the value of competing perceptual stimuli in medial (MORB) and lateral (ORB) orbitofrontal cortex. The Reward Expectation Filter detects the omission or delivery of rewards using a circuit that spans ventral striatum (VS), ventral pallidum (VP), striosomal delay (SD) cells in the ventral striatum, the pedunculopontine nucleus (PPTN) and midbrain dopaminergic neurons of the substantia nigra pars compacta/ventral tegmental area (SNc/VTA). The circuit that processes CS-related visual information (ITA, AMYG, ORB) operates in parallel with a circuit that processes US-related visual and gustatory information (RHIN, AMYG, MORB). (b) Reciprocal adaptive connections between hypothalamus and amygdala enable amygdala cells to become learned value categories. The bottom region represents hypothalamic cells, which receive converging taste and metabolite inputs whereby they become taste-drive cells. Bottom-up signals from activity patterns across these cells activate competing value categories, or US Value Representations, in the amygdala.
A winning value category learns to respond selectively to specific combinations of taste-drive activity patterns and sends adaptive top-down priming signals back to the taste-drive cells that activated it. CS-activated conditioned reinforcer signals are also associatively linked to value categories. Adaptive connections end in (approximately) hemidiscs. See the text for details.
    ||
  • image p548fig15.16 Homologous recognition learning and reinforcement learning macrocircuits enable adaptively timed conditioning in the reinforcement learning circuit to increase inhibition of the orienting system at times when a mismatch in the recognition system would have reduced inhibition of it.
    || Homolog between ART and CogEM model, complementary systems. [Recognition, Reinforcement] learning vs [Attentional, Orienting] system. Reinforcement: timing, drive representation.
  • image p080fig02.33 An opposite-attracts rule during the development of intracellular connections can lead to a mature network that realizes informational noise suppression.
    || How do noise suppression parameters arise? Symmetry-breaking during morphogenesis? Opposites attract rule.
    Intracellular parameters C/B = 1/(n - 1) Intercellular parameters
    Predicts that:
    • Intracellular excitatory and inhibitory saturation points can control the growth during development of :
    • Intercellular excitatory and inhibitory connections.
  • image p012fig01.08 A sigmoidal signal function is a hybrid signal that combines the best properties of [faster, same, slower]-than linear signals. It can suppress noise and store a partially contrast-enhanced activity pattern. slower-than-linear saturates pattern; approximately linear- preserves pattern and normalizes; faster-than-linear- noise suppression and contrast-enhancement.
    || Sigmoidal signal: a hybrid. (upper) saturates pattern- slower-than-linear; (middle) preserves pattern and normalizes- approximately linear. (lower) noise suppression and contrast enhancement- faster-than-linear.
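A minimal numeric illustration of the sigmoid's three regimes (my own sketch; the quadratic form f(w) = w^2/(eps^2 + w^2) is a standard choice in Grossberg's analyses, but the parameter values here are assumed):

```python
import numpy as np

# The quadratic sigmoid is faster-than-linear for small activities
# (noise suppression / contrast enhancement) and slower-than-linear
# for large activities (saturation toward its ceiling).
def sigmoid(w, eps=1.0):
    return w**2 / (eps**2 + w**2)

w = np.array([0.1, 0.5, 2.0, 10.0])
gain = sigmoid(w) / w                 # f(w)/w: rising gain = faster-than-linear
assert gain[1] > gain[0]              # faster-than-linear at small w
assert 0.9 < sigmoid(10.0) < 1.0      # saturates (slower-than-linear) at large w
```

So a single signal function suppresses small (noisy) activities while storing and partially contrast-enhancing the larger ones, as the caption states.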
  • image p078fig02.30 Choosing the adaptation level to achieve informational noise suppression.
    || Noise suppression. Attenuate Zero Spatial frequency patterns: no information. Ii vs i (flat line), xi vs i (flat line at zero)
    B >> C: Try B = (n - 1)*C or C/(B + C) = 1/n
    Choose a uniform input pattern (no distinctive features): All θi = 1/n
    xi = (B + C)*I/(A + I)*[θi -C/(B + C)] = 0 no matter how intense I is.
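A quick numeric check of the equilibrium equation above (symbols as in the notes; the input values are illustrative):

```python
import numpy as np

# x_i = (B + C) * I / (A + I) * (theta_i - C/(B + C)), with C/(B + C) = 1/n.
n, A, C = 5, 1.0, 1.0
B = (n - 1) * C                       # the "B = (n - 1)*C" choice above

def equilibrium(I_pattern):
    I = I_pattern.sum()
    theta = I_pattern / I
    return (B + C) * I / (A + I) * (theta - C / (B + C))

uniform = np.full(n, 7.0)             # no distinctive features, any intensity
assert np.allclose(equilibrium(uniform), 0.0)      # suppressed, however intense

feature = np.array([1.0, 1.0, 6.0, 1.0, 1.0])      # one distinctive feature
x = equilibrium(feature)
assert x[2] > 0 and np.all(x[[0, 1, 3, 4]] < 0)    # only the feature survives
```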
  • image p078fig02.31 How noise suppression enables matching of bottom-up and top-down input patterns.
    || Noise suppression -> pattern matching. mismatch (out of phase) suppressed, match (in phase) amplifies pattern.
  • image p080fig02.34 How to achieve informational noise suppression in a network with multiple parallel processing channels.
    || Symmetry-breaking: dynamics and anatomy.
    Dynamics:
    • excitatory range is amplified
    • inhibitory range is compressed
    Anatomy:
    • narrow on-center
    • broad off-surround
    Noise suppression: attenuates uniform patterns
    Contour direction: enhances pattern gradients
  • image p081fig02.36 Informational noise suppression in network with Gaussian on-center and off-surround function as contour detectors that are sensitive to ratio-contrast.
    || Noise suppression and contour detection.
    If B*sum[k=1 to n: Cki] <= D*sum[k=1 to n: Eki] then:
    • uniform patterns are suppressed
    • contrasts are selectively enhanced
    • contours are detected
    Ii vs i, xi vs i
    Responses are selective to [REFLECTANCE, SPATIAL SCALE], eg color [feature, surface] contours.
  • image p510fig13.39 Shunting competition and informational noise suppression in affective gated dipoles, plus back-propagating action potentials for teaching signals, enable the net normalized adaptive weights to be learned. They never saturate!
    || Learn net dipole output pattern. Opponent "decision" controls learning. Cf. competitive learning. Learning signal, opponent extinction.
  • image p009fig01.06 Primacy gradient of activity stored in working memory within a recurrent shunting on-center off-surround network. Rehearsal is controlled by a nonspecific rehearsal wave and self-inhibitory feedback of the item that is currently being rehearsed. Green = excitatory, red = inhibitory
    || inputs? -> item and order WM storage -> competitive selection-> rehearsal wave -> outputs
  • image p024fig01.15 A REcurrent Associative Dipole, or READ, circuit is a recurrent shunting on-center off-surround network with habituative transmitter gates. Sensory cues sample it with LTM traces and thereby become conditioned reinforcers.
    ||
  • image p038fig01.25 The ART Matching Rule stabilizes real time learning using a [top-down, modulatory on-center, off-surround] network. Object attention is realized by such a network. See text for additional discussion.
    || ART Matching Rule [volition, categories, features]. [one, two] against one.
  • image p073fig02.22 An on-center off-surround network is capable of computing input ratios.
    || Computing with patterns.
  • How to compute the pattern-sensitive variable: θi = Ii / sum[k=1 to n: Ik]?
    Needs interactions! What type? θi = Ii / sum[k ≠ i: Ik]
    Ii↑ ⇒ θi↑ excitation, Ik↑ ⇒ θk↓, k ≠ i inhibition
    On-center off-surround network.
  • image p074fig02.23 The equations for a shunting on-center off-surround network. Shunting terms lead to many beautiful and important properties of these networks, which are found ubiquitously, in one form or another, in all cellular tissues.
    || Shunting on-center off-surround network.
    Mass action: d[dt: xi] = -A*xi +(B - xi)*Ii -xi*sum[k≠i: Ik]
    Turn on unexcited sites. Turn off excited sites.
    At equilibrium:
    0 = d[dt: xi] = -(A + Ii + sum[k≠i: Ik])*xi + B*Ii = -(A + I)*xi + B*Ii
    xi = B*Ii/(A + I) = B*θi*I/(A + I) = θi*B*I/(A + I). No saturation!
    Infinite dynamical range
    Automatic gain control
    Compute ratio scale
    Weber law
    x = sum[k=1 to n: xk] = B*I/(A + I) ≤ B Conserve total activity
    NORMALIZATION
    Limited capacity
    Real-time probability
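The equilibrium, normalization, and ratio-scale properties listed above can be checked numerically (my own sketch; arbitrary illustrative inputs, symbols as in the notes):

```python
import numpy as np

# Integrate dx_i/dt = -A*x_i + (B - x_i)*I_i - x_i * sum_{k != i} I_k
# and verify the equilibrium x_i = B*I_i/(A + I): ratios preserved, total <= B.
A, B = 1.0, 10.0
I = np.array([2.0, 5.0, 3.0])         # arbitrary inputs (illustrative)
x = np.zeros_like(I)
dt = 0.01
for _ in range(5000):                 # simple Euler integration to equilibrium
    x += dt * (-A * x + (B - x) * I - x * (I.sum() - I))

x_eq = B * I / (A + I.sum())          # predicted equilibrium: normalization
assert np.allclose(x, x_eq, atol=1e-6)
assert x.sum() <= B                   # total activity conserved below B
```

The shunting (multiplicative) terms are what keep every x_i below B no matter how intense the inputs are, which is the "infinite dynamical range / no saturation" point.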
  • image p075fig02.24 The membrane equations of neurophysiology describe how cell voltages change in response to excitatory, inhibitory, and passive input channels. Each channel is described by a potential difference multiplied by a conductance. With the special choices shown in the lower right-hand corner, this equation defines a feedforward shunting on-center off-surround network.
    || Membrane equations of neurophysiology.
    C*dp[dt] = (V(+) - V)*g(+) +(V(-) - V)*g(-) +(V(p) - V)*g(p)
    Shunting equation (not additive)
    V Voltage
    V(+), V(-), V(p) Saturating voltages
    g(+), g(-), g(p) Conductances
    V(+) = B, C = 1; V(-) = V(p) = 0; g(+) = Ii; g(-) = sum[k≠i: Ik];
    lower V: V(-) = V(p) Silent inhibition, upper V: V(+). (Howell: see p068fig02.14 Grossberg)
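Under the special choices listed above (V(+) = B, C = 1, V(-) = V(p) = 0, g(+) = Ii, g(-) = sum of the other inputs, and identifying the passive conductance g(p) with the decay rate A, which the notes leave implicit), the membrane equation is term-by-term the shunting equation. A small sketch makes the identity explicit:

```python
# Membrane form: C dV/dt = (V+ - V)*g+ + (V- - V)*g- + (Vp - V)*gp
def membrane_rhs(V, Ii, Isurround, A=1.0, B=10.0):
    return (B - V) * Ii + (0.0 - V) * Isurround + (0.0 - V) * A

# Shunting form: dx_i/dt = -A*x_i + (B - x_i)*I_i - x_i * sum_{k != i} I_k
def shunting_rhs(x, Ii, Isurround, A=1.0, B=10.0):
    return -A * x + (B - x) * Ii - x * Isurround

for V in [0.0, 2.5, 9.0]:             # identical right-hand sides at any voltage
    assert abs(membrane_rhs(V, 3.0, 4.0) - shunting_rhs(V, 3.0, 4.0)) < 1e-12
```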
  • image p076fig02.25 An on-center off-surround network can respond to increasing on-center excitatory inputs without a loss of sensitivity. Instead, as the off-surround input increases, the cell's region of maximal sensitivity shifts to higher input intensities.
  • image p076fig02.26 The mudpuppy retina exhibits the shift property that occurs in the feedforward shunting on-center off-surround network in Figure 2.25. As a result, its sensitivity also shifts in response to different background off-surrounds, and therefore exhibits no compression (dashed purple lines).
    || Mudpuppy retina neurophysiology.
    I center, J background
    a) Relative figure-to-ground
    b) Weber-Fechner I*(A + J)^(-1)
    c) No hyperpolarization, SHUNT: Silent inhibition
    d) Shift property(Werblin 1970) xi(K,J) vs K = ln(I)
    Adaptation- sensitivity shifts for different backgrounds. NO COMPRESSION.
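The shift property in d) can be verified directly from the Weber-Fechner form in b) (my own sketch, taking the response to be x = B*I/(A + I + J) with center intensity I and background J; parameter values are illustrative):

```python
import numpy as np

# As a function of K = ln(I), raising the background J shifts the response curve
# rightward by ln((A + J2)/(A + J1)) without changing its shape: no compression.
A, B = 1.0, 10.0

def response(K, J):
    I = np.exp(K)
    return B * I / (A + I + J)

K = np.linspace(-3, 6, 200)
J1, J2 = 0.0, 20.0
shift = np.log((A + J2) / (A + J1))
assert np.allclose(response(K + shift, J2), response(K, J1))   # pure shift
```

This is exactly the adaptation behavior Werblin measured: sensitivity relocates to the new operating range instead of the response being compressed.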
  • image p077fig02.27 A schematic of the on-center off-surround network that occurs in the mudpuppy retina, including three main cell types: receptors, horizontal cells, and bipolar cells.
    || Mechanism: cooperative-competitive dynamics.
    On-center off-surround (Kuffler 1953) cat retina
    Subtractive lateral inhibition (Hartline, Ratliff 1956/7+) limulus retina.
    R receptor -> H horizontal -> B bipolar (Werblin, Dowling, etal 1969+) mudpuppy retina.
  • image p080fig02.34 How to achieve informational noise suppression in a network with multiple parallel processing channels.
    || Symmetry-breaking: dynamics and anatomy.
    Dynamics:
    • excitatory range is amplified
    • inhibitory range is compressed
    Anatomy:
    • narrow on-center
    • broad off-surround
    Noise suppression: attenuates uniform patterns
    Contour direction: enhances pattern gradients
  • image p081fig02.35 The equilibrium activities of a shunting network with Gaussian on-center off-surround kernels are sensitive to the ratio-contrasts of the input patterns that they process. The terms in the denominator of the equilibrium activities accomplish this using the shunting on-center and off-surround terms.
    || Ratio-contrast detector. flat versus [Gaussian Cki, flattened Gaussian? Eki]
    d[dt: xi] = -A*xi +(B - xi)*sum[k=1 to n: Ik*Cki] -(xi + D)*sum[k=1 to n: Ik*Eki]
    Cki = C*e^(-μ*(k - i)^2), Eki = E*e^(-ν*(k - i)^2)
    At equilibrium: xi = I*sum[k=1 to n: θk*Fki] / (A + I*sum[k=1 to n: θk*Gki])
    Fki = B*Cki -D*Eki (weighted D.O.G)
    Gki = Cki +Eki (sum of Gaussians)
    • Reflectance processing
    • Contrast normalization
    • Discount illuminant
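A numeric sketch of this equilibrium (my own parameter choices: kernel amplitudes, widths, and the grid size are assumed; D is set at the noise-suppression boundary B*sum(C) = D*sum(E) for a central cell, matching the condition in the next figure):

```python
import numpy as np

n, A, B = 20, 1.0, 10.0
k = np.arange(n)
d2 = (k[:, None] - k[None, :]) ** 2
Cki = 1.0 * np.exp(-1.0 * d2)          # narrow on-center Gaussian (illustrative)
Eki = 0.3 * np.exp(-0.05 * d2)         # broad off-surround Gaussian (illustrative)
D = B * Cki[n // 2].sum() / Eki[n // 2].sum()   # noise-suppression boundary

def equilibrium(Ipat):
    num = (B * Cki - D * Eki) @ Ipat   # I * sum_k theta_k * F_ki,  F = B*C - D*E
    den = A + (Cki + Eki) @ Ipat       # A + I * sum_k theta_k * G_ki,  G = C + E
    return num / den

x_uniform = equilibrium(np.ones(n))
assert np.all(np.abs(x_uniform[7:13]) < 0.05)   # uniform pattern attenuated (interior)

step = np.where(k < n // 2, 1.0, 3.0)           # a luminance step: ratio contrast 3:1
x_step = equilibrium(step)
assert x_step[11] > x_step[15] > 0              # edge response exceeds bright interior
```

The denominator term normalizes contrast, so scaling the whole input (a brighter illuminant) changes the responses much less than changing the input ratios does.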
  • image p081fig02.36 Informational noise suppression in network with Gaussian on-center and off-surround function as contour detectors that are sensitive to ratio-contrast.
    || Noise suppression and contour detection.
    If B*sum[k=1 to n: Cki] <= D*sum[k=1 to n: Eki] then:
    • uniform patterns are suppressed
    • contrasts are selectively enhanced
    • contours are detected
    Ii vs i, xi vs i
    Responses are selective to [REFLECTANCE, SPATIAL SCALE], eg color [feature, surface] contours.
  • image p106fig03.24 In response to the Synthetic Aperture Radar image (upper left corner), a shunting on-center off-surround network "discounts the illuminant" and thereby normalizes cell activities to compute feature contours, without causing saturation (upper right corner). Multiple-scale boundaries form in response to spatially coherent activities in the feature contours (lower left corner) and create the webs, or containers, into which the feature contours fill-in the final surface representations (lower right corner).
    || Do these ideas work on hard problems? SAR!
    input image -> feature contours -> boundary contours -> filled-in surface
    Synthetic Aperture Radar: sees through weather; 5 orders of magnitude of power in radar return; discounting the illuminant:
    • normalizes the image: preserves RELATIVE activities without SATURATION
    • shows individual PIXELS
    boundaries complete between regions where normalized feature contrasts change; filling-in averages brightnesses within boundary compartments
  • image p176fig04.53 The on-center off-surround network within position and across depth helps to explain why brighter Kanizsa squares look closer.
    || inhibition vs. depth. p176c1h0.25 "... to qualitatively understand how this example of proximity-luminance covariance works. It follows directly from the boundary pruning by surface contour feedback signals (Figure 4.51) that achieves complementary consistency and initiates figure-ground perception. ...". p176c1h0.45 "... these inhibitory signals are part of an off-surround network whose strength decreases as the depth difference increases between the surface that generates the signal and its recipient boundaries. ...". p176c1h0.8 "... Within FACADE theory, the perceived depth of a surface is controlled by the boundaries that act as its filling-in generators and barriers (Figure 3.22), since these boundaries select the depth-selective FIDOs within which filling-in can occur, and thereby achieve surface capture. These boundaries, in turn, are themselves strengthened after surface-to-boundary contour feedback eliminates redundant boundaries that cannot support successful filling-in (Figure 4.51). These surface contour feedback signals have precisely the properties that are needed to explain why brighter Kanizsa squares look closer! ..."
  • image p192fig05.05 ON and OFF cells in the LGN respond differently to the sides and ends of lines.
    || [ON, OFF]-center, [OFF, ON]-surround (respectively). OFF-center cells maximum response at line end (interior), ON-center cells maximum response along sides (exterior)
  • image p253fig06.02 After bottom-up surface inputs activate spatial attentional cells, they send top-down topographic excitatory signals back to the surface representations. This recurrent shunting on-center off-surround network contrast enhances larger attentional activities while approximately normalizing the total spatial attentional activity. A surface-shroud resonance hereby forms that selects an attentional shroud, enhances the perceived contrast of the attended surface (light blue region), and maintains spatial attention on it.
    || Surface-shroud resonance. perceptual surfaces -> competition -> spatial attention. (Carrasco, Penpeci-Talgar, and Eckstein 2000, Reynolds and Desimone 2003)
  • image p300fig08.12 A single flash activates a Gaussian receptive field across space whose maximum is chosen by a winner-take-all recurrent on-center off-surround network.
    || Gaussian receptive fields are sufficient! (Grossberg, Rudd 1992). Single flash. Suppose that a single flash causes a narrow peak of activity at the position where it occurs. It generates output signals through a Gaussian filter that produces a Gaussian activity profile at the next processing stage. A recurrent on-center off-surround network chooses the maximum activity and suppresses smaller activities. Winner-take-all
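The two stages in this caption can be sketched in a few lines (my own illustration; the flash position, grid size, and Gaussian width are assumed):

```python
import numpy as np

n = 50
flash = np.zeros(n)
flash[17] = 1.0                                  # narrow peak at the flash position

# Stage 1: Gaussian filter spreads the flash into a Gaussian activity profile.
positions = np.arange(n)
gaussian = np.exp(-((positions[:, None] - positions[None, :]) ** 2) / (2 * 4.0**2))
profile = gaussian @ flash

# Stage 2: winner-take-all recurrent competition keeps only the maximal activity.
winner = np.zeros(n)
winner[np.argmax(profile)] = profile.max()
assert np.argmax(winner) == 17                   # winner sits at the flash position
```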
  • image p315fig08.35 The output signals from the directional grouping network obey the ART Matching Rule. They thereby select consistent motion directional signals while suppressing inconsistent ones, and do not distort the speed estimates that the spared cells code. The aperture problem is hereby solved by the same mechanism that dynamically stabilizes the learning of directional grouping cells.
    || How to select correct direction and preserve speed estimates? Prediction: Feedback from MSTv to MT- obeys ART Matching Rule; Top-down, modulatory on-center, off-surround network (Grossberg 1976, 1980; Carpenter, Grossberg 1987, 1991); Explains how directional grouping network can stably develop and how top-down directional attention can work. (Cavanagh 1992; Goner etal 1986; Sekuler, Ball 1977; Stelmach etal 1994). Directional grouping network (MSTv) <-> Directional long-range filter (MT). Modulatory on-center selects chosen direction and preserves speed. Off-surround inhibits incompatible directions.
  • image p340fig09.07 Log polar remapping from the retina to cortical area V1 and beyond converts expansion, translation, and spiral flows on the retina into parallel flows, with different orientations, on the cortical map.
    || Log polar remapping of optic flow. retina -> cortex. Any combination of expansion and circular motion centered on the fovea maps to cortex as a single direction. Retinal Cartesian coordinates (x,y) map to cortical polar coordinates (r,theta). This makes it easy to compute directional receptive fields in the cortex!
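The retina-to-cortex remapping described here is just (x, y) -> (log r, theta), and the claim about expansion flows can be checked directly (my own sketch; the sample points and expansion factor are illustrative):

```python
import numpy as np

# Under a pure expansion about the fovea, (x, y) -> s*(x, y), every cortical
# point moves by the same vector (log s, 0): a single cortical direction.
def to_cortex(x, y):
    return np.log(np.hypot(x, y)), np.arctan2(y, x)

rng = np.random.default_rng(0)
pts = rng.uniform(-1, 1, size=(10, 2)) + 2.0     # points away from the fovea
s = 1.5                                          # expansion factor (illustrative)
for x, y in pts:
    r0, t0 = to_cortex(x, y)
    r1, t1 = to_cortex(s * x, s * y)
    assert np.isclose(r1 - r0, np.log(s)) and np.isclose(t1 - t0, 0.0)
```

A circular motion about the fovea similarly becomes a constant displacement along the theta axis, which is why cortical directional receptive fields suffice for both flow types.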
  • image p345fig09.15 Double opponent directional receptive fields in MT are capable of detecting the motion of objects relative to each other and their backgrounds.
    || Motion opponency in MT (Born, Tootell 1992). Motion opponent (Grossberg etal), Differential motion (Royden etal), Subtractive motion cells (Neumann etal). ON center directionally selective: [excit, inhibit]ed by motion in [one, opponent] direction. OFF surround directionally selective: [excit, inhibit]ed by motion in [opponent, center] direction.
  • image p354fig10.01 The laminar cortical circuit that realizes how we pay attention to an object sends signals from layer 6 of a higher cortical level to layer 6 of a lower cortical level and then back up to layer 4. This "folded feedback" circuit realizes a top-down, modulatory on-center, off-surround circuit that realizes the ART Matching Rule.
    || Top-down attention and folded feedback. Attentional signals also feed back into 6-to-4 on-center off-surround. 1-to-5-to-6 feedback path: Macaque (Lund, Booth 1975) cat (Gilbert, Wiesel 1979). V2-to-V1 feedback is on-center off-surround and affects layer 6 of V1 the most (Bullier etal 1996; Sandell, Schiller 1982). Attended stimuli enhanced, ignored stimuli suppressed. This circuit supports the predicted ART Matching Rule! [LGN, V[1,2][6->1]]
  • image p359fig10.06 Another, albeit indirect, pathway from LGN exists that can also excite layer 4 of V1. Why are these two pathways not redundant? The answer, ultimately, has to do with how cortex learns, as well as with how it pays attention. See the text for details.
    || Another bottom-up input to layer 4: Why?? Layer 6-to-4 on-center off-surround (Grieve, Sillito 1991, 1995; Ahmed etal 1994, 1997). LGN projects to layers 6 and 4. Layer 6 excites spiny stellates in column above it. Medium range connections onto inhibitory neurons. 6-to-4 path acts as on-center off-surround.
  • image p359fig10.07 The two bottom-up pathways from LGN to layer 4 of V1 can together activate layer 4 and contrast-normalize layer 4 responses.
    || Bottom-up contrast normalization (Grossberg 1968, 1973; Sperling, Sondhi 1968; Heeger 1992; Douglas etal 1995; Shapley etal 2004). Together, direct LGN-to-4 path and 6-to-4 on-center off-surround provide contrast normalization if cells obey shunting or membrane equation dynamics.
  • image p360fig10.08 The bottom-up on-center off-surround from LGN-to-6-to-4 has a modulatory on-center because of its role in realizing the ART Matching Rule and, with it, the ability of the cortex to dynamically stabilize its learned memories.
    || Modulation of priming by 6-to-4 on-center (Stratford etal 1996; Callaway 1998). On-center 6-to-4 excitation is inhibited down to being modulatory (priming, subthreshold). On-center 6-to-4 excitation cannot activate layer 4 on its own. Clarifies need for direct path. Prediction: plays key role in stable grouping, development and learning. ART Matching Rule!
  • image p362fig10.11 Feedback between layer 2/3 to the layer 6-to-4-to-2/3 feedback loop chooses the strongest grouping in cases where there is more than one. If only one grouping exists, then the circuit can function very quickly in a feedforward manner. When multiple groupings exist, the cortex "runs as fast as it can" to select the one with the most evidence to support it using the self-normalizing inhibition in the layer 6-to-4 off-surround.
    || How is the final grouping selected? Folded feedback LGN-> 6-> 4-> 2/3. 1. Layer 2/3 groupings feed back into 6-to-4 on-center off-surround: a) direct layer 2/3 -to-6 path; b) can also go via layer 5 (Blasdel etal 1985; Kisvarday etal 1989). 2. Strongest grouping enhanced by its on-center. 3. Inputs to weaker groupings suppressed by off-surround. 4. Interlaminar feedback creates functional columns. Activities of conflicting groupings are reduced by self-normalizing inhibition, slowing processing; intracortical feedback selects and contrast-enhances the winning grouping, speeding processing.
  • image p364fig10.13 The bottom-up adaptive filter, intracortical grouping circuit, and intercortical top-down attentional circuit all use the same competitive decision circuit between layers 6 and 4, called the attention-preattention interface, with which to select the featural patterns that will be processed.
    || Bottom-up filters and intracortical grouping feedback use the same 6-to-4 decision circuit, LGN-> Vx[6,4,2/3]. competitive decision circuit, modulatory on-center off-surround network. Top-down intercortical attention also uses the same 6-to-4 decision circuit!
  • image p364fig10.14 This figure emphasizes how preattentive intracortical groupings and top-down intercortical attention share the same modulatory on-center, off-surround layer 4-to-6 decision circuit.
    || Explanation: grouping and attention share the same modulatory decision circuit. Layer 6-6-4-2/3 pathway shown; also a layer 6-1-2/3 path. intercortical attention, both act via a modulatory on-center off-surround decision circuit, intracortical feedback from groupings
  • image p367fig10.15 Data (left column) and simulation (right column) of how attention prevents a masking stimulus from inhibiting the response to the on-center of the cell from which the recording was made.
    || Attention protects target from masking stimulus (Reynolds etal 1999; Grossberg, Raizada 2000).
  • image p375fig11.07 The 3D LAMINART model uses both monocular and binocular simple cells to binocularly fuse like image contrasts. The remainder of the model generates 3D boundary and surface representations of multiple kinds of experiments as well as of natural scenes.
    || 3D LAMINART model (Cao, Grossberg 2005). [left, right] eye cart [LGN, V1 blob [4->3B->2/3A], V2 thin stripe [4->2/3A], V4]. V1 blob [V1-4 monocular, V1 interior binocular] simple cells. [complex, simple, inhibitory] cells, on-center off-surround
  • image p376fig11.10 The 3D LAMINART model shows how the disparity filter can be integrated into the circuit that completes 3D boundary representations using bipole grouping cells. It also explains how surface contours can strengthen boundaries that succeed in generating closed filling-in domains.
    || 3D LAMINART model (Cao, Grossberg 2005). [left, right] eye cart [LGN, V1 blob [4->3B->2/3A] surface contour, V2 thin stripe (monocular surface) [4->2/3A], V2 interior [disynaptic inhibitory interneurons, bipole grouping cells, disparity filter, V4 binocular surface]. [complex, simple, inhibitory] cells, on-center off-surround
  • image p423fig12.20 The Spatial Pitch Network, or SPINET, model shows how a log polar spatial representation of the sound frequency spectrum can be derived from auditory signals occurring in time. The spatial representation allows the ARTSTREAM model to compute spatially distinct auditory streams.
    || SPINET model (Spatial Pitch Network) (Cohen, Grossberg, Wyse 1995). 1. input sound 2. Gamma-tone filter bank 3. Short-term average energy spectrum 4. MAP transfer function 5. On-center off-surround and rectification 6. Harmonic weighting 7. Harmonic summation and competition -> PITCH
  • image p448fig12.46 A Masking Field working memory is a multiple-scale self-similar recurrent shunting on-center off-surround network. It can learn list chunks that respond selectively to lists of item chunks of variable length that are stored in an item working memory at the previous processing stage. Chunks that code for longer lists (eg MY vs MYSELF) are larger, and give rise to stronger recurrent inhibitory neurons (red arrows).
    || How to code variable length lists? MASKING FIELDS code list chunks of variable length (Cohen, Grossberg 1986, 1987; Grossberg, Kazerounian 2011, 2016; Grossberg, Meyers 2000; Grossberg, Pearson 2008). Multiple-scale self-similar WM: Masking field, adaptive filter. Variable length coding- Masking fields select list chunks that are sensitive to WM sequences of variable length; Selectivity- Larger cells selectively code longer lists; Asymmetric competition- Larger cells can inhibit smaller cells more than conversely: Magic Number 7! Temporal order- different list chunks respond to the same items in different orders eg LEFT vs FELT;.
  • image p564fig15.35 (a) A pair of recurrent shunting on-center off-surround networks for control of the fore limbs and hind limbs. (b) Varying the GO signal to these networks can trigger changes in movement gaits. See the text for details.
    ||
  • image p567fig15.38 (a) The Gated Pacemaker model for the control of circadian rhythms is a recurrent shunting on-center off-surround network whose excitatory feedback signals are gated by habituative transmitters. Tonic arousal signals energize the pacemaker. Diurnal (left) and nocturnal (right) pacemakers are determined by whether phasic light signals turn the pacemaker on or off. An activity-dependent fatigue signal prevents the pacemaker from becoming overly active for too long. (b) Two simulations of circadian activity cycles during different schedules of light (L) and dark (D). See the text for details.
    || sourceOn-> on-cells (recurrent) <-(-) (-)> off-cells (recurrent) <-sourceOff. on-cells-> activity-> off-cells. off-cells-> fatigue. Diurnal: sourceOn=[light, arousal]; sourceOff=arousal;. Nocturnal: sourceOn=arousal; sourceOff=[arousal, light];.
  • image p586fig16.16 In the place cell learning model of (Gorchetnikov, Grossberg 2007), three populations of five cells each of entorhinal grid cells (only two are shown) with different spatial periods input to the model
  • image p627tbl17.01 Homologs between reaction-diffusion and recurrent shunting cellular network models of development.
    || byRows: (reaction-diffusion, recurrent shunting net) (activator, excitatory activity) (inhibitor, inhibitory activity) (morphogenic source density, inputs) (firing of morphogen gradient, contrast enhancement) (maintenance of morphogen gradient, short-term memory) (power or sigmoidal signal functions, power or sigmoidal signal functions) (on-center off-surround interactions via diffusion, on-center off-surround interactions via signals) (self-stabilizing distributions of morphogens if inhibitors equilibrate rapidly, short-term memory pattern if inhibitors equilibrate rapidly) (periodic pulses if inhibitors equilibrate slowly, periodic pulses if inhibitors equilibrate slowly) (regulation, adaptation).
  • image p016fig01.11 A sufficiently big mismatch between a bottom-up input pattern and a top-down expectation can activate the orienting system, which triggers a burst of nonspecific arousal that can reset the recognition category that read out the expectation. In this way, unexpected events can reset short-term memory and initiate a search for a category that better represents the current situation.
    || [category- top-down (TD) expectation; Bottom-up (BU) input pattern] -> Feature pattern -> BU-TD mismatch -> orienting system -> non-specific arousal -> category.
  • image p038fig01.25 The ART Matching Rule stabilizes real time learning using a [top-down, modulatory on-center, off-surround] network. Object attention is realized by such a network. See text for additional discussion.
    || ART Matching Rule [volition, categories, features]. [one, two] against one.
  • image p052fig02.02 Feature-category resonances enable us to rapidly learn how to recognize objects without experiencing catastrophic forgetting. Attentive matching between bottom-up feature pattern inputs and top-down expectations prevents catastrophic forgetting by focusing object attention upon expected patterns of features, while suppressing outlier features that might otherwise have caused catastrophic forgetting if they were learned also.
    || Adaptive Resonance. Attended feature clusters reactivate bottom-up pathways. Activated categories reactivate their top-down pathways. Categories STM, Feature patterns STM. Feature-Category resonance [synchronize, amplify, prolong]s system response. Resonance triggers learning in bottom-up and top-down adaptive weights: adaptive resonance!
  • image p078fig02.31 How noise suppression enables matching of bottom-up and top-down input patterns.
    || Noise suppression -> pattern matching. mismatch (out of phase) suppressed, match (in phase) amplifies pattern.
  • image p079fig02.32 Matching amplifies the matched pattern due to automatic gain control. See terms I and J in the equation.
    || Substrate of resonance. Match (in phase) of BU and TD input patterns AMPLIFIES matched pattern due to automatic gain control by shunting terms. J = sum[i: Ji], I = sum[i: Ii], θi = (Ii + Ji)/(I + J),
    xi = (B + C)*(I + J)/(A + I + J)*[θi - C/(B + C)].
    Need top-down expectations to be MODULATORY.
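    The shunting match equation above can be checked numerically. This is an illustrative sketch (the parameter values A, B, C and the input patterns are my own assumptions, not the book's): in-phase bottom-up and top-down patterns are amplified relative to out-of-phase patterns.

```python
# Sketch of the steady-state shunting activity from the figure note:
# x_i = (B + C) * (I + J) / (A + I + J) * (theta_i - C/(B + C)),
# where theta_i = (I_i + J_i) / (I + J).  Parameter values are illustrative.

A, B, C = 1.0, 1.0, 0.2   # decay, excitatory ceiling, inhibitory floor (assumed)

def shunting_steady_state(bottom_up, top_down):
    I, J = sum(bottom_up), sum(top_down)
    xs = []
    for Ii, Ji in zip(bottom_up, top_down):
        theta = (Ii + Ji) / (I + J)
        xs.append((B + C) * (I + J) / (A + I + J) * (theta - C / (B + C)))
    return xs

matched    = shunting_steady_state([1, 0, 1], [1, 0, 1])   # in-phase patterns
mismatched = shunting_steady_state([1, 0, 1], [0, 1, 0])   # out-of-phase patterns
# Automatic gain control: the matched peaks exceed the mismatched peaks.
```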
  • image p087fig03.01 A macrocircuit of key visual processes (in green) and the cortical areas in which they primarily occur (in red), from the retina to the Prefrontal Cortex (PFC), including both the What and Where cortical streams. The [bottom-up, horizontal, and top-down] interactions help each of these processes to overcome computationally complementary processing deficiencies that they would experience without them, and also to read out top-down expectations that help to stabilize learning while they focus attention on salient objects and positions.
    || Emerging unified theory of visual intelligence. [What, Where] streams. Bottom-up and top-down interactions overcome COMPLEMENTARY processing deficiencies.
  • image p091fig03.04 A cross-section of the eye, and top-down view of the retina, showing how the blind spot and retinal veins can occlude the registration of light signals at their positions on the retina.
    || Eye: [optic nerve, ciliary body, iris, lens, pupil, cornea, sclera, choroid, retina]. Human retina: [fovea, blind spot, optic nerve]. see also cross-section of retinal layer.
  • image p163fig04.39 A schematic of the LAMINART model that explains key aspects of laminar visual cortical anatomy and dynamics. LGN -> V1 [6, 4, 2/3] -> V2 [6, 4, 2/3]
    || p163c1h0.6 "... The first article about laminar computing ... proposed how the laminar cortical model could process 2D pictures using bottom-up filtering and horizontal bipole grouping interactions (Grossberg, Mingolla, Ross 1997). In 1999, I was able to extend the model to also include top-down circuits for expectation and attention (Grossberg 1999)(right panel). Such a synthesis of laminar bottom-up, horizontal, and top-down circuits is characteristic of the cerebral cortex (left panel). I called it LAMINART because it began to show how properties of Adaptive Resonance Theory, or ART, notably the ART prediction about how top-down expectations and attention work, are realized by identified cortical cells and circuits. You can immediately see from the schematic laminar circuit diagram ... (right panel) that circuits in V2 seem to repeat circuits in V1, albeit with a larger spatial scale, despite the fact that V1 and V2 carry out different functions. How this anatomical similarity can coexist with functional diversity will be clarified in subsequent sections and chapters. It enables different kinds of biological intelligence to communicate seamlessly while carrying out their different psychological functions. ..."
  • image p192fig05.06 Bottom-up and top-down circuits between the LGN and cortical area V1. The top-down circuits obey the ART Matching Rule for matching with bottom-up input patterns and focussing attention on expected critical features.
    || Model V1-LGN circuits, version [1, 2]. retina -> LGN relay cells -> interneurons -> cortex [simple, endstopped] cells -> cortex complex cells
  • image p193fig05.08 The patterns of LGN activation and inhibition on the sides and ends of a line without the top-down feedback (A) and with it (C). The top-down distribution of excitation (+) and inhibition (-) are shown in (B).
    ||
  • image p199fig05.11 Instar learning enables a bottom-up adaptive filter to become selectively tuned to particular feature patterns. Such pattern learning needs adaptive weights that can either increase or decrease to match the featural activations that they filter.
    || Instar learning STM->LTM: need both increases and decreases in strength for the LTM pattern to learn the STM pattern
  • image p200fig05.13 Instar and outstar learning are often used to learn the adaptive weights in the bottom-up filters and top-down expectations that occur in ART. The ART Matching Rule for object attention enables top-down expectations to select, amplify, and synchronize expected patterns of critical features, while suppressing unexpected features.
    || Expectations focus attention: feature pattern (STM), Bottom-Up adaptive filter (LTM), Category (STM), competition, Top-Down expectation (LTM); ART Matching Rule: STM before top-down matching, STM after top-down matching (attention!)
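    As a rough sketch of the instar/outstar pairing described above (the simple tracking law, learning rate, and patterns are illustrative assumptions, not Grossberg's full equations):

```python
# Minimal instar/outstar sketch (illustrative).
# Instar:  bottom-up weights of the winning category track the STM feature pattern;
# Outstar: top-down weights learn to read that pattern back out as an expectation.

def instar(w, x, lr=0.5):
    # dw_i ~ lr * (x_i - w_i): weights can INCREASE or DECREASE toward the pattern,
    # as the p199fig05.11 caption requires
    return [wi + lr * (xi - wi) for wi, xi in zip(w, x)]

def outstar(z, x, lr=0.5):
    # same tracking law, used for the top-down expectation weights
    return [zi + lr * (xi - zi) for zi, xi in zip(z, x)]

w = [0.0, 1.0, 0.0]        # initial bottom-up weights (assumed)
x = [1.0, 0.0, 0.5]        # STM feature pattern (assumed)
for _ in range(20):        # repeated presentations of the same pattern
    w = instar(w, x)
# w has converged toward x: the filter is now tuned to this feature pattern.
```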
  • image p211fig05.20 The PN and N200 event-related potentials are computationally complementary events that are computed within the attentional and orienting systems.
    || PN and N200 are complementary waves. PN [top-down, conditionable, specific] match; N200 [bottom-up, unconditionable, nonspecific] mismatch
  • image p214fig05.24 Learning of a top-down expectation must occur during bottom-up learning in the adaptive filter in order to be able to match the previously associated feature pattern with the one that is currently active.
    || Learning top-down expectations. When the code (green right triangle GRT) for X1 was learned at F2, GRT learned to read-out X1 at F1. [Bottom-Up, Top-Down] learning
  • image p214fig05.25 The sequence of events whereby a novel input pattern can activate a category which, in turn, reads out its learned top-down expectation to be matched against the input pattern. Error correction thus requires the use of a Match Detector that has properties of the Processing Negativity ERP.
    || How is an error corrected. During bottom-up learning, top-down learning must also occur so that the pattern that is read out top-down can be compared with the pattern that is activated by bottom-up inputs. Match detector: Processing Negativity ERP. 1. top-down, 2. conditionable, 3. specific, 4. match
  • image p214fig05.26 When a big enough mismatch occurs, the orienting system is activated and sends a burst of nonspecific arousal to the category level. This Mismatch Detector has properties of the N200 ERP.
    || Mismatch triggers nonspecific arousal. Mismatch at F1 elicits a nonspecific event at F2. Call this event nonspecific arousal. N200 ERP Naatanen etal: 1. bottom-up, 2. unconditionable, 3. nonspecific, 4. mismatch
  • image p215fig05.28 How a mismatch between bottom-up and top-down input patterns can trigger activation of the orienting system A and, with it, a burst of nonspecific arousal to the category level.
    || Mismatch -> inhibition -> arousal -> reset. BU input orienting arousal, BU+TD mismatch arousal and reset. ART Matching Rule: TD mismatch can suppress a part of F1 STM pattern, F2 is reset if degree of match < vigilance
  • image p220fig05.29 Vigilance is a gain parameter on inputs to the orienting system that regulates whether net excitation from bottom-up inputs or inhibition from activated categories will dominate the orienting system. If excitation wins, then a memory search for a better-matching category will occur. If inhibition wins, then the orienting system will remain quiet, thereby enabling resonance and learning to occur.
    || Vigilance control [resonate and learn, reset and search]. ρ is a sensitivity or gain parameter
  • image p221fig05.30 When a predictive disconfirmation occurs, vigilance increases enough to drive a search for a more predictive category. If vigilance increases just enough to exceed the analog match between features that survive top-down matching and the entire bottom-up input pattern, then minimax learning occurs. In this case, the minimum amount of category generalization is given up to correct the predictive error.
    || Match tracking realizes minimax learning principle. Given a predictive error, vigilance increases just enough to trigger search and thus sacrifices the minimum generalization to correct the error ... and enables expert knowledge to be incrementally learned. predictive error -> vigilance increase just enough -> minimax learning
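    The vigilance test and the match-tracking step can be sketched in a few lines (a toy illustration: the min-based match ratio follows the Fuzzy ART convention, and the epsilon increment is an assumption):

```python
# Illustrative vigilance / match-tracking sketch (not the full Fuzzy ARTMAP model).
# Match ratio = |I ^ expectation| / |I|; resonance iff ratio >= vigilance rho.

def match_ratio(inputs, expectation):
    overlap = sum(min(i, e) for i, e in zip(inputs, expectation))
    return overlap / sum(inputs)

def resonates(inputs, expectation, rho):
    return match_ratio(inputs, expectation) >= rho

I = [1, 1, 0, 1]          # bottom-up input pattern (assumed)
E = [1, 0, 0, 1]          # top-down expectation of the active category (assumed)
r = match_ratio(I, E)     # 2/3 for these patterns

assert resonates(I, E, rho=0.5)        # low vigilance: resonate and learn
# Match tracking after a predictive error: raise rho just above the current
# match ratio, forcing reset and a search for a better-matching category.
rho = r + 1e-6
assert not resonates(I, E, rho)        # reset and search
```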
  • image p221fig05.31 A system like Fuzzy ARTMAP can learn to associate learned categories in one ART network with learned categories in a second ART network. Because both bottom-up and top-down interactions occur in both networks, a bottom-up input pattern to the first ART network can learn to generate a top-down output pattern from the second ART network.
    || Fuzzy ARTMAP. Match tracking realizes minimax learning principle: vigilance increases to just above the match ratio of prototype / exemplar, thereby triggering search
  • image p226fig05.35 I had shown in 1976 how a competitive learning or self-organizing map model could undergo catastrophic forgetting if the input environment was sufficiently dense and nonstationary, as illustrated by Figure 5.18. Later work with Gail Carpenter showed how, if the ART Matching Rule was shut off, repeating just four input patterns in the correct order could also cause catastrophic forgetting by causing superset recoding, as illustrated in Figure 5.36.
    || Code instability input sequences. D C A; B A; B C = ; |D|<|B|<|C|; where |E| is the number of features in the set E. Any set of input vectors that satisfy the above conditions will lead to unstable coding if they are periodically presented in the order ABCAD and the top-down ART Matching Rule is shut off.
  • image p246fig05.48 Microcircuits of the LAMINART model that I developed with Rajeev Raizada. See the text for details of how they integrate bottom-up adaptive filtering, horizontal bipole grouping, and top-down attentional matching that satisfied the ART Matching Rule.
    ||
  • image p252fig06.01 A surface-shroud resonance begins to form when the surface representations of objects bid for spatial attention. In addition to these topographic excitatory inputs, there is long-range inhibition of the spatial attention cells that determines which inputs will attract spatial attention.
    || Bottom-up spatial attention competition. [more, less] luminous perceptual surfaces -> competition -> spatial attention
  • image p253fig06.02 After bottom-up surface inputs activate spatial attentional cells, they send top-down topographic excitatory signals back to the surface representations. This recurrent shunting on-center off-surround network contrast enhances larger attentional activities while approximately normalizing the total spatial attentional activity. A surface-shroud resonance hereby forms that selects an attentional shroud, enhances the perceived contrast of the attended surface (light blue region), and maintains spatial attention on it.
    || Surface-shroud resonance. perceptual surfaces -> competition -> spatial attention. (Carrasco, Penpeci-Talgar, and Eckstein 2000, Reynolds and Desimone 2003)
  • image p258fig06.07 A top-down spotlight of attention can also be converted into a shroud. This process begins when the spotlight triggers surface filling-in within a region. Figure 6.8 shows how it is completed.
    || Reconciling spotlights and shrouds: top-down attentional spotlight becomes a shroud. spotlight of attention, surface filling-in
  • image p286fig07.04 Illusory contours persist longer than real contours because real contours have more inducers whose rebound at contour offset can cause faster boundary reset. Illusory contours also take longer to form than real contours, which explains the increasing portion of the curve.
    || Persistence data and simulations (Meyer, Ming 1988; Reynolds 1981). Increasing portion of curve is due to formation time of the illusory contour. Longer persistence is due to fewer bottom-up inducers of an illusory contour that has the same length as a real contour: only illuminance-derived edges generate reset signals. When bottom-up inducers are inhibited by OFF cell rebounds, their offset gradually propagates to the center of the illusory contour.
  • image p286fig07.05 This figure shows the propagation through time of illusory contour offset from the rebounded cells that got direct inputs to the center of the contour.
    || Persistence data and simulations. Illusory contours persist longer than real contours (Meyer, Ming 1988; Reynolds 1981). When bottom-up inducers are inhibited by OFF cell rebounds, their offset gradually propagates to the center of the illusory contour.
  • image p315fig08.35 The output signals from the directional grouping network obey the ART Matching Rule. They thereby select consistent motion directional signals while suppressing inconsistent ones, and do not distort what the spared cells code. The aperture problem is hereby solved by the same mechanism that dynamically stabilizes the learning of directional grouping cells.
    || How to select correct direction and preserve speed estimates? Prediction: Feedback from MSTv to MT- obeys ART Matching Rule; Top-down, modulatory on-center, off-surround network (Grossberg 1976, 1980; Carpenter, Grossberg 1987, 1991); Explains how directional grouping network can stably develop and how top-down directional attention can work. (Cavanagh 1992; Goner etal 1986; Sekuler, Ball 1977; Stelmach etal 1994). Directional grouping network (MSTv) <-> Directional long-range filter (MT). Modulatory on-center selects chosen direction and preserves speed. Off-surround inhibits incompatible directions.
  • image p330fig08.52 Direction fields of the object frame (left column) and of the two dot "parts" (right column) show the correct motion directions after the peak shift top-down expectation acts.
    || Simulation of motion vector decomposition. [Larger scale (nearer depth), Small scale (farther depth)] vs [Down, Up]
  • image p331fig08.54 The simulated part directions of the rotating dot through time after the translational motion of the frame does its work via the top-down peak shift mechanism.
    || Cycloid. Motion directions of a single dot moving slowly along a cycloid curve through time.
  • image p354fig10.01 The laminar cortical circuit that realizes how we pay attention to an object sends signals from layer 6 of a higher cortical level to layer 6 of a lower cortical level and then back up to layer 4. This "folded feedback" circuit realizes a top-down, modulatory on-center, off-surround circuit that realizes the ART Matching Rule.
    || Top-down attention and folded feedback. Attentional signals also feed back into 6-to-4 on-center off-surround. 1-to-5-to-6 feedback path: Macaque (Lund, Booth 1975) cat (Gilbert, Wiesel 1979). V2-to-V1 feedback is on-center off-surround and affects layer 6 of V1 the most (Bullier etal 1996; Sandell, Schiller 1982). Attended stimuli enhanced, ignored stimuli suppressed. This circuit supports the predicted ART Matching Rule! [LGN, V[1,2][6->1]]
  • image p356fig10.03 Laminar computing achieves at least three basic properties of visual processing that have analogs in all biologically intelligent behaviors. These properties may be found in all cortical circuits in specialized form.
    || What does Laminar Computing achieve? 1. Self-stabilizing development and learning; 2. Seamless fusion of a) pre-attentive automatic bottom-up processing, b) attentive task-selective top-down processing; 3. Analog coherence: Solution of Binding Problem for perceptual grouping without loss of analog sensitivity. Even the earliest visual cortical stages carry out active adaptive information processing: [learn, group, attention]ing
  • image p359fig10.05 Activation of V1 is initiated, in part, by direct excitatory signals from the LGN to layer 4 of V1.
    || How are layer 2/3 bipole cells activated? Direct bottom-up activation of layer 4. LGN -> V1 layer 4. Strong bottom-up LGN input to layer 4 (Stratford etal 1996; Chung, Ferster 1998). Many details omitted.
  • image p359fig10.06 Another, albeit indirect, pathway from LGN exists that can also excite layer 4 of V1. Why are these two pathways not redundant? The answer, ultimately, has to do with how cortex learns, as well as with how it pays attention. See the text for details.
    || Another bottom-up input to layer 4: Why?? Layer 6-to-4 on-center off-surround (Grieve, Sillito 1991, 1995; Ahmed etal 1994, 1997). LGN projects to layers 6 and 4. Layer 6 excites spiny stellates in column above it. Medium range connections onto inhibitory neurons. 6-to-4 path acts as on-center off-surround.
  • image p359fig10.07 The two bottom-up pathways from LGN to layer 4 of V1 can together activate layer 4 and contrast-normalize layer 4 responses.
    || Bottom-up contrast normalization (Grossberg 1968, 1973; Sperling, Sondhi 1968; Heeger 1992; Douglas etal 1995; Shapley etal 2004). Together, direct LGN-to-4 path and 6-to-4 on-center off-surround provide contrast normalization if cells obey shunting or membrane equation dynamics.
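    The contrast-normalization claim can be illustrated with the standard shunting steady state (the parameters A, B and the input patterns are assumed for the demo):

```python
# Sketch of shunting contrast normalization.  At steady state,
# 0 = -A*x_i + (B - x_i)*I_i - x_i*sum(I_k, k != i)  gives
# x_i = B * I_i / (A + sum(I)):  ratios are preserved, totals saturate.

A, B = 1.0, 1.0   # decay rate and excitatory ceiling (assumed values)

def normalize(inputs):
    total = sum(inputs)
    return [B * Ii / (A + total) for Ii in inputs]

dim    = normalize([1, 2, 1])      # low-contrast version of a pattern
bright = normalize([10, 20, 10])   # same pattern at 10x intensity
# The relative pattern (activity ratios) is identical at both intensities,
# while the total activity stays bounded by B.
```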
  • image p360fig10.08 The bottom-up on-center off-surround from LGN-to-6-to-4 has a modulatory on-center because of its role in realizing the ART Matching Rule and, with it, the ability of the cortex to dynamically stabilize its learned memories.
    || Modulation of priming by 6-to-4 on-center (Stratford etal 1996; Callaway 1998). On-center 6-to-4 excitation is inhibited down to being modulatory (priming, subthreshold). On-center 6-to-4 excitation cannot activate layer 4 on its own. Clarifies need for direct path. Prediction: plays key role in stable grouping, development and learning. ART Matching Rule!
  • image p364fig10.13 The bottom-up adaptive filter, intracortical grouping circuit, and intercortical top-down attentional circuit all use the same competitive decision circuit between layers 6 and 4, called the attention-preattention interface, with which to select the featural patterns that will be processed.
    || Bottom-up filters and intracortical grouping feedback use the same 6-to-4 decision circuit, LGN-> Vx[6,4,2/3]. competitive decision circuit, modulatory on-center off-surround network. Top-down intercortical attention also uses the same 6-to-4 decision circuit!
  • image p364fig10.14 This figure emphasizes how preattentive intracortical groupings and top-down intercortical attention share the same modulatory on-center, off-surround layer 4-to-6 decision circuit.
    || Explanation: grouping and attention share the same modulatory decision circuit. Layer 6-6-4-2/3 pathway shown; also a layer 6-1-2/3 path. intercortical attention, both act via a modulatory on-center off-surround decision circuit, intracortical feedback from groupings
  • image p420fig12.18 The ARTSTREAM model explains and simulates the auditory continuity illusion as an example of a spectral-pitch resonance. Interactions of ART Matching Rule and asymmetric competition mechanisms in cortical strip maps explain how the tone selects the consistent frequency from the noise in its own stream while separating the rest of the noise into another stream.
    || ARTSTREAM model (Grossberg 1999; Grossberg, Govindarajan, Wyse, Cohen 2004). SPINET. Frequency and pitch strips. Bottom Up (BU) harmonic sieve. Top Down (TD) harmonic ART matching. Exclusive allocation. Learn pitch categories based on early harmonic processing. A stream is a Spectral-Pitch Resonance!
  • image p441fig12.38 The LTM Invariance Principle is realized if the relative sizes of the inputs to the list chunk level stay the same as more items are stored in working memory. This property, in turn, follows from shunting previously stored working memory activities when a new item occurs.
    || LTM Invariance principle. Choose STM activities so that newly stored STM activities may alter the size of old STM activities without recoding their LTM patterns. In particular: New events do not change the relative activities of past event sequences, but may reduce their absolute activities. Why? Bottom-up adaptive filtering uses dot products: T(j) = sum[i=1 to n: x(i)*z(i,j)] = total input to v(j). The relative sizes of inputs to coding nodes v(j) are preserved. x(i) -> w*x(i), 0 < w <= 1, leaves all past ratios T(j)/T(k) unchanged.
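    A quick numeric check of the invariance property stated above (the activity and weight values are illustrative):

```python
# LTM Invariance check: total input T(j) = sum_i x(i) * z(i, j);
# rescaling all x(i) by w in (0, 1] leaves every ratio T(j)/T(k) unchanged.

def total_input(x, z_col):
    return sum(xi * zi for xi, zi in zip(x, z_col))

x      = [0.9, 0.5, 0.2]                    # stored STM activities (assumed)
z1, z2 = [1.0, 0.0, 0.5], [0.2, 0.8, 0.4]   # adaptive weights to two list chunks

ratio_before = total_input(x, z1) / total_input(x, z2)

w = 0.6                                     # shunting renormalization factor
x_scaled = [w * xi for xi in x]             # a new item reduced old activities
ratio_after = total_input(x_scaled, z1) / total_input(x_scaled, z2)
# ratio_before == ratio_after (up to floating point): no LTM recoding is forced.
```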
  • image p449fig12.47 This figure illustrates the self-similarity in a Masking Field of both its recurrent inhibitory connections (red arrows) and its top-down excitatory priming signals (green arrows) to the item chunk working memory.
    || Both recurrent inhibition and top-down excitatory priming are self-similar in a masking field. MYSELF <-> [MY, MYSELF]
  • image p453fig12.50 The ARTWORD perception cycle shows how sequences of items activate possible list chunks, which compete among each other and begin to send their top-down expectations back to the item working memory. An item-list resonance develops through time as a result.
    || ARTWORD perception cycle. (a) bottom-up activation (b) list chunk competition (c) item-list resonance (d) chunk reset due to habituative collapse.
  • image p483fig13.02 The object-value categories in the orbitofrontal cortex require converging specific inputs from the sensory cortex and nonspecific incentive motivational inputs from the amygdala in order to fire. When the orbitofrontal cortex fires, it can deliver top-down ART Matching Rule priming signals to the sensory cortical area by which it was activated, thereby helping to choose the active recognition categories there that have the most emotional support, while suppressing others, leading to attentional blocking of irrelevant cues.
    || Cognitive-Emotional-Motor (CogEM) model. Drive-> amygdala incentive motivational learning-> orbitofrontal cortex- need converging cue and incentive inputs to fire <-> sensory cortex- conditioned reinforcer learning-> amygdala. CS-> sensory cortex. Motivated attention closes the cognitive-emotional feedback loop, focuses on relevant cues, and causes blocking of irrelevant cues.
  • image p484fig13.04 The top-down feedback from the orbitofrontal cortex closes a feedback loop that supports a cognitive-emotional resonance. If this resonance can be sustained long enough, it enables us to have feelings at the same time that we experience the categories that caused them.
    || Cognitive-Emotional resonance. Basis of "core consciousness" and "the feeling of what happens". (Damasio 1999) derives heuristic version of CogEM model from his clinical data. Drive-> amygdala-> prefrontal cortex-> sensory cortex, resonance around the latter 3. How is this resonance maintained long enough to become conscious?
  • image p523fig14.03 (a) The MOTIVATOR neural model generalizes CogEM by also including the basal ganglia. It can hereby explain and simulate complementary functions of the amygdala and basal ganglia (SNc) during conditioning and learned performance. The basal ganglia generate Now Print signals in response to unexpected rewards. These signals modulate learning of new associations in many brain regions. The amygdala supports motivated attention to trigger actions that are expected to occur in response to conditioned or unconditioned stimuli. Object Categories represent visual or gustatory inputs in anterior inferotemporal (ITA) and rhinal (RHIN) cortices, respectively. Value Categories represent the value of anticipated outcomes on the basis of hunger and satiety inputs, in amygdala (AMYG) and lateral hypothalamus (LH). Object-Value Categories resolve the value of competing perceptual stimuli in medial (MORB) and lateral (ORB) orbitofrontal cortex. The Reward Expectation Filter detects the omission or delivery of rewards using a circuit that spans ventral striatum (VS), ventral pallidum (VP), striosomal delay (SD) cells in the ventral striatum, the pedunculopontine nucleus (PPTN) and midbrain dopaminergic neurons of the substantia nigra pars compacta/ventral tegmental area (SNc/VTA). The circuit that processes CS-related visual information (ITA, AMYG, ORB) operates in parallel with a circuit that processes US-related visual and gustatory information (RHIN, AMYG, MORB). (b) Reciprocal adaptive connections between hypothalamus and amygdala enable amygdala cells to become learned value categories. The bottom region represents hypothalamic cells, which receive converging taste and metabolite inputs whereby they become taste-drive cells. Bottom-up signals from activity patterns across these cells activate competing value categories, or US Value Representations, in the amygdala. 
A winning value category learns to respond selectively to specific combinations of taste-drive activity patterns and sends adaptive top-down priming signals back to the taste-drive cells that activated it. CS-activated conditioned reinforcer signals are also associatively linked to value categories. Adaptive connections end in (approximately) hemidiscs. See the text for details.
    ||
  • image p600fig16.36 The entorhinal-hippocampal system has properties of an ART spatial category learning system, with hippocampal place cells as the spatial categories. See the text for details.
    || Entorhinal-hippocampal interactions as an ART system. Hippocampal place cells as spatial categories. Angular head velocity-> head direction cells-> stripe cells- small scale 1D periodic code (ECIII) SOM-> grid cells- small scale 2D periodic code (ECII) SOM-> place cells- larger scale spatial map (DG/CA3)-> place cells (CA1)-> conjunctive-coding cells (EC V/VI)-> top-down feedback back to stripe cells- small scale 1D periodic code (ECIII). stripe cells- small scale 1D periodic code (ECIII)-> place cells (CA1).
  • image p613fig16.44 The main target position vector (TPV), difference vector (DV), and volitional GO computations in SOVEREIGN that bring together reactive and planned signals to control decision-making and action. See the text for details.
    || Reactive visual TPV (RVT), NETs (NETs), S-MV mismatch (SMVM), NETmv (NETmv), reactive visual TPV storage (RVTS), reactive DV1 (RD1), NET (NET), motivated what and where decisions (MWWD), Planned DV1 (PD1), tonic (Tonic), top-down readout mismatch (TDRM), Parvo gate (tonic) (PG), Orienting GOp offset (OGpO). RVT-> [NETs, RVTS], NETs-> [SMVM, NET], SMVM-> NET, NETmv-> SMVM, RVTS-> [NETs, RD1], NET-> [RD1, PD1, TDRM], MWWD-> PD1, PD1-> Tonic-> TDRM, PG-> NETs, OGpO-> [NETmv, PD1].
  • Grossberg 2021 p229c2h0.60 SMART computer simulations demonstrate that a good enough match of a top-down expectation with a bottom-up feature pattern generates an attentive resonance during which the spikes of active cells synchronize in the gamma frequency range of 20-70 Hz (Figure 5.40). Many labs have reported a link between attention and gamma oscillations in the brain, including two articles published in 2001, one from the laboratory of Robert Desimone when he was at the National Institute of Mental Health in Bethesda (Fries, Reynolds, Rorie, Desimone 2001), and the other from the laboratory of Wolf Singer in Frankfurt (Engel, Fries, Singer 2001). ...
  • Grossberg 2021 p081c2h0.66 sub-section: How does one evolve a computational brain?
    The above discussion illustrates that no single step of theoretical derivation can derive a whole brain. One needs a method for deriving a brain in stages, or cycles, much as evolution has incrementally discovered ever more complex brains over many thousands of years. The following theoretical method has been successfully applied many times since I first used it in 1957. It embodies a kind of conceptual evolutionary process for deriving a brain.

    Because "brain evolution needs to achieve behavioural success", we need to start with data that embody indices of behavioral success. That is why, as illustrated in Figure 2.37 Modelling method and cycle, one starts with Behavioral Data from scores or hundreds of psychological experiments. These data are analyzed as the result of an individual adapting autonomously in real time to a changing world. This is the Art of Modeling. It requires that one be able to infer, from static data curves, the dynamical processes that control individual behaviors occurring in real time. One of the hardest things that I teach my students is "how to think in real time" to be able to carry out this speculative leap.

    Properly carried out, this analysis leads to the discovery of new Design Principles that are embodied by these behavioral processes. The Design Principles highlight the functional meaning of the data, and clarify how individual behaviors occurring in real time give rise to these static data curves.

    These principles are then converted into the simplest Mathematical Model using a method of minimal anatomies, which is a form of Occam's Razor.
  • p190 Howell: [neural microcircuits, modal architectures] used in ART -
    bottom-up filters | top-down expectations | purpose
    instar learning | outstar learning | p200fig05.13 Expectations focus attention: [in, out]star often used to learn the adaptive weights. top-down expectations select, amplify, and synchronize expected patterns of critical features, while suppressing unexpected features
    LGN Lateral Geniculate Nucleus | V1 cortical area | p192fig05.06 focus attention on expected critical features
    EC-III entorhinal stripe cells | CA1 hippocampal place cells | p600fig16.36 entorhinal-hippocampal system has properties of an ART spatial category learning system, with hippocampal place cells as the spatial categories
    auditory orienting arousal | auditory category | p215fig05.28 How a mismatch between bottom-up and top-down patterns can trigger activation of the orienting system A and, with it, a burst of nonspecific arousal to the category level. ART Matching Rule: TD mismatch can suppress a part of F1 STM pattern, F2 is reset if degree of match < vigilance
    auditory stream with/without [silence, noise] gaps | perceived sound continues? | p419fig12.17 The auditory continuity illusion: Backwards in time - How does a future sound let past sound continue through noise? Resonance!
    visual perception, learning and recognition of visual object categories | motion perception, spatial representation and target tracking | p520fig14.02 pART predictive Adaptive Resonance Theory. Many adaptive synapses are bidirectional, thereby supporting synchronous resonant dynamics among multiple cortical regions. The output signals from the basal ganglia that regulate reinforcement learning and gating of multiple cortical areas are not shown.
    red - cognitive-emotional dynamics
    green - working memory dynamics
    black - see [bottom-up, top-down] lists
    EEG with mismatch, arousal, and STM reset events | expected [P120, N200, P300] event-related potentials (ERPs) | p211fig05.21 Sequences of P120, N200, and P300 event-related potentials (ERPs) occur during oddball learning EEG experiments under conditions that ART predicted should occur during sequences of mismatch, arousal, and STM reset events, respectively.
    Cognitive | Emotional | p541fig15.02 nSTART neurotrophic Spectrally Timed Adaptive Resonance Theory: Hippocampus can sustain a Cognitive-Emotional resonance: that can support "the feeling of what happens" and knowing what event caused that feeling. Hippocampus enables adaptively timed learning that can bridge a trace conditioning gap, or other temporal gap between Conditioned Stimulus (CS) and Unconditioned Stimulus (US).

    backgound colours in the table signify :image p009fig01.06 Primacy gradient of activity stored in working memory within a recurrent shunting on-center off-surround network. Rehersal is controlled by a nonspecific rehersal wave and self-inhibitory feedback of the item that is currently being rehearsed. Rehearsal is controlled by a nonspecific rehearsal wave and self-inhibitory feedback of the item that is currently being rehearsed. Green = excitatory, red = inhibitory
    || inputs? -> item and order WM storage -> competitive selection -> rehearsal wave -> outputs
  • image p077fig02.27 A schematic of the on-center off-surround network that occurs in the mudpuppy retina, including three main cell types: receptors, horizontal cells, and bipolar cells.
    || Mechanism: cooperative-competitive dynamics.
    On-center off-surround (Kuffler 1953) cat retina
    Subtractive lateral inhibition (Hartline, Ratliff 1956/7+) limulus retina.
    R receptor -> H horizontal -> B bipolar (Werblin, Dowling, etal 1969+) mudpuppy retina.
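The cooperative-competitive retinal dynamics noted above are instances of a shunting on-center off-surround network, whose equilibrium normalizes the input pattern. A minimal sketch (my illustrative parameters and equation form, not the book's mudpuppy-specific circuit): the steady state computes input ratios, so contrast is preserved while total activity stays bounded.

```python
import numpy as np

def shunting_steady_state(I, A=1.0, B=1.0):
    """Equilibrium of a feedforward shunting on-center off-surround network:
        dx_i/dt = -A*x_i + (B - x_i)*I_i - x_i * sum_{k != i} I_k
    which settles at x_i = B*I_i / (A + sum_k I_k)."""
    I = np.asarray(I, dtype=float)
    return B * I / (A + I.sum())

# Scaling all inputs by 10 leaves the *pattern* (ratios) intact while total
# activity stays bounded by B: the network computes contrast, not intensity.
x_dim = shunting_steady_state([1.0, 2.0, 1.0])
x_bright = shunting_steady_state([10.0, 20.0, 10.0])
```

This self-normalization is the property the notes later call "self-normalizing inhibition" in the cortical 6-to-4 circuits.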
  • image p100fig03.15 A fuzzy band of possible initial grouping orientations allows grouping to get started. Cooperative-competitive feedback via a hierarchical resolution of uncertainty chooses a sharp final grouping that has the most evidence to support it.
    || before choice: transient; after choice: equilibrium
  • image p108fig03.28 The watercolor illusion of Baingio Pinna 1987 can be explained using spatial competition between like-oriented boundary signals. This occurs at what I have called the First Competitive Stage. This is one stage in the brain ...
  • image p146fig04.25 Networks of simple, complex, and hypercomplex cells can create end cuts as an example of hierarchical resolution of uncertainty. See the text for details.
    || How are end cuts created? (Grossberg 1984) Two stages of short-range competition. 1st stage: Simple cells -> complex cells -> hypercomplex - endstopped complex. First competitive stage- across position, same orientation; Second competitive stage- same position, across orientation. -> cooperation.
  • image p148fig04.26 End cuts are formed during neon color spreading in the same way that they are formed at line ends.
    || End cut during neon color spreading.
  • FIRST competitive stage: within orientation, across position;
    SECOND competitive stage: across orientation, within position;
    to generate end cuts.
  • image p149fig04.27 Bipole cells can form boundaries that interpolate end cuts, and use their cooperative-competitive interactions to choose the boundary groupings that have the most support from them.
    || Bipole cells: boundary completion. long-range cooperation & short-range inhibition: complete winning boundary groupings and suppress weaker boundaries.
  • image p161fig04.37 Kanizsa squares that form either collinearly to their inducers (left panel) or perpendicular to them (right panel) confirm predictions of the BCS boundary completion model.
    || Analog-sensitive boundary completion. contour strength vs Kanizsa square image. Increases with "support ratio" (Shipley, Kellman 1992). Inverted-U (Lesher, Mingolla 1993; cf Soriano, Spillmann, Bach 1994)(shifted gratings). p370h0.6 BCS = Boundary Contour System, FCS = Feature Contour System. p161c1h0.85 "... As predicted by the BCS, they found an Inverted-U in contour strength as a function of line density. ... This effect may be explained by the action of the short-range competition that occurs before the stage of long-range cooperative grouping by bipole cells (Figure 4.32). It is thus another example of the balance between cooperative and competitive mechanisms. ..."
  • image p198fig05.10 A competitive learning circuit learns to transform distributed feature patterns into selective responses of recognition categories.
    || Competitive learning and Self-Organized Maps (SOMs). input patterns -> feature level (F1) -> adaptive filter (T=ZS) ->
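The filter-then-compete step of such SOM circuits (bottom-up adaptive filter, winner-take-all choice, instar learning at the winning category) can be sketched as follows. The toy patterns, learning rate, epoch count, and normalization are my illustrative assumptions, not the model's published equations.

```python
import numpy as np

rng = np.random.default_rng(0)

def competitive_learning(patterns, n_categories=2, lr=0.2, epochs=20):
    """Minimal winner-take-all competitive learning (instar rule):
    winner j* = argmax_j w_j . x, then w_j* += lr * (x - w_j*)."""
    dim = patterns.shape[1]
    W = rng.random((n_categories, dim))
    W /= np.linalg.norm(W, axis=1, keepdims=True)
    for _ in range(epochs):
        for x in patterns:
            j = np.argmax(W @ x)        # adaptive filter T = ZS; competition picks the max
            W[j] += lr * (x - W[j])     # only the winning category learns
            W[j] /= np.linalg.norm(W[j])
    return W

# Two input clusters: each category's weight vector converges toward one cluster.
patterns = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
W = competitive_learning(patterns)
```

Note that nothing here stabilizes learning against dense, nonstationary inputs; that gap is exactly the catastrophic-forgetting problem discussed in the next entries, which ART's matching rule addresses.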
  • image p205fig05.18 How catastrophic forgetting can occur in a competitive learning or self-organizing map model due to basic properties of competition and associative learning.
    || Learning from pattern sequences: practicing a sequence of spatial patterns can recode all of them! When is learning stable? Input patterns cannot be too dense relative to the number of categories; either not too many distributed inputs relative to the number of categories, or not too many input clusters.
  • image p226fig05.35 I had shown in 1976 how a competitive learning or self-organizing map model could undergo catastrophic forgetting if the input environment was sufficiently dense and nonstationary, as illustrated by Figure 5.18. Later work with Gail Carpenter showed how, if the ART Matching Rule was shut off, repeating just four input patterns in the correct order could also cause catastrophic forgetting by causing superset recoding, as illustrated in Figure 5.36.
    || Code instability input sequences. D C A; B A; B C = ; |D|<|B|<|C|; where |E| is the number of features in the set E. Any set of input vectors that satisfy the above conditions will lead to unstable coding if they are periodically presented in the order ABCAD and the top-down ART Matching Rule is shut off.
  • image p287fig07.07 Persistence increases with distance between a target and a masking stimulus due to weakening of the spatial competition in the first competitive stage of hypercomplex cells.
    || Persistence data and simulations. Persistence increases with distance between a target and a masking stimulus (Farrell, Pavel, Sperling 1990). There is less spatial competition from the masker to the target when they are more distant, hence the target is more persistent.
  • image p364fig10.13 The bottom-up adaptive filter, intracortical grouping circuit, and intercortical top-down attentional circuit all use the same competitive decision circuit between layers 6 and 4, called the attention-preattention interface, with which to select the featural patterns that will be processed.
    || Bottom-up filters and intracortical grouping feedback use the same 6-to-4 decision circuit, LGN-> Vx[6,4,2/3]. competitive decision circuit, modulatory on-center off-surround network. Top-down intercortical attention also uses the same 6-to-4 decision circuit!
  • image p437fig12.33 Item and Order working memory models explain free recall data, as well as many other psychological and neurobiological data, by simulating how temporal series of events are stored as evolving spatial patterns of activity at content-addressable item categories. The categories with the largest activities are rehearsed first, and self-inhibit their activity as they do so in order to prevent them from being rehearsed perseveratively. The laws whereby the items are stored in working memory obey basic design principles that ensure that list categories, or chunks, of sequences of stored items can be stably remembered.
    || Working memory models: item and order, or competitive queuing (Grossberg 1978; Houghton 1990; Page, Norris 1998). Event sequence in time stored as an evolving spatial pattern of activity. Primacy gradient of working memory activation stores correct temporal order at content-addressable cells. Maximally activated cell population is performed next when a rehearsal wave is turned on. Output signal from chosen cell population inhibits its own activity to prevent perseveration: inhibition of return. Iterate until entire sequence is performed.
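The recall loop just listed (rehearsal wave reads out the maximally active item, which then self-inhibits) is simple to sketch. The numeric primacy gradient below is an illustrative assumption, not data from the model.

```python
import numpy as np

def rehearse(primacy_gradient):
    """Item-and-order working memory recall (competitive queuing):
    repeatedly choose the most active stored item, output it, then
    self-inhibit it (inhibition of return) so it is not repeated."""
    activity = np.array(primacy_gradient, dtype=float)
    order = []
    while np.any(activity > 0):
        j = int(np.argmax(activity))   # rehearsal wave reads out the max-activity item
        order.append(j)
        activity[j] = 0.0              # output self-inhibits its stored activity
    return order

# A primacy gradient (earlier items stored more actively) replays correct order.
assert rehearse([0.9, 0.7, 0.5, 0.3]) == [0, 1, 2, 3]
```

Any stored activity pattern is replayed in descending-activity order, which is why a primacy gradient yields veridical serial recall while a degraded gradient yields order errors.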
  • image p488fig13.12 (left column) How incentive motivational feedback amplifies activity of a sensory cortical cell population. (right column) A sensory cortical cell population whose activity is amplified by incentive motivational feedback can suppress the activities of less activated populations via self-normalizing recurrent competitive interactions.
    || Motivational feedback and blocking. (left) sensory input CS, STM activity without motivational feedback, STM activity with motivational feedback. (right) STM suppressed by competition, STM amplified by (+) feedback.
  • image p510fig13.39 Shunting competition and informational noise suppression in affective gated dipoles, plus back-propagating action potentials for teaching signals, enable the net normalized adaptive weights to be learned. They never saturate!
    || Learn net dipole output pattern. Opponent "decision" controls learning. Cf. competitive learning. Learning signal, opponent extinction.
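A minimal sketch of a gated dipole may help here: ON and OFF channels share a tonic arousal input, each is gated by a habituative transmitter, and the two compete subtractively, so offset of the phasic input produces a transient antagonistic rebound in the OFF channel. All parameters and the Euler discretization are my illustrative choices, not the book's equations.

```python
def gated_dipole(inputs, arousal=0.5, dt=0.1, A=0.1, B=1.0, C=1.0):
    """Opponent gated dipole: each channel's signal S is gated by a
    habituative transmitter z with dz/dt = A*(B - z) - C*S*z.  When the
    phasic ON input J shuts off, the less-habituated OFF channel
    transiently wins: an antagonistic rebound."""
    z_on = z_off = B
    out = []
    for J in inputs:                        # J = phasic input to the ON channel
        S_on, S_off = arousal + J, arousal  # both channels share tonic arousal
        z_on += dt * (A * (B - z_on) - C * S_on * z_on)
        z_off += dt * (A * (B - z_off) - C * S_off * z_off)
        on, off = S_on * z_on, S_off * z_off
        out.append((max(on - off, 0.0), max(off - on, 0.0)))  # opponent competition
    return out

# Input on for a while, then off -> the OFF channel rebounds transiently.
resp = gated_dipole([1.0] * 200 + [0.0] * 200)
```

The rebound at input offset is what lets the opponent "decision" control learning of the net dipole output pattern, as in the figure's caption.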
  • p001 Chapter 1 Overview - From Complementary Computing and Adaptive Resonance to conscious awareness
  • p250 Chapter 6 Conscious seeing and invariant recognition - Complementary cortical streams coordinate attention for seeing and recognition
  • p289 Chapter 8 How we see and recognize object motion - Visual form and motion perception obey complementary laws
  • p337 Chapter 9 Target tracking, navigation, and decision-making - Visual tracking and navigation obey complementary laws
  • image p029tbl01.01 Some pairs of complementary processing streams.
    ||
    visual boundary: interblob stream V1-V2-V4 <-> visual surface: blob stream V1-V2-V4
    visual boundary: interblob stream V1-V2-V4 <-> visual motion: magno stream V1-MT-MST
    WHAT stream <-> WHERE stream
    perception & recognition: inferotemporal & prefrontal areas <-> space & action: parietal & prefrontal areas
    object tracking: MT interbands & MSTv <-> optic flow navigation: MT+ bands & MSTd
    motor target position: motor & parietal cortex <-> volitional speed: basal ganglia
  • image p030tbl01.02 The What and Where cortical processing streams obey complementary laws. These laws enable the What stream to rapidly and stably learn invariant object categories without experiencing catastrophic forgetting, while the Where stream learns labile spatial and action representations to control actions that are aimed towards these objects.
    ||
    WHAT | WHERE
    spatially-invariant object learning and recognition | spatially-variant reaching and movement
    fast learning without catastrophic forgetting | continually update sensory-motor maps and gains
    IT InferoTemporal Cortex | PPC Posterior Parietal Cortex
    matching: excitatory | inhibitory
    learning: match | mismatch
  • image p087fig03.01 A macrocircuit of key visual processes (in green) and the cortical areas in which they primarily occur (in red), from the retina to the Prefrontal Cortex (PFC), including both the What and Where cortical streams. The [bottom-up, horizontal, and top-down] interactions help each of these processes to overcome computationally complementary processing deficiencies that they would experience without them, and also to read-out top-down expectations that help to stabilize learning while they focus attention on salient objects and positions.
    || Emerging unified theory of visual intelligence. [What, Where] streams. Bottom-up and top-down interactions overcome COMPLEMENTARY processing deficiencies.
  • image p094fig03.07 The processes of boundary completion and surface filling-in are computationally complementary.
    ||
    Boundary completion | Surface filling-in
    outward | inward
    oriented | unoriented
    insensitive to direction-of-contrast | sensitive to direction-of-contrast
  • image p174fig04.51 The same feedback circuit that ensures complementary consistency between boundaries and surfaces also, automatically, initiates figure-ground separation! See the text for details.
    || before feedback: [V1 -> V2 pale stripe -> V2 thin stripe, "attention pointers" (Cavanagh etal 2010)]; after feedback: [V1 + V2 thin stripe] -> V2 pale stripe via contrast sensitive [excitation, inhibition] for depths [1, 2] -> object recognition
  • image p176fig04.53 The on-center off-surround network within position and across depth helps to explain why brighter Kanizsa squares look closer.
    || inhibition vs. depth. p176c1h0.25 "... to qualitatively understand how this example of proximity-luminance covariance works. It follows directly from the boundary pruning by surface contour feedback signals (Figure 4.51) that achieves complementary consistency and initiates figure-ground perception. ...". p176c1h0.45 "... these inhibitory signals are part of an off-surround network whose strength decreases as the depth difference increases between the surface that generates the signal and its recipient boundaries. ...". p176c1h0.8 "... Within FACADE theory, the perceived depth of a surface is controlled by the boundaries that act as its filling-in generators and barriers (Figure 3.22), since these boundaries select the depth-selective FIDOs within which filling-in can occur, and thereby achieve surface capture. These boundaries, in turn, are themselves strengthened after surface-to-boundary contour feedback eliminates redundant boundaries that cannot support successful filling-in (Figure 4.51). These surface contour feedback signals have precisely the properties that are needed to explain why brighter Kanizsa squares look closer! ..."
  • image p211fig05.20 The PN and N200 event-related potentials are computationally complementary events that are computed within the attentional and orienting systems.
    || PN and N200 are complementary waves. PN [top-down, conditionable, specific] match; N200 [bottom-up, unconditionable, nonspecific] mismatch
  • image p267fig06.14 Feedback from object surfaces to object boundaries uses surface contours. This feedback assures complementary consistency and enables figure-ground separation. A corollary discharge of the surface contours can be used to compute salient object feature positions.
    || Perceptual consistency and figure-ground separation.
  • image p314fig08.34 The VISTARS model for visually-based spatial navigation. It uses the Motion BCS as a front end and feeds it output signals into two computationally complementary cortical processing streams for computing optic flow and target tracking information.
    || VISTARS navigation model (Browning, Grossberg, Mingolla 2009). Use FORMOTION model as front end for higher level navigational circuits: input natural image sequences -> estimate heading (MT+)-MSTd -> additive processing -> estimate object position (MT-)-MSTv direction and speed subtractive processing -> Complementary Computing. [optic flow navigation, object tracking]
  • image p523fig14.03 (a) The MOTIVATOR neural model generalizes CogEM by also including the basal ganglia. It can hereby explain and simulate complementary functions of the amygdala and basal ganglia (SNc) during conditioning and learned performance. The basal ganglia generate Now Print signals in response to unexpected rewards. These signals modulate learning of new associations in many brain regions. The amygdala supports motivated attention to trigger actions that are expected to occur in response to conditioned or unconditioned stimuli. Object Categories represent visual or gustatory inputs in anterior inferotemporal (ITA) and rhinal (RHIN) cortices, respectively. Value Categories represent the value of anticipated outcomes on the basis of hunger and satiety inputs, in amygdala (AMYG) and lateral hypothalamus (LH). Object-Value Categories resolve the value of competing perceptual stimuli in medial (MORB) and lateral (ORB) orbitofrontal cortex. The Reward Expectation Filter detects the omission or delivery of rewards using a circuit that spans ventral striatum (VS), ventral pallidum (VP), striosomal delay (SD) cells in the ventral striatum, the pedunculopontine nucleus (PPTN) and midbrain dopaminergic neurons of the substantia nigra pars compacta/ventral tegmental area (SNc/VTA). The circuit that processes CS-related visual information (ITA, AMYG, ORB) operates in parallel with a circuit that processes US-related visual and gustatory information (RHIN, AMYG, MORB). (b) Reciprocal adaptive connections between hypothalamus and amygdala enable amygdala cells to become learned value categories. The bottom region represents hypothalamic cells, which receive converging taste and metabolite inputs whereby they become taste-drive cells. Bottom-up signals from activity patterns across these cells activate competing value categories, or US Value Representations, in the amygdala. A winning value category learns to respond selectively to specific combinations of taste-drive activity patterns and sends adaptive top-down priming signals back to the taste-drive cells that activated it. CS-activated conditioned reinforcer signals are also associatively linked to value categories. Adaptive connections end in (approximately) hemidiscs. See the text for details.
    ||
  • image p548fig15.16 Homologous recognition learning and reinforcement learning macrocircuits enable adaptively timed conditioning in the reinforcement learning circuit to increase inhibition of the orienting system at times when a mismatch in the recognition system would have reduced inhibition of it.
    || Homolog between ART and CogEM model, complementary systems. [Recognition, Reinforcement] learning vs [Attentional, Orienting] system. Reinforcement: timing, drive representation.
  • p353 Chapter 10 Laminar computing by cerebral cortex - Towards a unified theory of biological and artificial intelligence
  • image p030fig01.20 A schematic cross-section of a slice of laminar neocortex whose cells are organized in a characteristic way in six layers, which themselves may be organized into distinct sublaminae. The computational paradigm of Laminar Computing attempts to show how different parts of neocortex can represent and control very different kinds of behavior - including vision, speech, and cognition - using specializations of the same canonical laminar cortical design.
    || Projection fibres: Cortico[spinal, bulbar, pontine, striate, reticular, etc]; Thalamocortical fibres; Diffuse cortical afferent fibres: [nonspecific thalamocortical, Cholinergic, Monoaminergic]; Corticocortical efferents; Projection [cell, fibre]; Corticocortical efferent terminals.
  • image p141fig04.19 A laminar cortical circuit for computing binocular disparities in layer 3B of V1 at binocular simple cells. These cells add positionally disparate inputs from like-polarized monocular simple cells (layer 4 of V1). Binocular simple cells at each position that are sensitive to opposite polarities then add their outputs at complex cells in layer 2/3. Chapter 10 will explain how these laminar circuits work in greater detail.
    || Laminar cortical circuit for complex cells. [left, right] eye.
    V1 layer | description
    2/3A | complex cells
    3B | binocular simple cells
    4 | monocular simple cells
  • image p163fig04.39 A schematic of the LAMINART model that explains key aspects of laminar visual cortical anatomy and dynamics. LGN -> V1 [6, 4, 2/3] -> V2 [6, 4, 2/3]
    || p163c1h0.6 "... The first article about laminar computing ... proposed how the laminar cortical model could process 2D pictures using bottom-up filtering and horizontal bipole grouping interactions (Grossberg, Mingolla, Ross 1997). In 1999, I was able to extend the model to also include top-down circuits for expectation and attention (Grossberg 1999)(right panel). Such a synthesis of laminar bottom-up, horizontal, and top-down circuits is characteristic of the cerebral cortex (left panel). I called it LAMINART because it began to show how properties of Adaptive Resonance Theory, or ART, notably the ART prediction about how top-down expectations and attention work, are realized by identical cortical cells and circuits. You can immediately see from the schematic laminar circuit diagram ... (right panel) that circuits in V2 seem to repeat circuits in V1, albeit with a larger spatial scale, despite the fact that V1 and V2 carry out different functions. How this anatomical similarity can coexist with functional diversity will be clarified in subsequent sections and chapters. It enables different kinds of biological intelligence to communicate seamlessly while carrying out their different psychological functions. ..."
  • image p174fig04.52 An example of how the 3D LAMINART model can transform the two monocular images of the random dot stereogram in the top row into the three depth-separated surface representations in the bottom row.
    || Stereogram surface percepts: surface lightnesses are segregated in depth (Fang and Grossberg 2009). [left, right] inputs, [far, fixation, near] planes. Contrast with algorithms that just compute disparity matches and let computer code build the surface, eg (Marr, Poggio, etal 1974).
  • image p182fig04.58 LAMINART model processing stage that are sufficient to explain many percepts of transparency, including those summarized in Figure 4.57.
    || [left, right] eye, [LGN, V1 [6, 4, 3B, 2/3 A], V2 [4, 2/3]], [mo, bi]nocular cart [simple, complex] cells, [excita, inhibi]tory cart [connection, cell]s.
  • image p230fig05.38 The Synchronous Matching ART, or SMART, model includes spiking neurons in a laminar cortical hierarchy. I developed SMART with my PhD student Massimiliano Versace. By unlumping LAMINART to include spiking neurons, finer details of neurodynamics, such as the existence of faster gamma oscillations during good enough matches, and slower beta oscillations during bad enough mismatches, could be shown as emergent properties of network interactions.
    || Second order thalamus -> specific thalamic nucleus -> Thalamic reticular nucleus -> neocortical laminar circuit [6ll, 6l, 5, 2/3, 1] -> Higher order cortex. Similar for First order thalamus -> First order cortex, with interconnection to Second order, nonspecific thalamic nucleus
  • image p231fig05.39 The SMART hypothesis testing and learning cycle predicts that vigilance increases when a mismatch in subcortical regions like the nonspecific thalamus activates the nucleus basalis of Meynert which, in turn, broadcasts a burst of the neurotransmitter acetylcholine, or ACh, to deeper cortical layers. Due to the way in which LAMINART proposes that cortical matching and mismatching occurs, this ACh burst can increase vigilance and thereby trigger a memory search. See the text for details.
    || [BU input, [, non]specific thalamic nucleus, thalamic reticular nucleus, neocortical laminar circuit] cart [Arousal, Reset, Search, Vigilance]
  • image p232fig05.41 (a)-(c). The sequence of interlaminar events that SMART predicts during a mismatch reset. (d) Some of the compatible neurophysiological data.
    || Mismatch causes layer 5 dendritic spikes that trigger reset. (a) Arousal causes increase in nonspecific thalamic nuclei firing rate and layer 5 dendritic and later somatic spikes (Larkum and Zhu 2002, Williams and Stuart 1999) (b) Layer 5 spikes reach layer 4 via layer 6i and inhibitory neurons (Lund and Boothe 1975, Gilbert and Wiesel 1979) (c) Habituative neurotransmitters in layer 6i shift the balance of active cells in layer 4 (Grossberg 1972, 1976) (d) Dendritic stimulation fires layer 5 (Larkum and Zhu 2002) stimulation apical dendrites of nonspecific thalamus
  • image p246fig05.48 Microcircuits of the LAMINART model that I developed with Rajeev Raizada. See the text for details of how they integrate bottom-up adaptive filtering, horizontal bipole grouping, and top-down attentional matching that satisfied the ART Matching Rule.
    ||
  • image p248fig05.49 This circuit of the LAMINART model helps to explain properties of Up and Down states during slow wave sleep, and how disturbances in ACh dynamics can disrupt them.
    ||
  • image p356fig10.03 Laminar computing achieves at least three basic properties of visual processing that have analogs in all biologically intelligent behaviors. These properties may be found in all cortical circuits in specialized form.
    || What does Laminar Computing achieve? 1. Self-stabilizing development and learning; 2. Seamless fusion of a) pre-attentive automatic bottom-up processing, b) attentive task-selective top-down processing; 3. Analog coherence: Solution of Binding Problem for perceptual grouping without loss of analog sensitivity. Even the earliest visual cortical stages carry out active adaptive information processing: [learn, group, attention]ing
  • image p357fig10.04 Laminar Computing achieves its properties by computing in a new way that synthesizes the best properties of feedforward and feedback interactions, analog and digital computations, and preattentive and attentive learning. The property of analog coherence enables coherent groupings and decisions to form without losing sensitivity to the amount of evidence that supports them.
    || Laminar Computing: a new way to compute. 1. Feedforward and feedback: a) Fast feedforward processing when data are unambiguous (eg Thorpe etal), b) slower feedback chooses among ambiguous alternatives [self-normalizing property, real-time probability theory], c) A self-organizing system that trades certainty against speed: Goes beyond Bayesian models! 2. Analog and Digital: Analog Coherence combines the stability of digital with the sensitivity of analog. 3. Preattentive and Attentive Learning: Reconciles the differences of (eg) Helmholtz and Kanizsa, "A preattentive grouping is its own
  • image p362fig10.11 Feedback between layer 2/3 to the layer 6-to-4-to-2/3 feedback loop chooses the strongest grouping in cases where there is more than one. If only one grouping exists, then the circuit can function very quickly in a feedforward manner. When multiple groupings exist, the cortex "runs as fast as it can" to select the one with the most evidence to support it using the self-normalizing inhibition in the layer 6-to-4 off-surround.
    || How is the final grouping selected? Folded feedback LGN-> 6-> 4-> 2/3. 1. Layer 2/3 groupings feed back into 6-to-4 on-center off-surround: a) direct layer 2/3 -to-6 path; b) can also go via layer 5 (Blasdel etal 1985; Kisvarday etal 1989). 2. Strongest grouping enhanced by its on-center. 3. Inputs to weaker groupings suppressed by off-surround. 4. Interlaminar feedback creates functional columns. Activities of conflicting groupings are reduced by self-normalizing inhibition, slowing processing; intracortical feedback selects and contrast-enhances the winning grouping, speeding processing.
  • image p363fig10.12 The same laminar circuit design repeats in V1 and V2, albeit with specializations that include longer horizontal grouping axons and figure-ground separation interactions.
    || V2 repeats V1 circuitry at larger spatial scale, LGN-> V1[6,4,2/3]-> V2[6,4,2/3]. V2 layer 2/3 horizontal axons longer-range than in V1 (Amir etal 1993). Therefore, longer-range groupings can form in V2 (Von der Heydt etal 1984)
  • image p375fig11.07 The 3D LAMINART model uses both monocular and binocular simple cells to binocularly fuse like image contrasts. The remainder of the model generates 3D boundary and surface representations of multiple kinds of experiments as well as of natural scenes.
    || 3D LAMINART model (Cao, Grossberg 2005). [left, right] eye cart [LGN, V1 blob [4->3B->2/3A], V2 thin stripe [4->2/3A], V4]. V1 blob [V1-4 monocular, V1 interior binocular] simple cells. [complex, simple, inhibitory] cells, on-center off-surround
  • image p376fig11.10 The 3D LAMINART model shows how the disparity filter can be integrated into the circuit that completes 3D boundary representations using bipole grouping cells. It also explains how surface contours can strengthen boundaries that succeed in generating closed filling-in domains.
    || 3D LAMINART model (Cao, Grossberg 2005). [left, right] eye cart [LGN, V1 blob [4->3B->2/3A] surface contour, V2 thin stripe (monocular surface) [4->2/3A], V2 interior [disynaptic inhibitory interneurons, bipole grouping cells, disparity filter, V4 binocular surface]. [complex, simple, inhibitory] cells, on-center off-surround
  • image p378fig11.12 How monocular and binocular information are combined in V1 and V2 in the 3D LAMINART model.
    || Model utilizes monocular information. [left, right] eye cart V1-[4 monocular simple, 3B binocular simple, complex2/3A [mo,bi]nocular] cells, V2-4 binocular complex cells. black = monocular cells, blue = binocular cells. In V2, monocular inputs add to binocular inputs along the line of sight and contribute to depth perception.
  • image p379fig11.13 How the 3D LAMINART model explains DaVinci stereopsis. All the stages of boundary and surface formation are color coded to clarify their explanation. Although each mechanism is very simple, when all of them act together, the correct depthful surface representation is generated. See the text for details.
    || DaVinci stereopsis (Nakayama, Shimojo 1990). An emergent property of the previous simple mechanisms working together. [V1 binocular boundaries -> V2 initial boundaries -> V2 final boundaries -> V4 surface] pair [Binocular match: boundaries of thick bar -> Add monocular boundaries along lines-of-sight -> Line-of-sight inhibition kills weaker vertical boundaries -> 3D surface percept not just a disparity match!] pair [Binocular match: right edge of thin and thick bars -> Strongest boundaries: binocular and monocular boundaries add -> Vertical boundaries from monocular left edge of thin bar survive -> Filling-in contained by connected boundaries]. cart [very near, near, fixation plane, far, very far]
  • image p380fig11.14 The model explanation of DaVinci stereopsis when the input stimuli have opposite contrast polarities.
    || Polarity-reversed Da Vinci stereopsis (Nakayama, Shimojo 1990). Same explanation! (... as Figure 11.13 ...) [V1 binocular boundaries -> V2 initial boundaries -> V2 final boundaries -> V4 surface] cart [very near, near, fixation plane, far, very far]
  • image p395fig11.34 A comparison of the properties of other rivalry models with those of the 3D LAMINART model (surrounded by red border). Significantly, only 3D LAMINART explains both stable vision and rivalry (green border).
    || Comparison of rivalry models
  • image p401fig11.41 The 3D LAMINART model proposes how angle cells and disparity-gradient cells interact through learning to generate 3D representations of slanted objects.
    || 3D LAMINART model. [LGN, V1, V2, V4] Four key additions: 1. Angle cells - tuned to various angles; 2. Disparity-gradient cells - tuned to disparity gradients in the image; 3. Weights from [angle to disparity-gradient] cells - learned while viewing 3D images; 4. Colinear grouping between [angle to disparity-gradient] cells - disambiguates ambiguous groupings.
  • image p402fig11.44 3D scenic reconstruction of the image in Figure 11.43 by the 3D LAMINART model.
    || Disparity [5, 6, 8, 10, 11, 14]: images of objects in common depth planes
  • image p436fig12.30 The conscious ARTWORD, or cARTWORD, laminar cortical speech model simulates how future context can disambiguate noisy past speech sounds in such a way that the completed percept is consciously heard to proceed from past to future as a feature-item-list resonant wave propagates through time.
    || cARTWORD: Laminar cortical model macrocircuit (Grossberg, Kazerounian 2011) Simulates PHONEMIC RESTORATION: Cognitive Working Memory (processed item sequences) - [Excitatory-> inhibitory-> habituative-> adaptive filter-> adaptive filter-> adaptive filter with depletable synapse-> Acoustic [item, feature]
  • image p444fig12.42 The LIST PARSE laminar cortical model of working memory and list chunking that I published with Lance Pearson in 2008 simulated the Averbeck etal data in Figure 12.41, as in the left column of the figure. It also simulated cognitive data about working memory storage by human subjects. See the text for details.
    || LIST PARSE: Laminar cortical model of working memory and list chunking (Grossberg, Pearson 2008). Simulates data about: [immediate, delayed, continuous] distractor free recall; immediate serial recall; and variable-speed sequential performance of motor acts. [velocity, acceleration] vs time (ms) from recall cue.
  • image p445fig12.43 The LIST PARSE laminar cortical Cognitive Working Memory circuit, that is proposed to occur in ventrolateral prefrontal cortex, is homologous to the LAMINART circuit that models aspects of how visual cortex sees. The Motor Working Memory, VITE Trajectory Generator, and Variable-Rate Volitional Control circuits model how other brain regions, including dorsolateral prefrontal cortex, motor cortex, cerebellum, and basal ganglia, interact with the Cognitive Working Memory to control working memory storage and variable-rate performance of item sequences.
    || List parse circuit diagram. Connectivity convention. sequence chunks [<- BU filter, TD expectation ->] working memory. Working memory and sequence chunking circuit is homologous to visual LAMINART circuit!
  • image p498fig13.22 (left column, top row) Adaptive filtering and conditioned arousal are both needed to regulate what cues can learn to activate particular space-time patterns. These developments lead inexorably to basic cognitive abilities, as embodied in the 3D LAMINART models for 3D vision and figure-ground perception (Chapter 11) and the 3D ARTSCAN SEARCH model for invariant object learning, recognition, and 3D search (Chapter 6). (right column, top row) Conditioned arousal enables only emotionally important cues to activate a motivationally relevant space-time pattern. (bottom row) Conditioned arousal and drive representations arise naturally from the unlumping of avalanche circuits to make them selective to motivationally important cues. The MOTIVATOR model is a natural outcome of this unlumping process (this chapter).
    || (top) Adaptive filtering and Conditioned arousal. Towards Cognition: need to filter inputs to the command cell. Towards Emotion: important signals turn arousal ON and OFF. (bottom) Conditioned arousal and Drive representations. Competition between conditioned arousal sources at drive representations, eg amygdala.
  • image p038fig01.25 The ART Matching Rule stabilizes real time learning using a [top-down, modulatory on-center, off-surround] network. Object attention is realized by such a network. See text for additional discussion.
    || ART Matching Rule [volition, categories, features]. [one, two] against one.
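The ART Matching Rule caption above can be sketched numerically. This is my own minimal simplification, not the book's equations; the function name and `gain` parameter are assumptions. It shows the "two against one" vs "one against one" property: a top-down expectation amplifies expected features that have bottom-up support, suppresses unexpected ones, and cannot create activity on its own.

```python
import numpy as np

def art_match(bottom_up, top_down=None, gain=0.5):
    """Sketch of the ART Matching Rule as a [top-down, modulatory
    on-center, off-surround] network (my simplification)."""
    bu = np.asarray(bottom_up, dtype=float)
    if top_down is None:
        # No expectation active: the bottom-up pattern passes through unchanged.
        return bu.copy()
    on_center = np.asarray(top_down) > 0
    # "Two against one": features with both bottom-up and top-down support
    # are amplified; the off-surround suppresses active but unexpected features.
    # "One against one": top-down alone cannot fire a cell (bu == 0 stays 0).
    return np.where(on_center, bu * (1.0 + gain), 0.0)
```

Note that `art_match(zeros, expectation)` returns zeros: the expectation is modulatory and "cannot create something out of nothing", the signature property the later auditory continuity entries rely on.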
  • image p163fig04.39 A schematic of the LAMINART model that explains key aspects of laminar visual cortical anatomy and dynamics. LGN -> V1 [6, 4, 2/3] -> V2 [6, 4, 2/3]
    || p163c1h0.6 "... The first article about laminar computing ... proposed how the laminar cortical model could process 2D pictures using bottom-up filtering and horizontal bipole grouping interactions (Grossberg, Mingolla, Ross 1997). In 1999, I was able to extend the model to also include top-down circuits for expectation and attention (Grossberg 1999)(right panel). Such a synthesis of laminar bottom-up, horizontal, and top-down circuits is characteristic of the cerebral cortex (left panel). I called it LAMINART because it began to show how properties of Adaptive Resonance Theory, or ART, notably the ART prediction about how top-down expectations and attention work, are realized by identical cortical cells and circuits. You can immediately see from the schematic laminar circuit diagram ... (right panel) that circuits in V2 seem to repeat circuits in V1, albeit with a larger spatial scale, despite the fact that V1 and V2 carry out different functions. How this anatomical similarity can coexist with functional diversity will be clarified in subsequent sections and chapters. It enables different kinds of biological intelligence to communicate seamlessly while carrying out their different psychological functions. ..."
  • image p192fig05.06 Bottom-up and top-down circuits between the LGN and cortical area V1. The top-down circuits obey the ART Matching Rule for matching with bottom-up input patterns and focussing attention on expected critical features.
    || Model V1-LGN circuits, version [1, 2]. retina -> LGN relay cells -> interneurons -> cortex [simple, endstopped] cells -> cortex complex cells
  • image p200fig05.13 Instar and outstar learning are often used to learn the adaptive weights in the bottom-up filters and top-down expectations that occur in ART. The ART Matching Rule for object attention enables top-down expectations to select, amplify, and synchronize expected patterns of critical features, while suppressing unexpected features.
    || Expectations focus attention: feature pattern (STM), Bottom-Up adaptive filter (LTM), Category (STM), competition, Top-Down expectation (LTM); ART Matching Rule: STM before top-down matching, STM after top-down matching (attention!)
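The instar/outstar pairing in the caption above can be sketched as gated steepest-descent learning laws (a hedged sketch; learning rate and variable names are mine):

```python
import numpy as np

def instar_update(w, x, y, lr=0.1):
    # Instar (bottom-up filter) learning: when the category cell y is
    # active, its incoming weights w track the presynaptic pattern x.
    return w + lr * y * (x - w)

def outstar_update(w, x, y, lr=0.1):
    # Outstar (top-down expectation) learning: when the source cell x is
    # active, its outgoing weights w track the postsynaptic pattern y.
    return w + lr * x * (y - w)
```

Repeated presentation of the same input drives the instar weights toward that pattern, which is how a category comes to "expect" its critical feature pattern; the outstar learns the pattern it will later read out as a top-down expectation.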
  • image p207fig05.19 The ART hypothesis testing and learning cycle. See the text for details about how the attentional system and orienting system interact in order to incorporate learning of novel categories into the corpus of already learned categories without causing catastrophic forgetting.
    ||
  • image p211fig05.21 Sequences of P120, N200, and P300 event-related potentials occur during oddball learning EEG experiments under conditions that ART predicted should occur during sequences of mismatch, arousal, and STM reset events, respectively.
    || ERP support for mismatch-mediated reset: event-related potentials: human scalp potentials. ART predicted correlated sequences of P120-N200-P300 Event Related Potentials during oddball learning. P120 mismatch; N200 arousal/novelty; P300 STM reset. Confirmed in (Banquet and Grossberg 1987)
  • image p215fig05.28 How a mismatch between bottom-up and top-down input patterns can trigger activation of the orienting system A and, with it, a burst of nonspecific arousal to the category level.
    || Mismatch -> inhibition -> arousal -> reset. BU input orienting arousal, BU+TD mismatch arousal and reset. ART Matching Rule: TD mismatch can suppress a part of F1 STM pattern, F2 is reset if degree of match < vigilance
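The vigilance test in the caption above ("F2 is reset if degree of match < vigilance") can be written out directly. This is a sketch: the min-based match ratio is the fuzzy-ART form, and the names are my assumptions.

```python
import numpy as np

def vigilance_test(bottom_up, prototype, vigilance):
    # Degree of match: fraction of the bottom-up pattern that survives
    # top-down matching (component-wise minimum with the prototype).
    matched = np.minimum(bottom_up, prototype)
    degree_of_match = matched.sum() / bottom_up.sum()
    # Mismatch -> inhibition -> arousal -> reset of the active F2 category.
    reset = degree_of_match < vigilance
    return degree_of_match, reset
```

High vigilance forces reset and search for a better (or new) category; low vigilance tolerates the mismatch and lets the current category learn.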
  • image p221fig05.31 A system like Fuzzy ARTMAP can learn to associate learned categories in one ART network with learned categories in a second ART network. Because both bottom-up and top-down interactions occur in both networks, a bottom-up input pattern to the first ART network can learn to generate a top-down output pattern from the second ART network.
    || Fuzzy ARTMAP. Match tracking realizes minimax learning principle: vigilance increases to just above the match ratio of prototype / exemplar, thereby triggering search
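Match tracking as described in the caption above can be sketched on top of the fuzzy match ratio (a hedged sketch; `eps` and the names are my assumptions, not Fuzzy ARTMAP's published parameters): after a predictive error, vigilance is raised to just above the current match ratio of prototype to exemplar, so the active category now fails the vigilance test and memory search is triggered.

```python
import numpy as np

def match_tracking(exemplar, prototype, eps=1e-3):
    # Fuzzy ART match ratio |min(exemplar, prototype)| / |exemplar|.
    match = np.minimum(exemplar, prototype).sum() / exemplar.sum()
    # Raise vigilance just above the match ratio: the currently active
    # category now fails the vigilance test, which triggers search.
    vigilance = match + eps
    return vigilance, match < vigilance  # reset is now guaranteed
```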
  • image p226fig05.35 I had shown in 1976 how a competitive learning or self-organizing map model could undergo catastrophic forgetting if the input environment was sufficiently dense and nonstationary, as illustrated by Figure 5.18. Later work with Gail Carpenter showed how, if the ART Matching Rule was shut off, repeating just four input patterns in the correct order could also cause catastrophic forgetting by causing superset recoding, as illustrated in Figure 5.36.
    || Code instability input sequences. D C A; B A; B C = ; |D|<|B|<|C|; where |E| is the number of features in the set E. Any set of input vectors that satisfy the above conditions will lead to unstable coding if they are periodically presented in the order ABCAD and the top-down ART Matching Rule is shut off.
  • image p226fig05.36 Column (a) shows catastrophic forgetting when the ART Matching Rule is not operative. It is due to superset recoding. Column (b) shows how category learning quickly stabilizes when the ART Matching Rule is restored.
    || Stable and unstable learning, superset recoding
  • image p228fig05.37 A macrocircuit of the neurotrophic Spectrally Timed ART, or nSTART, model. I developed nSTART with my PhD student Daniel Franklin. It proposes how adaptively timed learning in the hippocampus, bolstered by Brain Derived Neurotrophic Factor, or BDNF, helps to ensure normal memory consolidation.
    || habituative gates, CS, US, Thalamus (sensory cortex, category learning, conditioned reinforcer learning, adaptively timed learning and BDNF), Amygdala (incentive motivation learning), Hippocampus (BDNF), Prefrontal Cortex (attention), Pontine nuclei, Cerebellum (adaptively timed motor learning)
  • image p230fig05.38 The Synchronous Matching ART, or SMART, model includes spiking neurons in a laminar cortical hierarchy. I developed SMART with my PhD student Massimiliano Versace. By unlumping LAMINART to include spiking neurons, finer details of neurodynamics, such as the existence of faster gamma oscillations during good enough matches, and slower beta oscillations during bad enough mismatches, could be shown as emergent properties of network interactions.
    || Second order thalamus -> specific thalamic nucleus -> Thalamic reticular nucleus -> neocortical laminar circuit [6ll, 6l, 5, 2/3, 1] -> Higher order cortex. Similar for First order thalamus -> First order cortex, with interconnection to Second order, nonspecific thalamic nucleus
  • image p240fig05.44 When an algebraic exemplar model is realized using only local computations, it starts looking like an ART prototype model.
    || How does the model know which exemplars are in category A? BU-TD learning. How does a NOVEL test item access category A?
  • image p241fig05.45 The 5-4 category structure is one example of how an ART network learns the same kinds of categories as human learners. See the text for details.
    || 5-4 Category structure. A1-A5: closer to the (1 1 1 1) prototype; B1-B4: closer to the (0 0 0 0) prototype
  • image p246fig05.48 Microcircuits of the LAMINART model that I developed with Rajeev Raizada. See the text for details of how they integrate bottom-up adaptive filtering, horizontal bipole grouping, and top-down attentional matching that satisfied the ART Matching Rule.
    ||
  • image p254fig06.03 These interactions of the ARTSCAN Search model enable it to learn to recognize and name invariant object categories. Interactions between spatial attention in the Where cortical stream, via surface-shroud resonances, and object attention in the What cortical stream, that obeys the ART Matching Rule, coordinate these learning, recognition, and naming processes.
    || Retinal image -> On & OFF cell contrast normalization (retina/LGN) -> polarity [, in]sensitive contrast enhancement (V1) -> object [boundary (V2), surface (V2/V4)] -> surface contour (V2); What stream categories: volition control (BG). object boundary (V2) <-> view (ITp) <-> view integrator (ITp) <-> object (ITa) <-> [object-value (ORB), value (Amyg)] <-> name (PFC)
  • image p315fig08.35 The output signals from the directional grouping network obey the ART Matching Rule. They thereby select consistent motion directional signals while suppressing inconsistent ones, and do not distort the speed estimates that the spared cells code. The aperture problem is hereby solved by the same mechanism that dynamically stabilizes the learning of directional grouping cells.
    || How to select correct direction and preserve speed estimates? Prediction: Feedback from MSTv to MT- obeys ART Matching Rule; Top-down, modulatory on-center, off-surround network (Grossberg 1976, 1980; Carpenter, Grossberg 1987, 1991); Explains how directional grouping network can stably develop and how top-down directional attention can work. (Cavanagh 1992; Goner etal 1986; Sekuler, Ball 1977; Stelmach etal 1994). Directional grouping network (MSTv) <-> Directional long-range filter (MT). Modulatory on-center selects chosen direction and preserves speed. Off-surround inhibits incompatible directions.
  • image p316fig08.36 How the directional grouping network, notably properties of the ART Matching Rule, enables a small set of amplified feature tracking signals at the ends of a line to select consistent directions in the line interior, while suppressing inconsistent directions.
    || Motion capture by directional grouping feedback. Directional grouping network (MSTv) <-> Directional long-range filter (MT). It takes longer to capture ambiguous motion signals in the line interior as the length of the line increases cf (Castet etal 1993)
  • image p354fig10.01 The laminar cortical circuit that realizes how we pay attention to an object sends signals from layer 6 of a higher cortical level to layer 6 of a lower cortical level and then back up to layer 4. This "folded feedback" circuit creates a top-down, modulatory on-center, off-surround network that realizes the ART Matching Rule.
    || Top-down attention and folded feedback. Attentional signals also feed back into 6-to-4 on-center off-surround. 1-to-5-to-6 feedback path: Macaque (Lund, Booth 1975) cat (Gilbert, Wiesel 1979). V2-to-V1 feedback is on-center off-surround and affects layer 6 of V1 the most (Bullier etal 1996; Sandell, Schiller 1982). Attended stimuli enhanced, ignored stimuli suppressed. This circuit supports the predicted ART Matching Rule! [LGN, V[1,2][6->1]]
  • image p360fig10.08 The bottom-up on-center off-surround from LGN-to-6-to-4 has a modulatory on-center because of its role in realizing the ART Matching Rule and, with it, the ability of the cortex to dynamically stabilize its learned memories.
    || Modulation of priming by 6-to-4 on-center (Stratford etal 1996; Callaway 1998). On-center 6-to-4 excitation is inhibited down to being modulatory (priming, subthreshold). On-center 6-to-4 excitation cannot activate layer 4 on its own. Clarifies need for direct path. Prediction: plays key role in stable grouping, development and learning. ART Matching Rule!
  • image p419fig12.17 The auditory continuity illusion illustrates the ART Matching Rule at the level of auditory streaming. Its "backwards in time" effect of future context on past conscious perception is a signature of resonance.
    || Auditory continuity illusion. input, percept. Backwards in time - How does a future sound let past sound continue through noise? Resonance! - It takes a while to kick in. After it starts, a future tone can maintain it much more quickly. Why does this not happen if there is no noise? - ART Matching Rule! TD harmonic filter is modulatory without BU input. It cannot create something out of nothing.
  • image p420fig12.18 The ARTSTREAM model explains and simulates the auditory continuity illusion as an example of a spectral-pitch resonance. Interactions of ART Matching Rule and asymmetric competition mechanisms in cortical strip maps explain how the tone selects the consistent frequency from the noise in its own stream while separating the rest of the noise into another stream.
    || ARTSTREAM model (Grossberg 1999; Grossberg, Govindarajan, Wyse, Cohen 2004). SPINET. Frequency and pitch strips. Bottom Up (BU) harmonic sieve. Top Down (TD) harmonic ART matching. Exclusive allocation. Learn pitch categories based on early harmonic processing. A stream is a Spectral-Pitch Resonance!
  • image p473fig12.69 Error rate and mean reaction time (RT) data from the lexical decision experiments of (Schvaneveldt, McDonald 1981). ART Matching Rule properties explain these data in (Grossberg, Stone 1986).
    || (left) Error rate vs type of prime [R, N, U], [non,] word. (right) Mean RT (msec) vs type of prime [R, N, U], [non,] word.
  • image p483fig13.02 The object-value categories in the orbitofrontal cortex require converging specific inputs from the sensory cortex and nonspecific incentive motivational inputs from the amygdala in order to fire. When the orbitofrontal cortex fires, it can deliver top-down ART Matching Rule priming signals to the sensory cortical area by which it was activated, thereby helping to choose the active recognition categories there that have the most emotional support, while suppressing others, leading to attentional blocking of irrelevant cues.
    || Cognitive-Emotional-Motor (CogEM) model. Drive-> amygdala incentive motivational learning-> orbitofrontal cortex- need converging cue and incentive inputs to fire <-> sensory cortex- conditioned reinforcer learning-> amygdala. CS-> sensory cortex. Motivated attention closes the cognitive-emotional feedback loop, focuses on relevant cues, and causes blocking of irrelevant cues.
  • image p548fig15.16 Homologous recognition learning and reinforcement learning macrocircuits enable adaptively timed conditioning in the reinforcement learning circuit to increase inhibition of the orienting system at times when a mismatch in the recognition system would have reduced inhibition of it.
    || Homolog between ART and CogEM model, complementary systems. [Recognition, Reinforcement] learning vs [Attentional, Orienting] system. Reinforcement: timing, drive representation.
  • image p600fig16.36 The entorhinal-hippocampal system has properties of an ART spatial category learning system, with hippocampal place cells as the spatial categories. See the text for details.
    || Entorhinal-hippocampal interactions as an ART system. Hippocampal place cells as spatial categories. Angular head velocity-> head direction cells-> stripe cells- small scale 1D periodic code (ECIII) SOM-> grid cells- small scale 2D periodic code (ECII) SOM-> place cells- larger scale spatial map (DG/CA3)-> place cells (CA1)-> conjunctive-coding cells (EC V/VI)-> top-down feedback back to stripe cells- small scale 1D periodic code (ECIII). stripe cells- small scale 1D periodic code (ECIII)-> place cells (CA1).
  • image p613fig16.43 The main visual form and motion processing stream mechanisms of SOVEREIGN, many of them described at length in previous chapters.
    || Render 3-D scene (R3DS), figure-ground separation (FGS), log-polar transform (LPT), Gaussian coarse-coding (GCC), Invariant visual target map (IVTM), What Fuzzy ART (WhatFuzz), body spatial coordinates (BSC), where reactive visual TPV storage (WRVTS), Directional transient cell network (DTCN), Motion direction hemifield map (MDHM), Hemifield left/right scoring (HLRS), reactive visual control signal (RVCS), Parvo/Magno/Erg competition (PMEC), Approach and Orient GOp (AOGp), GOm (GOm). R3DS [parvo-> FGS, magno-> DTCN], FGS-> [LPT, WRVTS], LPT-> GCC-> IVTM-> WhatFuzz, BSC-> [RVTS, PMEC], PMEC-> [gateRVTS-> RVTS, gateRVCS-> RVCS], DTCN-> MDHM-> HLRS, HLRS-> [PMEC, RVCS], AOGp-> gateRVTS, GOm-> gateRVCS.
  • p190 Howell: [neural microcircuits, modal architectures] used in ART -
    bottom-up filters | top-down expectations | purpose
    instar learning | outstar learning | p200fig05.13 Expectations focus attention: [in, out]star often used to learn the adaptive weights. top-down expectations to select, amplify, and synchronize expected patterns of critical features, while suppressing unexpected features
    LGN Lateral Geniculate Nucleus | V1 cortical area | p192fig05.06 focus attention on expected critical features
    EC-III entorhinal stripe cells | CA1 hippocampal place cells | p600fig16.36 entorhinal-hippocampal system has properties of an ART spatial category learning system, with hippocampal place cells as the spatial categories
    auditory orienting arousal | auditory category | p215fig05.28 How a mismatch between bottom-up and top-down patterns can trigger activation of the orienting system A and, with it, a burst of nonspecific arousal to the category level. ART Matching Rule: TD mismatch can suppress a part of F1 STM pattern, F2 is reset if degree of match < vigilance
    auditory stream with/without [silence, noise] gaps | perceived sound continues? | p419fig12.17 The auditory continuity illusion: Backwards in time - How does a future sound let past sound continue through noise? Resonance!
    visual perception, learning and recognition of visual object categories | motion perception, spatial representation and target tracking | p520fig14.02 pART predictive Adaptive Resonance Theory. Many adaptive synapses are bidirectional, thereby supporting synchronous resonant dynamics among multiple cortical regions. The output signals from the basal ganglia that regulate reinforcement learning and gating of multiple cortical areas are not shown.
    red - cognitive-emotional dynamics
    green - working memory dynamics
    black - see [bottom-up, top-down] lists
    EEG with mismatch, arousal, and STM reset events | expected [P120, N200, P300] event-related potentials (ERPs) | p211fig05.21 Sequences of P120, N200, and P300 event-related potentials (ERPs) occur during oddball learning EEG experiments under conditions that ART predicted should occur during sequences of mismatch, arousal, and STM reset events, respectively.
    Cognitive | Emotional | p541fig15.02 nSTART neurotrophic Spectrally Timed Adaptive Resonance Theory: Hippocampus can sustain a Cognitive-Emotional resonance: that can support "the feeling of what happens" and knowing what event caused that feeling. Hippocampus enables adaptively timed learning that can bridge a trace conditioning gap, or other temporal gap between Conditioned Stimulus (CS) and Unconditioned Stimulus (US).

    background colours in the table signify :
  • image p163fig04.39 A schematic of the LAMINART model that explains key aspects of laminar visual cortical anatomy and dynamics. LGN -> V1 [6, 4, 2/3] -> V2 [6, 4, 2/3]
    || p163c1h0.6 "... The first article about laminar computing ... proposed how the laminar cortical model could process 2D pictures using bottom-up filtering and horizontal bipole grouping interactions (Grossberg, Mingolla, Ross 1997). In 1999, I was able to extend the model to also include top-down circuits for expectation and attention (Grossberg 1999)(right panel). Such a synthesis of laminar bottom-up, horizontal, and top-down circuits is characteristic of the cerebral cortex (left panel). I called it LAMINART because it began to show how properties of Adaptive Resonance Theory, or ART, notably the ART prediction about how top-down expectations and attention work, are realized by identical cortical cells and circuits. You can immediately see from the schematic laminar circuit diagram ... (right panel) that circuits in V2 seem to repeat circuits in V1, albeit with a larger spatial scale, despite the fact that V1 and V2 carry out different functions. How this anatomical similarity can coexist with functional diversity will be clarified in subsequent sections and chapters. It enables different kinds of biological intelligence to communicate seamlessly while carrying out their different psychological functions. ..."
  • image p174fig04.52 An example of how the 3D LAMINART model can transform the two monocular images of the random dot stereogram in the top row into the three depth-separated surface representations in the bottom row.
    || Stereogram surface percepts: surface lightnesses are segregated in depth (Fang and Grossberg 2009). [left, right] inputs, [far, fixation, near] planes. Contrast with algorithms that just compute disparity matches and let computer code build the surface eg (Marr, Poggio, etal 1974).
  • image p182fig04.58 LAMINART model processing stages that are sufficient to explain many percepts of transparency, including those summarized in Figure 4.57.
    || [left, right] eye, [LGN, V1 [6, 4, 3B, 2/3 A], V2 [4, 2/3]], [mo, bi]nocular cart [simple, complex] cells, [excita, inhibi]tory cart [connection, cell]s.
  • image p230fig05.38 The Synchronous Matching ART, or SMART, model includes spiking neurons in a laminar cortical hierarchy. I developed SMART with my PhD student Massimiliano Versace. By unlumping LAMINART to include spiking neurons, finer details of neurodynamics, such as the existence of faster gamma oscillations during good enough matches, and slower beta oscillations during bad enough mismatches, could be shown as emergent properties of network interactions.
    || Second order thalamus -> specific thalamic nucleus -> Thalamic reticular nucleus -> neocortical laminar circuit [6ll, 6l, 5, 2/3, 1] -> Higher order cortex. Similar for First order thalamus -> First order cortex, with interconnection to Second order, nonspecific thalamic nucleus
  • image p231fig05.39 The SMART hypothesis testing and learning cycle predicts that vigilance increases when a mismatch in subcortical regions like the nonspecific thalamus activates the nucleus basalis of Meynert which, in turn, broadcasts a burst of the neurotransmitter acetylcholine, or ACh, to deeper cortical layers. Due to the way in which LAMINART proposes that cortical matching and mismatching occurs, this ACh burst can increase vigilance and thereby trigger a memory search. See the text for details.
    || [BU input, [, non]specific thalamic nucleus, thalamic reticular nucleus, neocortical laminar circuit] cart [Arousal, Reset, Search, Vigilance]
  • image p246fig05.48 Microcircuits of the LAMINART model that I developed with Rajeev Raizada. See the text for details of how they integrate bottom-up adaptive filtering, horizontal bipole grouping, and top-down attentional matching that satisfied the ART Matching Rule.
    ||
  • image p248fig05.49 This circuit of the LAMINART model helps to explain properties of Up and Down states during slow wave sleep, and how disturbances in ACh dynamics can disrupt them.
    ||
  • image p375fig11.07 The 3D LAMINART model uses both monocular and binocular simple cells to binocularly fuse like image contrasts. The remainder of the model generates 3D boundary and surface representations of multiple kinds of experiments as well as of natural scenes.
    || 3D LAMINART model (Cao, Grossberg 2005). [left, right] eye cart [LGN, V1 blob [4->3B->2/3A], V2 thin stripe [4->2/3A], V4]. V1 blob [V1-4 monocular, V1 interior binocular] simple cells. [complex, simple, inhibitory] cells, on-center off-surround
  • image p376fig11.10 The 3D LAMINART model shows how the disparity filter can be integrated into the circuit that completes 3D boundary representations using bipole grouping cells. It also explains how surface contours can strengthen boundaries that succeed in generating closed filling-in domains.
    || 3D LAMINART model (Cao, Grossberg 2005). [left, right] eye cart [LGN, V1 blob [4->3B->2/3A] surface contour, V2 thin stripe (monocular surface) [4->2/3A], V2 interior [disynaptic inhibitory interneurons, bipole grouping cells, disparity filter, V4 binocular surface]. [complex, simple, inhibitory] cells, on-center off-surround
  • image p378fig11.12 How monocular and binocular information are combined in V1 and V2 in the 3D LAMINART model.
    || Model utilizes monocular information. [left, right] eye cart V1-[4 monocular simple, 3B binocular simple, complex2/3A [mo,bi]nocular] cells, V2-4 binocular complex cells. black = monocular cells, blue = binocular cells. In V2, monocular inputs add to binocular inputs along the line of sight and contribute to depth perception.
  • image p379fig11.13 How the 3D LAMINART model explains DaVinci stereopsis. All the stages of boundary and surface formation are color coded to clarify their explanation. Although each mechanism is very simple, when all of them act together, the correct depthful surface representation is generated. See the text for details.
    || DaVinci stereopsis (Nakayama, Shimojo 1990). An emergent property of the previous simple mechanisms working together. [V1 binocular boundaries -> V2 initial boundaries -> V2 final boundaries -> V4 surface] pair [Binocular match: boundaries of thick bar -> Add monocular boundaries along lines-of-sight -> Line-of-sight inhibition kills weaker vertical boundaries -> 3D surface percept not just a disparity match!] pair [Binocular match: right edge of thin and thick bars -> Strongest boundaries: binocular and monocular boundaries add -> Vertical boundaries from monocular left edge of thin bar survive -> Filling-in contained by connected boundaries]. cart [very near, near, fixation plane, far, very far]
  • image p380fig11.14 The model explanation of DaVinci stereopsis when the input stimuli have opposite contrast polarities.
    || Polarity-reversed Da Vinci stereopsis (Nakayama, Shimojo 1990). Same explanation! (... as Figure 11.13 ...) [V1 binocular boundaries -> V2 initial boundaries -> V2 final boundaries -> V4 surface] cart [very near, near, fixation plane, far, very far]
  • image p395fig11.34 A comparison of the properties of other rivalry models with those of the 3D LAMINART model (surrounded by red border). Significantly, only 3D LAMINART explains both stable vision and rivalry (green border).
    || Comparison of rivalry models
  • image p401fig11.41 The 3D LAMINART model proposes how angle cells and disparity-gradient cells interact through learning to generate 3D representations of slanted objects.
    || 3D LAMINART model. [LGN, V1, V2, V4] Four key additions: 1. Angle cells - tuned to various angles; 2. Disparity-gradient cells - tuned to disparity gradients in the image; 3. weights from [angle to disparity-gradient] cells - learned while viewing 3D image; Colinear grouping between [angle to disparity-gradient] cells - disambiguates ambiguous groupings.
  • image p402fig11.44 3D scenic reconstruction of the image in Figure 11.43 by the 3D LAMINART model.
    || Disparity [5, 6, 8, 10, 11, 14]: images of objects in common depth planes
  • image p445fig12.43 The LIST PARSE laminar cortical Cognitive Working Memory circuit, that is proposed to occur in ventrolateral prefrontal cortex, is homologous to the LAMINART circuit that models aspects of how visual cortex sees. The Motor Working Memory, VITE Trajectory Generator, and Variable-Rate Volitional Control circuits model how other brain regions, including dorsolateral prefrontal cortex, motor cortex, cerebellum, and basal ganglia, interact with the Cognitive Working Memory to control working memory storage and variable-rate performance of item sequences.
    || List parse circuit diagram. Connectivity convention. sequence chunks [<- BU filter, TD expectation ->] working memory. Working memory and sequence chunking circuit is homologous to visual LAMINART circuit!
  • image p498fig13.22 (left column, top row) Adaptive filtering and conditioned arousal are both needed to regulate what cues can learn to activate particular space-time patterns. These developments lead inexorably to basic cognitive abilities, as embodied in the 3D LAMINART models for 3D vision and figure-ground perception (Chapter 11) and the 3D ARTSCAN SEARCH model for invariant object learning, recognition, and 3D search (Chapter 6). (right column, top row) Conditioned arousal enables only emotionally important cues to activate a motivationally relevant space-time pattern. (bottom row) Conditioned arousal and drive representations arise naturally from the unlumping of avalanche circuits to make them selective to motivationally important cues. The MOTIVATOR model is a natural outcome of this unlumping process (this chapter).
    || (top) Adaptive filtering and Conditioned arousal. Towards Cognition: need to filter inputs to the command cell. Towards Emotion: important signals turn arousal ON and OFF. (bottom) Conditioned arousal and Drive representations. Competition between conditioned arousal sources at drive representations, eg amygdala.
  • image p228fig05.37 A macrocircuit of the neurotrophic Spectrally Timed ART, or nSTART, model. I developed nSTART with my PhD student Daniel Franklin. It proposes how adaptively timed learning in the hippocampus, bolstered by Brain Derived Neurotrophic Factor, or BDNF, helps to ensure normal memory consolidation.
    || habituative gates, CS, US, Thalamus (sensory cortex, category learning, conditioned reinforcer learning, adaptively timed learning and BDNF), Amygdala (incentive motivation learning), Hippocampus (BDNF), Prefrontal Cortex (attention), Pontine nuclei, Cerebellum (adaptively timed motor learning)
  • image p541fig15.02 The neurotrophic Spectrally Timed Adaptive Resonance Theory, or nSTART, model of (Franklin, Grossberg 2017) includes hippocampus to enable adaptively timed learning that can bridge a trace conditioning gap, or other temporal gap between CS and US.
    || Hippocampus can sustain a Cognitive-Emotional resonance: that can support "the feeling of what happens" and knowing what event caused that feeling. [CS, US] -> Sensory Cortex (SC) <- motivational attention <-> category learning -> Prefrontal Cortex (PFC). SC conditioned reinforcement learning-> Amygdala (cannot bridge the temporal gap) incentive motivational learning-> PFC. SC adaptively timed learning and BDNF-> Hippocampus (can bridge the temporal gap) BDNF-> PFC. PFC adaptively timed motor learning-> cerebellum.
  • p190 Howell: [neural microcircuits, modal architectures] used in ART -
  • bottom-up filters | top-down expectations | purpose
    instar learning | outstar learning | p200fig05.13 Expectations focus attention: [in, out]star often used to learn the adaptive weights. top-down expectations select, amplify, and synchronize expected patterns of critical features, while suppressing unexpected features
    LGN Lateral Geniculate Nucleus | V1 cortical area | p192fig05.06 focus attention on expected critical features
    EC-III entorhinal stripe cells | CA1 hippocampal place cells | p600fig16.36 entorhinal-hippocampal system has properties of an ART spatial category learning system, with hippocampal place cells as the spatial categories
    auditory orienting arousal | auditory category | p215fig05.28 How a mismatch between bottom-up and top-down patterns can trigger activation of the orienting system A and, with it, a burst of nonspecific arousal to the category level. ART Matching Rule: TD mismatch can suppress a part of F1 STM pattern, F2 is reset if degree of match < vigilance
    auditory stream with/without [silence, noise] gaps | perceived sound continues? | p419fig12.17 The auditory continuity illusion: Backwards in time - How does a future sound let past sound continue through noise? Resonance!
    visual perception, learning and recognition of visual object categories | motion perception, spatial representation and target tracking | p520fig14.02 pART predictive Adaptive Resonance Theory. Many adaptive synapses are bidirectional, thereby supporting synchronous resonant dynamics among multiple cortical regions. The output signals from the basal ganglia that regulate reinforcement learning and gating of multiple cortical areas are not shown.
    red - cognitive-emotional dynamics
    green - working memory dynamics
    black - see [bottom-up, top-down] lists
    EEG with mismatch, arousal, and STM reset events | expected [P120, N200, P300] event-related potentials (ERPs) | p211fig05.21 Sequences of P120, N200, and P300 event-related potentials (ERPs) occur during oddball learning EEG experiments under conditions that ART predicted should occur during sequences of mismatch, arousal, and STM reset events, respectively.
    Cognitive | Emotional | p541fig15.02 nSTART neurotrophic Spectrally Timed Adaptive Resonance Theory: Hippocampus can sustain a Cognitive-Emotional resonance: that can support "the feeling of what happens" and knowing what event caused that feeling. Hippocampus enables adaptively timed learning that can bridge a trace conditioning gap, or other temporal gap between Conditioned Stimulus (CS) and Unconditioned Stimulus (US).

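The ART Matching Rule and mismatch-reset search summarized in the rows above can be sketched in code. This is a minimal ART 1-style hypothesis-testing cycle, not any published model: the match ratio |I ∧ w| / |I| stands in for "degree of match", vigilance gates reset, and all names and parameters are illustrative assumptions.

```python
import numpy as np

def art_match_cycle(inp, prototypes, vigilance=0.75):
    """Minimal ART 1-style hypothesis-testing cycle (a sketch).

    inp: binary feature vector at F1; prototypes: learned top-down
    weight vectors at F2. Returns the index of the category that
    resonates, or None if every category is reset (novelty would then
    recruit a new category).
    """
    inp = np.asarray(inp, dtype=float)
    # Bottom-up adaptive filter: try categories in order of overlap with input
    order = sorted(range(len(prototypes)),
                   key=lambda j: -np.minimum(inp, prototypes[j]).sum())
    for j in order:
        # Top-down expectation suppresses unmatched F1 features (ART Matching Rule)
        matched = np.minimum(inp, prototypes[j])
        if matched.sum() / inp.sum() >= vigilance:
            return j  # resonance: good enough match, attention locks in
        # else: mismatch -> nonspecific arousal, reset F2, search next category
    return None
```

With vigilance high, a partially matching input resets every category and the search fails, which is exactly the novelty signal that drives new category learning in ART.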
    background colours in the table signify :
  • image p221fig05.31 A system like Fuzzy ARTMAP can learn to associate learned categories in one ART network with learned categories in a second ART network. Because both bottom-up and top-down interactions occur in both networks, a bottom-up input pattern to the first ART network can learn to generate a top-down output pattern from the second ART network.
    || Fuzzy ARTMAP. Match tracking realizes minimax learning principle: vigilance increases to just above the match ratio of prototype / exemplar, thereby triggering search
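The match-tracking rule in the caption above ("vigilance increases to just above the match ratio, thereby triggering search") reduces to a one-line update. A sketch, where the function name, baseline vigilance, and the small margin eps are illustrative assumptions:

```python
import numpy as np

def match_tracking(inp, chosen_w, baseline_vigilance, eps=1e-3):
    """Sketch of ARTMAP match tracking (minimax learning principle).

    After a predictive mismatch, vigilance is raised to just above the
    match ratio |I ^ w| / |I| of the chosen category's prototype w,
    forcing a reset of that category and a search for a finer one.
    """
    inp = np.asarray(inp, dtype=float)
    w = np.asarray(chosen_w, dtype=float)
    match_ratio = np.minimum(inp, w).sum() / inp.sum()
    return max(baseline_vigilance, match_ratio + eps)
```

Because vigilance rises only as far as the error forces it to, the network learns with the largest (most general) categories the data allow, which is the "minimax" idea: minimize predictive error while maximizing generalization.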
  • image p225fig05.33 Some early ARTMAP benchmark studies. These successes led to the use of ARTMAP, and many variants that we and other groups have developed, in many large-scale applications in engineering and technology, a use that has not abated even today.
    || see Early ARTMAP benchmark studies
  • image p225fig05.34 ARTMAP was successfully used to learn maps of natural terrains with many advantages over those of mapping projects that used AI expert systems. The advantages are so great that many mapping projects started to use this technology.
    || AI expert system - 1 year: field identification of natural regions; derivation of ad hoc rules for each region by expert geographers; correct 80,000 of 250,000 site labels; 230m (site-level) scale. ARTMAP system - 1 day: rapid, automatic, no natural regions or rules; confidence map; 30m (pixel-level) scale can see roads; equal accuracy at test sites
  • image p242fig05.46 Computer simulations of how two variants of Distributed ARTMAP incrementally learn the 5-4 category structure. See the text for details.
    || Distributed ARTMAP with [self-supervised learning, post-training LTM noise]
  • image p436fig12.30 The conscious ARTWORD, or cARTWORD, laminar cortical speech model simulates how future context can disambiguate noisy past speech sounds in such a way that the completed percept is consciously heard to proceed from past to future as a feature-item-list resonant wave propagates through time.
    || cARTWORD: Laminar cortical model macrocircuit (Grossberg, Kazerounian 2011) Simulates PHONEMIC RESTORATION: Cognitive Working Memory (processed item sequences) - [excitatory-> inhibitory-> habituative-> adaptive filter-> adaptive filter-> adaptive filter with depletable synapse]-> Acoustic [item, feature]
  • image p453fig12.49 The ARTWORD model that I published in 2000 with my PhD student Christopher Myers simulates data such as the (Repp etal 1978) data in Figure 12.68. See the text for details.
    || ARTWORD model (Grossberg, Myers 2000). Input phonetic features-> Phonemic item working memory-> Masking Field unitized lists-> Automatic gain control-> Phonemic item working memory. [habituative gate, adaptive filter]s.
  • image p453fig12.50 The ARTWORD perception cycle shows how sequences of items activate possible list chunks, which compete among each other and begin to send their top-down expectations back to the item working memory. An item-list resonance develops through time as a result.
    || ARTWORD perception cycle. (a) bottom-up activation (b) list chunk competition (c) item-list resonance (d) chunk reset due to habituative collapse.
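Phase (d) of the cycle, chunk reset through habituative collapse, can be illustrated with a toy pair of equations: a shunting chunk activity x, gated by a depleting transmitter z, first resonates and then collapses as z runs down. This is only a qualitative sketch; the rate constants are arbitrary choices, not those of the published ARTWORD model.

```python
def artword_cycle_sketch(steps=200, dt=0.05):
    """Toy resonance-then-reset dynamics (a sketch, not ARTWORD itself).

    x: list-chunk activity; z: habituative transmitter gate.
    x grows while z is available (resonance), then declines as z
    depletes (habituative collapse -> chunk reset).
    """
    x, z = 0.0, 1.0
    trace_x = []
    for _ in range(steps):
        item_input = 1.0
        dx = -0.5 * x + (1.0 - x) * item_input * z   # gated shunting excitation
        dz = 0.1 * (1.0 - z) - 2.0 * z * x           # transmitter depletes with use
        x += dt * dx
        z += dt * dz
        trace_x.append(x)
    return trace_x
```

Plotting the returned trace shows the signature rise-then-fall: the same habituative gating that lets a chunk win the competition also guarantees that its dominance is self-limiting.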
  • image p455fig12.52 Simulation of cARTWORD dynamics in response to the complete list /1/-/2/-/3/. The relevant responses are surrounded by a red box.
    || Presentation of a normal sequence: input /1/-/2/-/3/. |c(i,1)-5| vs time (msec). List chunks select most predictive code. Order stored in WM layers. Resonant activity of /1/-/2/-/3/ in item and feature layers corresponds to conscious speech percept.
  • image p456fig12.53 Simulation of cARTWORD dynamics in response to the partial list /1/-silence-/3/ with /2/ replaced by silence. Only the representations of these items can be seen in the red box.
    || Presentation with silence duration: input /1/-silence-/3/. |c(i,1)-5| vs time (msec). List chunks select most predictive code. Order stored in WM layers. Gap in resonant activity of /1/-silence-/3/ in item and feature layers corresponds to perceived silence.
  • image p456fig12.54 Item /2/ is restored in the correct list position in response to the list /1/-noise-/3/.
    || Presentation with noise: input /1/-noise-/3/. |c(i,1)-5| vs time (msec). List chunks select the most predictive code. Order restored in WM layers. Resonant activity of /1/-/2/-/3/ in item and feature layers corresponds to restoration of item /2/ replaced by noise in input.
  • image p457fig12.55 Item /4/ is restored in the correct list position in response to the list /1/-noise-/5/. This and the previous figure show how future context can disambiguate past noisy sequences that are otherwise identical.
    || Presentation with noise: input /1/-noise-/5/. |c(i,1)-5| vs time (msec). List chunks select the most predictive code. Order restored in WM layers. Resonant activity of /1/-/4/-/5/ in item and feature layers corresponds to restoration of item /4/ replaced by noise in input.
  • image p444fig12.42 The LIST PARSE laminar cortical model of working memory and list chunking that I published with Lance Pearson in 2008 simulated the Averbeck etal data in Figure 12.41, as in the left column of the figure. It also simulated cognitive data about working memory storage by human subjects. See the text for details.
    || LIST PARSE: Laminar cortical model of working memory and list chunking (Grossberg, Pearson 2008). Simulates data about: [immediate, delayed, continuous] distractor free recall; immediate serial recall; and variable-speed sequential performance of motor acts. [velocity, acceleration] vs time (ms) from recall cue.
  • image p445fig12.43 The LIST PARSE laminar cortical Cognitive Working Memory circuit, that is proposed to occur in ventrolateral prefrontal cortex, is homologous to the LAMINART circuit that models aspects of how visual cortex sees. The Motor Working Memory, VITE Trajectory Generator, and Variable-Rate Volitional Control circuits model how other brain regions, including dorsolateral prefrontal cortex, motor cortex, cerebellum, and basal ganglia, interact with the Cognitive Working Memory to control working memory storage and variable-rate performance of item sequences.
    || List parse circuit diagram. Connectivity convention. sequence chunks [<- BU filter, TD expectation ->] working memory. Working memory and sequence chunking circuit is homologous to visual LAMINART circuit!
  • image p446fig12.44 (left column, top row) LIST PARSE can model linguistic data from human subjects. In this figure, model parameters are fixed to enable a close fit to data about error-type distributions in immediate free recall experiments, notably transposition errors. (right column, top row) Simulation and data showing bowing of the serial position curve, including an extended primacy gradient. (left column, bottom row) The simulation curve overlays data about list length effects, notably the increasing recall difficulty of longer lists during immediate serial recall (ISR). (right column, bottom row) Simulation (bottom image) and data (top image) of the limited temporal extent for recall.
    || (1. TL) Error-type distributions in immediate serial recall (Hanson etal 1996). % occurrence vs serial position. Graph convention: Data- dashed lines; Simulations- solid lines. Six letter visual ISR. Order errors- transpositions of neighboring items are the most common. Model explanation: Noisy activation levels change relative order in primacy gradient. Similar activation of neighboring items most susceptible to noise. Model parameters fitted on these data. (2. TR) Bowing of serial position curve (Cowan etal 1999). % correct vs serial position. Auditory ISR with various list lengths (graphs shifted rightward): For [, sub-]span lists- extended primacy, with one (or two) item recency; Auditory presentation- enhanced performance for last items. LIST PARSE: End effects- first and last items half as many members; Echoic memory- last presented item retained in separate store. (3. BL) List length effects, circles (Crannell, Parrish 1968), squares (Baddeley, Hitch 1975), solid line- simulation. % list correct vs list length. Variable list length ISR: longer lists are more difficult to recall. LIST PARSE: More items- closer activation levels and lower absolute activity level with enough inputs; Noise is more likely to produce order errors, Activity levels more likely to drop below threshold;. (4. BR) Limited temporal extent for recall (Murdock 1961). % recalled vs retention interval (s). ISR task with distractor-filled retention intervals (to prevent rehearsal): Increasing retention interval - decreases probability of recalling list correctly; Load dependence- longer list more affected by delays; Performance plateau- subjects reach apparent asymptote. LIST PARSE: Increase convergence of activities with time; loss of order information;.
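The model explanation in this caption, noisy activation levels in a primacy gradient producing transposition errors between neighboring items, can be sketched in a few lines. The gradient step and Gaussian noise model below are illustrative assumptions, not LIST PARSE's actual shunting equations:

```python
import random

def store_primacy_gradient(n_items, top=1.0, step=0.1):
    """Primacy gradient: earlier items get higher stored activation."""
    return [top - step * i for i in range(n_items)]

def recall(activations, noise=0.0, rng=random):
    """Recall by repeatedly choosing the most active remaining item.

    With noise > 0, the similar activation levels of neighboring items
    are the most likely to swap order, producing the transposition
    errors that dominate the (Hanson etal 1996) data. Longer lists pack
    activations closer together, so they suffer more such errors.
    """
    noisy = [(a + rng.gauss(0, noise), i) for i, a in enumerate(activations)]
    return [i for _, i in sorted(noisy, reverse=True)]
```

With noise=0 the gradient is read out in perfect serial order; raising the noise first scrambles adjacent positions, and only at much higher noise levels does it scramble distant ones, matching the shape of the transposition-error distributions.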
  • image p447fig12.45 (left column) LIST PARSE simulations of the proportion of order errors as a function of serial position for 6 item lists with (a) an extended pause of 7 time units between the third and fourth items, and (b) pauses of 5 time units (solid curve) and 10 time units (dashed curve) between all items. (right column) Simulations (solid curves) and data (dashed curves) illustrating close model fits in various immediate free recall tasks.
    || (Left) Temporal grouping and presentation variability. Temporal grouping: Inserting an extended pause leads to inter-group bowing; Significantly different times of integration and activity levels across pause, fewer interchanges. (Right) Immediate free recall, and [delayed, continuous] distractor-free recall. Overt rehearsal IFR task with super-span (ie 20 item) lists: Extended recency- even more extended with shorter ISIs; Increased probability of recall with diminished time from last rehearsal; Early items in list rehearsed most;. LIST PARSE (unique) for long lists: Incoming items form recency gradient; Rehearsal (re-presentation) based upon level of activity.
  • image p254fig06.03 These interactions of the ARTSCAN Search model enable it to learn to recognize and name invariant object categories. Interactions between spatial attention in the Where cortical stream, via surface-shroud resonances, and object attention in the What cortical stream, that obeys the ART Matching Rule, coordinate these learning, recognition, and naming processes.
    || Retinal image -> On & OFF cell contrast normalization (retina/LGN) -> polarity [, in]sensitive contrast enhancement (V1) -> object [boundary (V2), surface (V2/V4)] -> surface contour (V2); What stream categories: volition control (BG). object boundary (V2) <-> view (ITp) <-> view integrator (ITp) <-> object (ITa) <-> [object-value (ORB), value (Amyg)] <-> name (PFC)
  • image p255fig06.04 The ARTSCAN Search model can also search for a desired target object in a scene, thereby clarifying how our brains solve the Where's Waldo problem.
  • image p259fig06.08 The distributed ARTSCAN, or dARTSCAN, model includes spatial attention in both PPC and PFC, and both fast-acting attention, triggered by transient cells in Where cortical areas such as MT, and slower-acting surface-shroud resonances in What cortical areas such as V4 and PPC. See the text for details.
    || dARTSCAN spatial attention hierarchy, Fast (Where stream) Slow (What stream) (Foley, Grossberg, and Mingolla 2012). [transient cells (MT) ->, object surfaces (V4) <->] [object shrouds (PPC) <-> spatial shrouds (PPC/PFC)]
  • image p271fig06.17 Persistent activity in IT cells is just what is needed to enable view-invariant object category learning by ARTSCAN to be generalized to [view, position, size]-invariant category learning by positional ARTSCAN, or pARTSCAN. See the text for details.
    || Persistent activity in IT. Physiological data show that persistent activity exists in IT (Fuster and Jervey 1981, Miyashita and Chang 1988, Tomita etal 1999). Adapted from (Tomita etal 1999 Nature)
  • image p272fig06.18 The pARTSCAN model can learn [view, position, size]-invariant categories by adding view category integrator cells that have the properties of persistent neurons in IT. These integrator cells get reset with the invariant object category, not the view category.
    || pARTSCAN: positionally-invariant object learning. (Cao, Grossberg, Markowitz 2011). IT cells with persistent activities are modeled by view category integrators in ITp. View-specific category cells are RESET as the eyes move within the object. View category integrator cells are NOT RESET when the view-specific category is reset. They are RESET along with invariant object category cells when a spatial attention shift occurs.
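The reset rules in this caption amount to a small decision table: eye movements within an object reset only the view-specific category, while a spatial attention shift also resets the view integrator and the invariant object category. A sketch as a state machine, with event and signal names made up for illustration:

```python
def partscan_reset_sketch(events):
    """Which pARTSCAN representations are reset by each event (a sketch).

    'eye_move' (within the attended object): reset view-specific
    category only; integrator and invariant category persist, so the
    invariant category can bind multiple views.
    'attention_shift' (parietal reset burst): reset everything, so a
    new object can recruit a fresh invariant category.
    """
    log = []
    for ev in events:
        if ev == "eye_move":
            log.append({"view_category"})
        elif ev == "attention_shift":
            log.append({"view_category", "view_integrator",
                        "invariant_object_category"})
        else:
            raise ValueError(f"unknown event: {ev}")
    return log
```

This asymmetry is also why the Li and DiCarlo swapping data (next figure) follow: swapping views without an attention shift leaves the invariant category active, so it associates with the swapped features.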
  • image p273fig06.20 pARTSCAN can simulate the IT cell recoding that Li and DiCarlo reported in their swapping experiments because the swapping procedure happens without causing a parietal reset burst to occur. Thus the originally activated invariant category remains activated and can get associated with the swapped object features.
    || Simulation of Li and DiCarlo swapping data. data (Li and DiCarlo 2008), model (Cao, Grossberg, Markowitz 2011). normalized response vs. exposure (swaps and/or hours)
  • image p274fig06.21 pARTSCAN can also simulate the trade-off in IT cell responses between position invariance and selectivity that was reported by Zoccolan etal 2007. This trade-off limits the amount of position invariance that can be learned by a cortical area like V1 that is constrained by the cortical magnification factor.
    || Trade-off in IT cell response properties. Inferotemporal cortex cells with greater position invariance respond less selectively to natural objects. invariance-tolerance, selectivity-sparseness. data (Zoccolan etal 2007) model (Grossberg, Markowitz, Cao 2011). position tolerance (PT, degrees) vs sparseness (S)
  • image p274fig06.22 pARTSCAN can simulate how IT cortex processes image morphs, when it learns with high vigilance. See the text for details.
    || Akrami etal simulation: a case of high vigilance. tested on morphs between image pairs
  • image p498fig13.22 (left column, top row) Adaptive filtering and conditioned arousal are both needed to regulate what cues can learn to activate particular space-time patterns. These developments lead inexorably to basic cognitive abilities, as embodied in the 3D LAMINART models for 3D vision and figure-ground perception (Chapter 11) and the 3D ARTSCAN SEARCH model for invariant object learning, recognition, and 3D search (Chapter 6). (right column, top row) Conditioned arousal enables only emotionally important cues to activate a motivationally relevant space-time pattern. (bottom row) Conditioned arousal and drive representations arise naturally from the unlumping of avalanche circuits to make them selective to motivationally important cues. The MOTIVATOR model is a natural outcome of this unlumping process (this chapter).
    || (top) Adaptive filtering and Conditioned arousal. Towards Cognition: need to filter inputs to the command cell. Towards Emotion: important signals turn arousal ON and OFF. (bottom) Conditioned arousal and Drive representations. Competition between conditioned arousal sources at drive representations, eg amygdala.
  • image p531fig14.06 Classification of scenic properties as texture categories by the ARTSCENE model. See the text for details.
    || Image-> Feature extraction (texture principal component rankings)-> Learning feature-to-scene mapping (texture category principal component rankings)<- scene class. Large-to-small attentional shrouds as principal component higher.
  • image p531fig14.07 Voting in the ARTSCENE model achieves even better prediction of scene type. See the text for details.
    || Image-> Feature extraction (texture principal component rankings)-> Learning feature-to-scene mapping (texture category principal component rankings)-> evidence accumulation (sum)-> scene class winner-take-all inference. Large-to-small attentional shrouds as principal component higher.
  • image p532fig14.08 Macrocircuit of the ARTSCENE Search neural model for learning to search for desired objects by using the sequences of already experienced objects and their locations to predict what and where the desired object is. V1 = First visual area or primary visual cortex; V2 = Second visual area; V4 = Fourth visual area; PPC = Posterior Parietal Cortex; ITp = posterior InferoTemporal cortex; ITa = anterior InferoTemporal cortex; MTL = Medial Temporal Lobe; PHC = ParaHippoCampal cortex; PRC = PeriRhinal Cortex; PFC = PreFrontal Cortex; DLPFC = DorsoLateral PreFrontal Cortex; VPFC = Ventral PFC; SC = Superior Colliculus.
    ||
  • image p533fig14.09 Search data and ARTSCENE Search simulations of them in each pair of images from (A) to (F). See the text for details.
    || 6*[data vs simulation], [Response time (ms) versus epoch].
  • image p214fig05.26 When a big enough mismatch occurs, the orienting system is activated and sends a burst of nonspecific arousal to the category level. This Mismatch Detector has properties of the N200 ERP.
    || Mismatch triggers nonspecific arousal. Mismatch at F1 elicits a nonspecific event at F2. Call this event nonspecific arousal. N200 ERP Näätänen etal: 1. bottom-up, 2. unconditionable, 3. nonspecific, 4. mismatch
  • image p230fig05.38 The Synchronous Matching ART, or SMART, model includes spiking neurons in a laminar cortical hierarchy. I developed SMART with my PhD student Massimiliano Versace. By unlumping LAMINART to include spiking neurons, finer details of neurodynamics, such as the existence of faster gamma oscillations during good enough matches, and slower beta oscillations during bad enough mismatches, could be shown as emergent properties of network interactions.
    || Second order thalamus -> specific thalamic nucleus -> thalamic reticular nucleus -> neocortical laminar circuit [6ll, 6l, 5, 2/3, 1] -> Higher order cortex. Similar for First order thalamus -> First order cortex, with interconnection to Second order, nonspecific thalamic nucleus
  • image p231fig05.39 The SMART hypothesis testing and learning cycle predicts that vigilance increases when a mismatch in subcortical regions like the nonspecific thalamus activates the nucleus basalis of Meynert which, in turn, broadcasts a burst of the neurotransmitter acetylcholine, or ACh, to deeper cortical layers. Due to the way in which LAMINART proposes that cortical matching and mismatching occurs, this ACh burst can increase vigilance and thereby trigger a memory search. See the text for details.
    || [BU input, [, non]specific thalamic nucleus, thalamic reticular nucleus, neocortical laminar circuit] cart [Arousal, Reset, Search, Vigilance]
  • image p232fig05.40 Computer simulation of how the SMART model generates (a) gamma oscillations if a good enough match occurs, or (c) beta oscillations if a bad enough match occurs. See the text for details.
    || Brain oscillations during match/mismatch, data, simulation. (a) TD corticothalamic feedback increases synchrony (Sillito etal 1994) (b) Match increases γ oscillations (c) Mismatch increases θ,β oscillations
  • image p232fig05.41 (a)-(c). The sequence of interlaminar events that SMART predicts during a mismatch reset. (d) Some of the compatible neurophysiological data.
    || Mismatch causes layer 5 dendritic spikes that trigger reset. (a) Arousal causes increase in nonspecific thalamic nuclei firing rate and layer 5 dendritic and later somatic spikes (Larkum and Zhu 2002, Williams and Stuart 1999) (b) Layer 5 spikes reach layer 4 via layer 6i and inhibitory neurons (Lund and Boothe 1975, Gilbert and Wiesel 1979) (c) habituative neurotransmitters in layer 6i shift the balance of active cells in layer 4 (Grossberg 1972, 1976) (d) Dendritic stimulation fires layer 5 (Larkum and Zhu 2002) stimulation of apical dendrites of nonspecific thalamus
  • Grossberg 2021 p229c2h0.60 SMART computer simulations demonstrate that a good enough match of a top-down expectation with a bottom-up feature pattern generates an attentive resonance during which the spikes of active cells synchronize in the gamma frequency range of 20-70 Hz (Figure 5.40). Many labs have reported a link between attention and gamma oscillations in the brain, including two articles published in 2001, one from the laboratory of Robert Desimone when he was at the National Institute of Mental Health in Bethesda (Fries, Reynolds, Rorie, Desimone 2001), and the other from the laboratory of Wolf Singer in Frankfurt (Engel, Fries, Singer 2001).
  • image p468fig12.65 Linguistic properties of the PHONET model and some of the data that it simulates. The upper left image summarizes the asymmetric transient-to-sustained gain control that helps to create invariant intraword ratios during variable-rate speech. The lower left image summarizes the rate-dependent gain control of the ARTPHONE model that creates rate-invariant working memory representations in response to sequences of variable-rate speech. The right image summarizes the kind of paradoxical VC-CV category boundary data of (Repp 1980) that ARTPHONE simulates. See the text for details.
    || (left upper) [transient, sustained] [working memory, filter, category]. (left lower) phone inputs-> [input rate estimate, features], Features w <- habituative transmitter gates -> categories-> rate invariant phonetic output, input rate estimate-> gain control-> [features, categories] rate-dependent integration of categories and features. (right) % 2-stop vs VC-CV silent interval (msec): [ib-ga, ib-ba, iga, iba].
  • image p420fig12.18 The ARTSTREAM model explains and simulates the auditory continuity illusion as an example of a spectral-pitch resonance. Interactions of ART Matching Rule and asymmetric competition mechanisms in cortical strip maps explain how the tone selects the consistent frequency from the noise in its own stream while separating the rest of the noise into another stream.
    || ARTSTREAM model (Grossberg 1999; Grossberg, Govindarajan, Wyse, Cohen 2004). SPINET. Frequency and pitch strips. Bottom Up (BU) harmonic sieve. Top Down (TD) harmonic ART matching. Exclusive allocation. Learn pitch categories based on early harmonic processing. A stream is a Spectral-Pitch Resonance!
  • image p422fig12.19 The ARTSTREAM model includes mechanisms for deriving streams both from pitch and from source direction. See the text for details.
    || [left, right] cart Peripheral processing = [input signal-> outer & middle ear preemphasis-> basilar membrane gammatone filterbank-> energy measure]. Spectral stream layer-> spectral summation layer-> delays-> [f-, tau] plane-> pitch stream layer-> pitch summation layer.
  • image p423fig12.20 The Spatial Pitch Network, or SPINET, model shows how a log polar spatial representation of the sound frequency spectrum can be derived from auditory signals occurring in time. The spatial representation allows the ARTSTREAM model to compute spatially distinct auditory streams.
    || SPINET model (Spatial Pitch Network) (Cohen, Grossberg, Wyse 1995). 1. input sound 2. Gamma-tone filter bank 3. Short-term average energy spectrum 4. MAP transfer function 5. On-center off-surround and rectification 6. Harmonic weighting 7. Harmonic summation and competition -> PITCH
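Steps 6-7 of the SPINET pipeline (harmonic weighting, then summation and competition) are in the spirit of subharmonic summation: each candidate pitch gains support from spectral components lying near its harmonics, and the largest total wins. A sketch, where the decaying weight h^k, the 3% tolerance, and the candidate grid are all assumptions, not the published model's parameters:

```python
import numpy as np

def spinet_pitch_sketch(component_freqs, f0_grid=None,
                        n_harmonics=8, h=0.84, tol=0.03):
    """Harmonic-summation pitch estimate (a sketch of SPINET steps 6-7).

    Each candidate fundamental f0 is scored by summing a decaying
    weight h**k over harmonics k whose predicted frequency k*f0 lies
    within tol of some spectral component. The winner of this
    summation/competition stage is returned as the pitch.
    """
    if f0_grid is None:
        f0_grid = np.arange(50.0, 500.0, 1.0)   # candidate pitches in Hz
    comps = np.asarray(component_freqs, dtype=float)
    scores = np.zeros(len(f0_grid))
    for i, f0 in enumerate(f0_grid):
        for k in range(1, n_harmonics + 1):
            # harmonic weighting: support decays for higher harmonics
            if np.any(np.abs(comps - k * f0) < tol * k * f0):
                scores[i] += h ** k
    return float(f0_grid[int(np.argmax(scores))])
```

Fed the components 200, 300, and 400 Hz with no energy at 100 Hz, the sketch still reports a pitch near 100 Hz, the missing-fundamental effect that harmonic-summation models of pitch are built to capture.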
  • image p424fig12.21 One of the many types of data about pitch processing that are simulated by the SPINET model. See the text for details.
    || Pitch shifts with component shifts (Patterson, Wightman 1976; Schouten 1962). Pitch vs lowest harmonic number.
  • image p425fig12.23 ARTSTREAM simulations of the auditory continuity illusion and other streaming properties (left column, top row). When two tones are separated by silence (Input), a percept of silence also separates them in a spectral-pitch resonance. (left column, bottom row). When two tones are separated by broadband noise, the percept of tone continues through the noise in one stream (stream 1) while the remainder of the noise occurs in a different stream (stream 2). (right column) Some of the other streaming properties that have been simulated by the ARTSTREAM model.
    || Auditory continuity does not occur without noise. Auditory continuity in noise. Other simulated streaming data.
  • image p431fig12.27 The strip maps that occur in ARTSTREAM and NormNet are variants of a cortical design that also creates ocular dominance columns in the visual cortex.
    || Adult organization of V1 (Grinvald etal http://www.weizmann.ac.il/brain/images/cubes.html). (1) Ocular dominance columns (ODCs): Alternating strips of cortex respond preferentially to visual inputs of each eye (R/L corresponds to Right and Left eye inputs in the figure); (2) Orientation columns: A smooth pattern of changing orientation preference within each ODC. Organized in a pinwheel like fashion.
  • p370 Chapter 11 means (Grossberg 2021) page 370, Chapter 11
    p002sec Illusion and reality means (Grossberg 2021) page 2, section Illusion and reality
    p013fig01.09 means (Grossberg 2021) page 13, Figure 1.09 (1.9 as in book)
    p030tbl01.02 means (Grossberg 2021) page 30, Table 1.02 (1.2 as in book)
    p111c2h0.5 means (Grossberg 2021) page 111, column 2, height from top as fraction of page height
    || text... are notes in addition to [figure, table] captions, mostly comprised of text within the image, but also including quotes of text in the book. Rarely, it includes comments by Howell preceded by "Howell". The latter are distinct from "readers' notes" (see, for example : reader Howell notes).
    p044 Howell: grepStr
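The reference codes above follow a regular pattern, so they can be parsed mechanically. The regex below is my own reconstruction of the convention (page, optional figure/table number, section marker, or column-and-height position), not part of the original notes:

```python
import re

# Pattern for the reference codes used in these notes, e.g.
# "p370", "p002sec", "p013fig01.09", "p030tbl01.02", "p111c2h0.5"
NOTE_CODE = re.compile(
    r"^p(?P<page>\d+)"
    r"(?:(?P<kind>fig|tbl)(?P<num>\d+\.\d+)"   # figure or table number
    r"|(?P<sec>sec)"                           # section reference
    r"|c(?P<col>\d)h(?P<height>0?\.\d+)"       # column + fractional height
    r")?$"
)

def parse_code(code):
    """Split one reference code into its parts; None if it doesn't match."""
    m = NOTE_CODE.match(code)
    return m.groupdict() if m else None
```

For example, parse_code("p111c2h0.5") recovers page 111, column 2, and height 0.5, which is enough to sort or grep the notes by book location.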
  • p001 Chapter 1 Overview - From Complementary Computing and Adaptive Resonance to conscious awareness
  • p250 Chapter 6 Conscious seeing and invariant recognition - Complementary cortical streams coordinate attention for seeing and recognition
  • p539 Chapter 15 Adaptively timed learning - How timed motivation regulates conscious learning and memory consolidation
  • image p039tbl01.03 The link between consciousness and movement
    ||
    VISUAL | seeing, knowing, and reaching
    AUDITORY | hearing, knowing, and speaking
    EMOTIONAL | feeling, knowing, and acting
  • image p042tbl01.04 The six main kinds of resonances which support different kinds of conscious awareness that will be explained and discussed in this book.
    ||
    type of resonance | type of consciousness
    surface-shroud | see visual object or scene
    feature-category | recognize visual object or scene
    stream-shroud | hear auditory object or stream
    spectral-pitch-and-timbre | recognize auditory object or stream
    item-list | recognize speech and language
    cognitive-emotional | feel emotion and know its source
  • image p270fig06.16 The same target position signal that can command the next saccade also updates a gain field that predictively maintains the attentional shroud in head-centered coordinates, even before the eye movement is complete. This process keeps the shroud invariant under eye movements, so that it can continue to inhibit reset of an emerging invariant category as it is associated with multiple object views, even while the conscious surface representation shifts with each eye movement in retinotopic coordinates. This updating process is often called predictive remapping.
    || Predictive remapping of eye movements! From V3A to LIP. [spatial attention, object attention, figure-ground separation, eye movement remapping, visual search]. (Beauvillain etal 2005, Carlson-Radvansky 1999, Cavanaugh etal 2001, Fecteau & Munoz 2003, Henderson & Hollingworth 2003, Irwin 1991)
  • image p278fig06.27 A surface-shroud resonance through the Where stream enables us to consciously see an object while a feature-category resonance into the What stream enables us to recognize it. Both kinds of resonances can synchronize via visual cortex so that we can know what an object is when we see it.
    || What kinds of resonances support knowing vs seeing? What stream [knowing, feature-prototype resonance], Where stream [seeing, surface-shroud resonance]
  • image p278fig06.28 If the feature-category resonances cannot form, say due to a lesion in IT, then a surface-shroud resonance can still support conscious seeing of an attended object, and looking at or reaching for it, even if the individual doing so knows nothing about the object, as occurs during visual agnosia. The surface-shroud resonance supports both spatial attention and releases commands that embody the intention to move towards the attended object.
    || What kinds of resonances support knowing vs seeing? visual agnosia: reaching without knowing Patient DF (Goodale etal 1991). Attention and intention both parietal cortical functions (Anderson, Essick, Siegel 1985; Gnadt, Andersen 1988; Snyder, Batista, Andersen 1997, 1998)
  • image p355fig10.02 Distinguishing processes of seeing vs knowing has been difficult because they interact so strongly.
    || Seeing vs. Knowing. Seeing and knowing [operate at different levels of the brain, use specialized circuits], but they [interact via feedback, use similar cortical designs, feedback is needed for conscious perception]. Cerebral Cortex: Seeing [V1-V4, MT-MST], Knowing [IT, PFC].
  • image p369fig10.19 Data from (Watanabe etal 2001) showing perceptual learning of the coherent motion direction, despite the lack of extra-foveal attention and awareness of the moving stimuli.
    || Unconscious perceptual learning of motion direction, % correct for two tests, compared to chance level results.
  • image p396fig11.35 Three properties of bipole boundary grouping in V2 can explain how boundaries oscillate in response to rivalry-inducing stimuli. Because all boundaries are invisible, however, these properties are not sufficient to generate a conscious percept of rivalrous surfaces.
    || 3 V2 boundary properties cause binocular rivalry. 1. Bipole grouping, 2. Orientational competition, 3. Activity-dependent habituation
  • image p419fig12.17 The auditory continuity illusion illustrates the ART Matching Rule at the level of auditory streaming. Its "backwards in time" effect of future context on past conscious perception is a signature of resonance.
    || Auditory continuity illusion. input, percept. Backwards in time - How does a future sound let past sound continue through noise? Resonance! - It takes a while to kick in. After it starts, a future tone can maintain it much more quickly. Why does this not happen if there is no noise? - ART Matching Rule! TD harmonic filter is modulatory without BU input. It cannot create something out of nothing.
  • image p436fig12.30 The conscious ARTWORD, or cARTWORD, laminar cortical speech model simulates how future context can disambiguate noisy past speech sounds in such a way that the completed percept is consciously heard to proceed from past to future as a feature-item-list resonant wave propagates through time.
    || cARTWORD: Laminar cortical model macrocircuit (Grossberg, Kazerounian 2011) Simulates PHONEMIC RESTORATION: Cognitive Working Memory (processed item sequences) - [Excitatory-> inhibitory-> habituative-> adaptive filter-> adaptive filter-> adaptive filter with depletable synapse-> Acoustic [item, feature]
  • image p455fig12.52 Simulation of cARTWORD dynamics in response to the complete list /1/-/2/-/3/. The relevant responses are surrounded by a red box.
    || Presentation of a normal sequence: input /1/-/2/-/3/. |c(i,1)-5| vs time (msec). List chunks select most predictive code. Order stored in WM layers. Resonant activity of /1/-/2/-/3/ in item and feature layers corresponds to conscious speech percept.
  • image p484fig13.04 The top-down feedback from the orbitofrontal cortex closes a feedback loop that supports a cognitive-emotional resonance. If this resonance can be sustained long enough, it enables us to have feelings at the same time that we experience the categories that caused them.
    || Cognitive-Emotional resonance. Basis of "core consciousness" and "the feeling of what happens". (Damasio 1999) derives heuristic version of CogEM model from his clinical data. Drive-> amygdala-> prefrontal cortex-> sensory cortex, resonance around the latter 3. How is this resonance maintained long enough to become conscious?
  • image p514fig13.44 Analog of the CogEM model in Figure 6.1 of (Damasio 1999).
    || (a) map of object X-> map of proto-self at inaugural instant-> [, map of proto-self modified]-> assembly of second-order map. (b) map of object X enhanced-> second-order map imaged.
  • image p105fig03.23 The pointillist painting A Sunday on la Grande Jatte by Georges Seurat illustrates how we group together both large-scale coherence among the pixels of the painting, as well as forming small groupings around the individual dabs of color.
    ||
  • image p107fig03.25 The Roofs of Collioure by Matisse. See the text for details
    || p107c1h0.6 "... [Matisse] showed how patches of pure color, when laid down properly on a canvas, could be grouped by the brain into emergent boundaries, without the intervention of visible outlines. ... The trick was that these emergent boundaries, being invisible, or amodal, did not darken the colors in the surface representations. In this sense, Matisse intuitively realized that "all boundaries are invisible" through the masterful way in which he arranged his colors on canvas to generate boundaries that could support compelling surface representations. ..."
  • image p108fig03.27 Matisse
  • image p110fig03.32 Claude Monet
  • image p120fig03.43 Four paintings by Monet of the Rouen cathedral under different lighting conditions (top row) and their monochromatic versions (bottom row). See the text for details.
    || p119c2h0.25 "... Monet uses nearby colors that are nearly equiluminant, and sharp, high-contrast luminance defined edges are sparse. He hereby creates weaker boundary signals within and between the parts of many forms, and stronger boundary signals between the forms. This combination facilitates color spreading within the forms and better separation of brightness and color differences between forms. ... The grayscale versions of these paintings demonstrate the near equiluminance of the brushstrokes within forms, and places in which brightness and color differences significantly influence the groupings that differentiate between forms, including the differentiation between the cathedral and the sky. ..."
  • image p120fig03.44 The Rouen cathedral at sunset generates very different boundary webs than it does in full sunlight, as illustrated by Figure 3.45.
    || Rouen Cathedral at sunset (Monet 1892-1894).
    • Lighting almost equiluminant
    • Most boundaries are thus caused by color differences, not luminance differences
    • Fine architectural details are obscured, leading to...
    • Coarser and more uniform boundary webs, so...
    • Less depth in the painting.
  • image p121fig03.45 The Rouen cathedral in full sunlight.
    || Rouen Cathedral full sunlight (Monet 1892-1894).
    • Lighting is strongly non-uniform across most of the painting
    • Strong boundaries due to both luminance and color differences
    • Fine architectural details are much clearer, leading to...
    • Finer and more non-uniform boundary webs, so...
    • Much more detail and depth
  • image p121fig03.46 The Rouen cathedral in full sunlight contains T-Junctions that are not salient in the painting of it at sunset. These are among the painting
  • image p171fig04.49 An example of DaVinci stereopsis in which the left eye sees more of the wall between A and C than the right eye does. The region between B and C is seen only by the left eye because the nearer wall between C and D occludes it from the right eye view.
  • image p377fig11.11 DaVinci stereopsis phenomena occur when only one eye can receive visual inputs from part of a 3D scene due to occlusion by a nearer surface.
    || How does monocular information contribute to depth perception? DaVinci stereopsis (Gillam etal 1999). Only by utilizing monocular information can visual system create correct depth percept. [left, right] eye view
  • image p379fig11.13 How the 3D LAMINART model explains DaVinci stereopsis. All the stages of boundary and surface formation are color coded to clarify their explanation. Although each mechanism is very simple, when all of them act together, the correct depthful surface representation is generated. See the text for details.
    || DaVinci stereopsis (Nakayama, Shimojo 1990). An emergent property of the previous simple mechanisms working together. [V1 binocular boundaries -> V2 initial boundaries -> V2 final boundaries -> V4 surface] pair [Binocular match: boundaries of thick bar -> Add monocular boundaries along lines-of-sight -> Line-of-sight inhibition kills weaker vertical boundaries -> 3D surface percept not just a disparity match!] pair [Binocular match: right edge of thin and thick bars -> Strongest boundaries: binocular and monocular boundaries add -> Vertical boundaries from monocular left edge of thin bar survive -> Filling-in contained by connected boundaries]. cart [very near, near, fixation plane, far, very far]
  • image p380fig11.14 The model explanation of DaVinci stereopsis when the input stimuli have opposite contrast polarities.
    || Polarity-reversed Da Vinci stereopsis (Nakayama, Shimojo 1990). Same explanation! (... as Figure 11.13 ...) [V1 binocular boundaries -> V2 initial boundaries -> V2 final boundaries -> V4 surface] cart [very near, near, fixation plane, far, very far]
  • image p381fig11.15 The same model mechanisms explain the surface percept that is generated by the variant of DaVinci stereopsis that Gillam, Blackburn, and Nakayama studied in 1999.
    || DaVinci stereopsis (Gillam, Blackburn, Nakayama 1999). same model mechanisms. [V1 binocular boundaries -> V2 initial boundaries -> V2 final boundaries -> V4 surface] cart [very near, near, fixation plane, far, very far]
  • image p382fig11.16 The version of DaVinci stereopsis wherein three narrow rectangles are binocularly matched with one thick rectangle can also be explained in a similar way.
    || DaVinci stereopsis of [3 narrow, one thick] rectangles (Gillam, Blackburn, Nakayama 1999). [V1 binocular boundaries -> V2 initial boundaries -> V2 final boundaries -> V4 surface] cart [very near, near, fixation plane, far, very far]
  • p618 Chapter 17 A universal development code - Mental measurements embody universal laws of cell biology and physics
  • image p073fig02.19 Computing with cells: infinity does not exist in biology!
    || Computing in a bounded activity domain, Gedanken experiment (Grossberg 1970). Vm sub-areas [xm, B - xm], I(all m), m = [1, i, B].
    B         excitable sites
    xi(t)     excited sites (activity, potential)
    B - xi(t) unexcited sites
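    The site bookkeeping above (B excitable sites, xi(t) excited, B - xi(t) unexcited) motivates Grossberg's shunting equation, in which an input can only excite the unexcited sites, so activity is bounded for any input strength. A minimal sketch under that standard reading, dx/dt = -A*x + (B - x)*I, with illustrative parameter values (the function name is mine):

```python
import math

def shunting_activity(I, t, A=1.0, B=1.0, x0=0.0):
    """Exact solution of the shunting equation dx/dt = -A*x + (B - x)*I
    for a constant input I: excitation acts only on the B - x unexcited
    sites, so x(t) can never leave the interval [0, B]."""
    x_eq = B * I / (A + I)          # equilibrium activity, always < B
    return x_eq + (x0 - x_eq) * math.exp(-(A + I) * t)

# Even a huge input cannot push activity past B = 1:
# "infinity does not exist in biology".
for I in (1.0, 10.0, 1e6):
    x = shunting_activity(I, t=10.0)
    assert 0.0 <= x < 1.0
```

    Note the design consequence: the equilibrium B*I/(A + I) saturates gracefully rather than diverging, which is the bounded-activity property the gedanken experiment is after.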
  • image p082fig02.37 My models begin with behavioral data, since brains are designed to achieve behavioral success. The text explains how models evolve in stages, through a process of successive refinements, or unlumpings. These unlumpings together carry out a kind of conceptual evolution, leading to models that can explain and predict ever larger psychological and neurobiological databases.
    || Modelling method and cycle.
    Behavioral data -(art of modeling)-> Design principles <- Neural data <-(brain predictions)- Mathematical model and analysis -(behavioral predictions)-> Behavioural data
    Operationalizes "proper level of abstraction"
    Operationalizes that you cannot "derive a brain" in one step.
  • image p501fig13.26 A simple differential equation describes the processes of transmitter accumulation and release that do their best, at a finite rate, to carry out unbiased transduction.
    || Transmitter accumulation and release. Transmitter y cannot be restored at an infinite rate: T = S*y, y ~= B. Differential equation: d[dt: y] = A*(B - y) - S*y = accumulate - release. Transmitter y tries to recover to ensure unbiased transduction. What if it falls behind? Evolution has exploited the good properties that happen then.
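    The caption's equation can be simulated directly. A minimal sketch with illustrative parameter values (function name mine), showing the "falling behind" the caption refers to: when the signal S switches on, release T = S*y first overshoots and then habituates as y is depleted faster than it can accumulate:

```python
def simulate_gate(signal, A=0.1, B=1.0, y0=1.0, dt=0.01):
    """Euler-integrate the transmitter equation dy/dt = A*(B - y) - S*y
    for a sequence of signal values S(t); return released signal T = S*y."""
    y, T = y0, []
    for S in signal:
        T.append(S * y)                  # release at a finite rate
        y += dt * (A * (B - y) - S * y)  # accumulate - release
    return T

# Step input: S jumps from 0 to 1 and stays on. y starts fully
# accumulated (y0 = B), so release overshoots, then y falls behind and
# T habituates toward S*y_eq, with y_eq = A*B/(A + S) = 0.1/1.1.
T = simulate_gate([1.0] * 10000)
assert T[0] == 1.0 and T[-1] < 0.1       # overshoot, then habituation
```

    This overshoot-then-habituate transient is one of the "good properties" exploited downstream, e.g. in reset and rivalry dynamics discussed elsewhere in these notes.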
  • Grossberg 2021 p081c2h0.66 sub-section: How does one evolve a computational brain?
    The above discussion illustrates that no single step of theoretical derivation can derive a whole brain. One needs a method for deriving a brain in stages, or cycles, much as evolution has incrementally discovered ever more complex brains over many thousands of years. The following theoretical method has been successfully applied many times since I first used it in 1957. It embodies a kind of conceptual evolutionary process for deriving a brain.

    Because "brain evolution needs to achieve behavioural success", we need to start with data that embody indices of behavioral success. That is why, as illustrated in Figure 2.37 Modelling method and cycle, one starts with Behavioral Data from scores or hundreds of psychological experiments. These data are analyzed as the result of an individual adapting autonomously in real time to a changing world. This is the Art of Modeling. It requires that one be able to infer from static data curves the dynamical processes that control individual behaviors occurring in real time. One of the hardest things that I teach my students to do is "how to think in real time" to be able to carry out this speculative leap.

    Properly carried out, this analysis leads to the discovery of new Design Principles that are embodied by these behavioral processes. The Design Principles highlight the functional meaning of the data, and clarify how individual behaviors occurring in real time give rise to these static data curves.

    These principles are then converted into the simplest Mathematical Model using a method of minimal anatomies, which is a form of Occam's Razor.
  • image p248fig05.49 This circuit of the LAMINART model helps to explain properties of Up and Down states during slow wave sleep, and how disturbances in ACh dynamics can disrupt them.
    ||
  • image p278fig06.28 If the feature-category resonances cannot form, say due to a lesion in IT, then a surface-shroud resonance can still support conscious seeing of an attended object, and looking at or reaching for it, even if the individual doing so knows nothing about the object, as occurs during visual agnosia. The surface-shroud resonance supports both spatial attention and releases commands that embody the intention to move towards the attended object.
    || What kinds of resonances support knowing vs seeing? visual agnosia: reaching without knowing Patient DF (Goodale etal 1991). Attention and intention both parietal cortical functions (Andersen, Essick, Siegel 1985; Gnadt, Andersen 1988; Snyder, Batista, Andersen 1997, 1998)
  • image p557fig15.26 Brain regions and processes that contribute to autistic behavioral symptoms when they become imbalanced in prescribed ways.
    || Basal Ganglia prolonged gate opening <-> { Amygdala emotionally depressed-> [hippocampus- hyperspecific learning; Cerebellum- adaptive timing fails; hypofrontal blocking fails, no Theory of Mind]-> Neocortex; Neocortex- rewards not received-> Amygdala}.
  • image p189fig05.04 The hippocampus is one of several brain regions that are important in learning and remembering about objects and events that we experience throughout life. The book will describe several hippocampal processes that contribute to this achievement in different ways.
    || hypothalamic nuclei, amygdala, hippocampus, cingulate gyrus, corpus callosum, thalamus
  • image p228fig05.37 A macrocircuit of the neurotrophic Spectrally Timed ART, or nSTART, model. I developed nSTART with my PhD student Daniel Franklin. It proposes how adaptively timed learning in the hippocampus, bolstered by Brain Derived Neurotrophic Factor, or BDNF, helps to ensure normal memory consolidation.
    || habituative gates, CS, US, Thalamus (sensory cortex, category learning, conditioned reinforcer learning, adaptively timed learning and BDNF), Amygdala (incentive motivation learning), Hippocampus (BDNF), Prefrontal Cortex (attention), Pontine nuclei, Cerebellum (adaptively timed motor learning)
  • image p233fig05.42 Mismatch-induced beta oscillations have been reported in at least three parts of the brain: V1, V4, and hippocampus. Although there may be other reasons for beta oscillations in the brain, those that are caused by a mismatch should be studied in concert with the gamma oscillations that occur during a good enough match. See the text for details.
    || Is there evidence for the [gamma, beta] prediction? Yes, in at least three parts of the brain, (Buffalo EA, Fries P, Landman R, Buschman TJ, Desimone R 2011, PNAS 108, 11262-11267) Does this difference in average oscillation frequencies in the superficial and deep layers reflect layer 4 reset? Superficial recording γ (gamma), Deep recording β (beta) (Berke etal 2008, hippocampus; Buschman and Miller 2009, FEF)
  • image p520fig14.02 Macrocircuit of the main brain regions, and connections between them, that are modelled in the unified predictive Adaptive Resonance Theory (pART) of cognitive-emotional and working memory dynamics. Abbreviations in red denote brain regions used in cognitive-emotional dynamics. Those in green denote brain regions used in working memory dynamics. Black abbreviations denote brain regions that carry out visual perception, learning and recognition of visual object categories, and motion perception, spatial representation and target tracking. Arrows denote non-excitatory synapses. Hemidiscs denote adaptive excitatory synapses. Many adaptive synapses are bidirectional, thereby supporting synchronous resonant dynamics among multiple cortical regions. The output signals from the basal ganglia that regulate reinforcement learning and gating of multiple cortical areas are not shown. Also not shown are output signals from cortical areas to motor responses. V1: striate, or primary, visual cortex; V2 and V4: areas of prestriate visual cortex; MT: Middle Temporal cortex; MST: Medial Superior Temporal area; ITp: posterior InferoTemporal cortex; ITa: anterior InferoTemporal cortex; PPC: Posterior Parietal Cortex; LIP: Lateral InterParietal area; VPA: Ventral PreArcuate gyrus; FEF: Frontal Eye Fields; PHC: ParaHippocampal Cortex; DLPFC: DorsoLateral PreFrontal Cortex; HIPPO: hippocampus; LH: Lateral Hypothalamus; BG: Basal Ganglia; AMYG: AMYGdala; OFC: OrbitoFrontal Cortex; PRC: PeriRhinal Cortex; VPS: Ventral bank of the Principal Sulcus; VLPFC: VentroLateral PreFrontal Cortex. See the text for further details.
    ||
  • image p532fig14.08 Macrocircuit of the ARTSCENE Search neural model for learning to search for desired objects by using the sequences of already experienced objects and their locations to predict what and where the desired object is. V1 = First visual area or primary visual cortex; V2 = Second visual area; V4 = Fourth visual area; PPC = Posterior Parietal Cortex; ITp = posterior InferoTemporal cortex; ITa = anterior InferoTemporal cortex; MTL = Medial Temporal Lobe; PHC = ParaHippoCampal cortex; PRC = PeriRhinal Cortex; PFC = PreFrontal Cortex; DLPFC = DorsoLateral PreFrontal Cortex; VPFC = Ventral PFC; SC = Superior Colliculus.
    ||
  • image p541fig15.02 The neurotrophic Spectrally Timed Adaptive Resonance Theory, or nSTART, model of (Franklin, Grossberg 2017) includes hippocampus to enable adaptively timed learning that can bridge a trace conditioning gap, or other temporal gap between CS and US.
    || Hippocampus can sustain a Cognitive-Emotional resonance: that can support "the feeling of what happens" and knowing what event caused that feeling. [CS, US] -> Sensory Cortex (SC) <- motivational attention <-> category learning -> Prefrontal Cortex (PFC). SC conditioned reinforcement learning-> Amygdala (cannot bridge the temporal gap) incentive motivational learning-> PFC. SC adaptively timed learning and BDNF-> Hippocampus (can bridge the temporal gap) BDNF-> PFC. PFC adaptively timed motor learning-> cerebellum.
  • image p543fig15.06 The circuit between dentate granule cells and CA1 hippocampal pyramid cells seems to compute spectrally timed responses. See the text for details.
    || Hippocampal interpretation. 1. Dentate granule cells (Berger, Berry, Thompson 1986): "increasing firing...in the CS period...the latency...was constant". 2. Pyramidal cells: "Temporal model" Dentate granule cells-> CA3 pyramids. 3. Convergence (Squire etal 1989): 1e6 granule cells, 1.6e5 CA3 pyramids. 80-to-1 (ri).
  • image p549fig15.19 How the adaptively timed hippocampal spectrum T inhibits (red arrow) the orienting system A as motivated attention in orbitofrontal cortex Si(2) peaks at the ISI.
    || Conditioning, Attention, and Timing circuit. Hippocampus spectrum-> Amygdala orienting system-> neocortex motivational attention. Adaptive timing inhibits orienting system and maintains adaptively timed Motivated Attention on the CS.
  • image p557fig15.26 Brain regions and processes that contribute to autistic behavioral symptoms when they become imbalanced in prescribed ways.
    || Basal Ganglia prolonged gate opening <-> { Amygdala emotionally depressed-> [hippocampus- hyperspecific learning; Cerebellum- adaptive timing fails; hypofrontal blocking fails, no Theory of Mind]-> Neocortex; Neocortex- rewards not received-> Amygdala}.
  • image p573fig16.01 The experimental chamber (A) and neurophysiological recordings from a rat hippocampus (B) that led to the discovery of place cells. See the text for details.
    ||
  • image p575fig16.03 As a rat navigates in its experimental chamber (black curves), neurophysiological recordings disclose the firing patterns (in red) of (a) a hippocampal place cell and (b) an entorhinal grid cell.
    ||
  • image p578fig16.04 Cross-sections of the hippocampal regions and the inputs to them. See the text for details.
    || EC-> CA1-> CA3-> DG. Layers [V/VI, III, II].
  • image p583fig16.10 The GRIDSmap model is embedded into a more complete representation of the processing stages from receipt of angular head velocity and linear velocity signals to the learning of place cells.
    || GRIDSmap. Pre-wired 2D stripe cells, learns 2D grid cells. vestibular cells [angular head velocity-> head direction cells, linear velocity]-> stripe cells- small scale 1D periodic spatial code (ECIII)-> SOM grid cells entorhinal cortex- small scale 2D periodic spatial scale-> SOM place cells hippocampal cortex- large scale 2D spatial code (dentate/CA3). Unified hierarchy of SOMs.
  • image p600fig16.36 The entorhinal-hippocampal system has properties of an ART spatial category learning system, with hippocampal place cells as the spatial categories. See the text for details.
    || Entorhinal-hippocampal interactions as an ART system. Hippocampal place cells as spatial categories. Angular head velocity-> head direction cells-> stripe cells- small scale 1D periodic code (ECIII) SOM-> grid cells- small scale 2D periodic code (ECII) SOM-> place cells- larger scale spatial map (DG/CA3)-> place cells (CA1)-> conjunctive-coding cells (EC V/VI)-> top-down feedback back to stripe cells- small scale 1D periodic code (ECIII). stripe cells- small scale 1D periodic code (ECIII)-> place cells (CA1).
  • image p602fig16.37 Data showing the effect of hippocampal inactivation by muscimol on grid cell firing before, during, and six hours after the muscimol, reading from left to right.
    || Hippocampal inactivation disrupts grid cells (Bonnevie etal 2013). muscimol inactivation. spikes on trajectory: [before, after min [6-20, 20-40, 40-60, 6h]]. rate map (Hz) [18.6, 11.4, 9.5, 6.7, 10.8]. spatial autocorrelogram g=[1.12, 0.05, -0.34, 0.09, 1.27].
  • image p603fig16.38 Role of hippocampal feedback in maintaining grid fields. (a) Data showing the effect of hippocampal inactivation before and during muscimol inhibition of hippocampal cells, as in Figure 16.37. (b) Model simulation with normal grid fields. (c) Model simulation that emulates the effect of hippocampal inhibition on grid fields.
    || (a) Data: hippocampal inactivation [before, after] cart [spikes on trajectory (p: [18.6, 6.7] Hz), spatial autocorrelogram (g= [1.12, 0.09])]. (b) Model: noise-free path integration, [spikes on trajectory (p: 14.56 Hz), rate map, spatial autocorrelogram (g= 1.41), dynamic autocorrelogram (g=0.6)]. (c) Model: noisy path integration + non-specific tonic inhibition, [spikes on trajectory (p: 11.33 Hz), rate map, spatial autocorrelogram (g= 0.05), dynamic autocorrelogram (g=0.047)].
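    The gridness scores g quoted in the two captions above come from the spatial autocorrelogram of the firing-rate map: hexagonal grid-cell firing makes the autocorrelogram correlate with itself when rotated by 60 or 120 degrees, but not by 30, 90, or 150 degrees. A minimal self-contained sketch of that standard rotate-and-correlate recipe on a synthetic hexagonal rate map (function names, parameters, and the omission of annulus masking are my simplifications, not the papers' exact method):

```python
import numpy as np

def rotate_about_zero_lag(img, angle_deg):
    """Rotate an image about pixel (n//2, m//2), i.e. the zero-lag pixel
    of a centered autocorrelogram, using bilinear interpolation, zero fill."""
    n, m = img.shape
    cy, cx = n // 2, m // 2                 # zero lag after np.fft.fftshift
    th = np.deg2rad(angle_deg)
    yy, xx = np.mgrid[0:n, 0:m]
    # inverse-map each output pixel back to its source coordinates
    ys = cy + (yy - cy) * np.cos(th) - (xx - cx) * np.sin(th)
    xs = cx + (yy - cy) * np.sin(th) + (xx - cx) * np.cos(th)
    y0 = np.floor(ys).astype(int)
    x0 = np.floor(xs).astype(int)
    fy, fx = ys - y0, xs - x0
    out = np.zeros_like(img, dtype=float)
    ok = (y0 >= 0) & (y0 < n - 1) & (x0 >= 0) & (x0 < m - 1)
    y0, x0, fy, fx = y0[ok], x0[ok], fy[ok], fx[ok]
    out[ok] = (img[y0, x0] * (1 - fy) * (1 - fx)
               + img[y0 + 1, x0] * fy * (1 - fx)
               + img[y0, x0 + 1] * (1 - fy) * fx
               + img[y0 + 1, x0 + 1] * fy * fx)
    return out

def gridness(rate_map):
    """Crude gridness score: hexagonal firing makes the spatial
    autocorrelogram match its own 60/120-degree rotations, not 30/90/150."""
    f = np.fft.fft2(rate_map - rate_map.mean())
    ac = np.fft.fftshift(np.fft.ifft2(f * np.conj(f)).real)  # autocorrelogram
    c = {a: np.corrcoef(ac.ravel(), rotate_about_zero_lag(ac, a).ravel())[0, 1]
         for a in (30, 60, 90, 120, 150)}
    return min(c[60], c[120]) - max(c[30], c[90], c[150])

# Synthetic hexagonal firing field: three cosine gratings 60 degrees apart,
# rectified to a non-negative rate map.
n = 64
yy, xx = np.mgrid[0:n, 0:n]
k = 2 * np.pi / 16                          # grid spacing of 16 pixels
hexgrid = sum(np.cos(k * (xx * np.cos(t) + yy * np.sin(t)))
              for t in (0.0, np.pi / 3, 2 * np.pi / 3))
assert gridness(np.maximum(hexgrid, 0)) > 0.0   # grid cell: positive gridness
```

    With this convention, the g values above make sense: g near 1 marks intact hexagonal firing before inactivation and after recovery, while g near 0 marks the disrupted grid fields during muscimol.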
  • image p617fig16.50 The perirhinal and parahippocampal cortices enable adaptively timed reinforcement learning and spatial navigational processes that are modeled by Spectral Spacing models in the What and Where cortical streams, respectively, to be fused in the hippocampus.
    || What and Where inputs to the hippocampus (Diana, Yonelinas, Ranganath 2007). Adaptively timed conditioning and spatial navigation. Hippocampus <-> Entorhinal Cortex <-> [Perirhinal Cortex <-> what, Parahippocampal Cortex <-> where].
  • p190 Howell: [neural microcircuits, modal architectures] used in ART -
    bottom-up filters | top-down expectations | purpose
    instar learning | outstar learning | p200fig05.13 Expectations focus attention: [in, out]star often used to learn the adaptive weights. top-down expectations to select, amplify, and synchronize expected patterns of critical features, while suppressing unexpected features
    LGN Lateral Geniculate Nucleus | V1 cortical area | p192fig05.06 focus attention on expected critical features
    EC-III entorhinal stripe cells | CA1 hippocampal place cells | p600fig16.36 entorhinal-hippocampal system has properties of an ART spatial category learning system, with hippocampal place cells as the spatial categories
    auditory orienting arousal | auditory category | p215fig05.28 How a mismatch between bottom-up and top-down patterns can trigger activation of the orienting system A and, with it, a burst of nonspecific arousal to the category level. ART Matching Rule: TD mismatch can suppress a part of F1 STM pattern, F2 is reset if degree of match < vigilance
    auditory stream with/without [silence, noise] gaps | perceived sound continues? | p419fig12.17 The auditory continuity illusion: Backwards in time - How does a future sound let past sound continue through noise? Resonance!
    visual perception, learning and recognition of visual object categories | motion perception, spatial representation and target tracking | p520fig14.02 pART predictive Adaptive Resonance Theory. Many adaptive synapses are bidirectional, thereby supporting synchronous resonant dynamics among multiple cortical regions. The output signals from the basal ganglia that regulate reinforcement learning and gating of multiple cortical areas are not shown.
      red - cognitive-emotional dynamics
      green - working memory dynamics
      black - see [bottom-up, top-down] lists
    EEG with mismatch, arousal, and STM reset events | expected [P120, N200, P300] event-related potentials (ERPs) | p211fig05.21 Sequences of P120, N200, and P300 event-related potentials (ERPs) occur during oddball learning EEG experiments under conditions that ART predicted should occur during sequences of mismatch, arousal, and STM reset events, respectively.
    Cognitive | Emotional | p541fig15.02 nSTART neurotrophic Spectrally Timed Adaptive Resonance Theory: Hippocampus can sustain a Cognitive-Emotional resonance: that can support "the feeling of what happens" and knowing what event caused that feeling. Hippocampus enables adaptively timed learning that can bridge a trace conditioning gap, or other temporal gap between Conditioned Stimulus (CS) and Unconditioned Stimulus (US).
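    The first row of the table above pairs instar (bottom-up filter) with outstar (top-down expectation) learning. Both laws move adaptive weights toward an activity pattern, gated by a cell's activity; the difference is architectural (instar: fan-in weights of a category cell; outstar: fan-out weights read out by a source cell). A minimal sketch with illustrative learning rates (function names mine):

```python
def instar(w, x, y, lr=0.1):
    """Fan-in weights w of a category cell with activity y, input pattern x:
    dw_i/dt = y * (x_i - w_i) -- w converges to x while the cell is active."""
    return [w_i + lr * y * (x_i - w_i) for w_i, x_i in zip(w, x)]

def outstar(w, x, y, lr=0.1):
    """Fan-out weights w from a source cell with activity y, target pattern x:
    dw_i/dt = y * (x_i - w_i) -- w learns to read out the pattern x."""
    return [w_i + lr * y * (x_i - w_i) for w_i, x_i in zip(w, x)]

pattern = [0.9, 0.1, 0.4]
w = [0.0, 0.0, 0.0]
for _ in range(200):
    w = instar(w, pattern, y=1.0)   # active category cell samples the input
# weights have converged to the sampled pattern (the learned prototype)
assert all(abs(w_i - p_i) < 1e-3 for w_i, p_i in zip(w, pattern))
# no learning when the gating cell is silent (y = 0)
assert outstar([0.5, 0.5], [1.0, 0.0], y=0.0) == [0.5, 0.5]
```

    The gating by y is what lets the same weight law serve both roles in the table: only the winning category (instar) or the active sampling cell (outstar) learns, leaving other memories intact.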

    background colours in the table signify :
  • image p419fig12.17 The auditory continuity illusion illustrates the ART Matching Rule at the level of auditory streaming. Its "backwards in time" effect of future context on past conscious perception is a signature of resonance.
    || Auditory continuity illusion. input, percept. Backwards in time - How does a future sound let past sound continue through noise? Resonance! - It takes a while to kick in. After it starts, a future tone can maintain it much more quickly. Why does this not happen if there is no noise? - ART Matching Rule! TD harmonic filter is modulatory without BU input. It cannot create something out of nothing.
  • image p420fig12.18 The ARTSTREAM model explains and simulates the auditory continuity illusion as an example of a spectral-pitch resonance. Interactions of ART Matching Rule and asymmetric competition mechanisms in cortical strip maps explain how the tone selects the consistent frequency from the noise in its own stream while separating the rest of the noise into another stream.
    || ARTSTREAM model (Grossberg 1999; Grossberg, Govindarajan, Wyse, Cohen 2004). SPINET. Frequency and pitch strips. Bottom Up (BU) harmonic sieve. Top Down (TD) harmonic ART matching. Exclusive allocation. Learn pitch categories based on early harmonic processing. A stream is a Spectral-Pitch Resonance!
  • image p425fig12.23 ARTSTREAM simulations of the auditory continuity illusion and other streaming properties (left column, top row). When two tones are separated by silence (Input), a percept of silence also separates them in a spectral-pitch resonance. (left column, bottom row). When two tones are separated by broadband noise, the percept of tone continues through the noise in one stream (stream 1) while the remainder of the noise occurs in a different stream (stream 2). (right column) Some of the other streaming properties that have been simulated by the ARTSTREAM model.
    || Auditory continuity does not occur without noise. Auditory continuity in noise. Other simulated streaming data.
    backgound colours in the table signify :p404 Chapter 12From seeing and reaching to hearing and speaking - Circular reaction, streaming, working memory, chunking, and number
  • image p030tbl01.02 The What and Where cortical processing streams obey complementary laws. These laws enable the What stream to rapidly and stably learn invariant object categories without experiencing catastrophic forgetting, while the Where stream learns labile spatial and action representations to control actions that are aimed towards these objects.
    ||
  • WHAT | WHERE
    spatially-invariant object learning and recognition | spatially-variant reaching and movement
    fast learning without catastrophic forgetting | continually update sensory-motor maps and gains
    IT InferoTemporal Cortex | PPC Posterior Parietal Cortex
    | What | Where
    matching | excitatory | inhibitory
    learning | match | mismatch
  • image p032fig01.21 At least three parallel visual cortical streams respond to visual inputs that reach the retina. Two parvocellular streams process visual surfaces (blob stream) and visual boundaries (interblob stream). The magnocellular stream processes visual motion.
    || [Retina, LGNs, V[1,2,3,4], MT] to [What- inferotemporal areas, Where- parietal areas]: visual parallel streams [2x blob, 1x bound]
  • image p039tbl01.03 The link between consciousness and movement
    ||
    VISUAL | seeing, knowing, and reaching
    AUDITORY | hearing, knowing, and speaking
    EMOTIONAL | feeling, knowing, and acting
  • image p092fig03.05 A cross-section of the retinal layer. Note that light stimuli need to go through all retinal layers before they reach the photoreceptor layer at which the light signals are registered.
    || light stimuli ->
    retinal layers | cellular composition
    inner limiting membrane |
    retinal nerve fibre | ganglion nerve fibres
    ganglion cell | ganglion
    inner plexiform | amacrine
    inner nuclear | horizontal
    outer plexiform |
    outer limiting membrane |
    photoreceptor | rod
    photoreceptor | cone
    retinal pigment epithelium |
    <- signal transduction. http://brain.oxfordjournals.org/content/early/2011/01/20/brain.awq346
  • image p232fig05.41 (a)-(c). The sequence of interlaminar events that SMART predicts during a mismatch reset. (d) Some of the compatible neurophysiological data.
    || Mismatch causes layer 5 dendritic spikes that trigger reset. (a) Arousal causes increase in nonspecific thalamic nuclei firing rate and layer 5 dendritic and later somatic spikes (Larkum and Zhu 2002, Williams and Stuart 1999) (b) Layer 5 spikes reach layer 4 via layer 6i and inhibitory neurons (Lund and Boothe 1975, Gilbert and Wiesel 1979) (c) Habituative neurotransmitters in layer 6i shift the balance of active cells in layer 4 (Grossberg 1972, 1976) (d) Dendritic stimulation fires layer 5 (Larkum and Zhu 2002); stimulation of apical dendrites by nonspecific thalamus
  • image p278fig06.28 If the feature-category resonances cannot form, say due to a lesion in IT, then a surface-shroud resonance can still support conscious seeing of an attended object, and looking at or reaching for it, even if the individual doing so knows nothing about the object, as occurs during visual agnosia. The surface-shroud resonance supports both spatial attention and releases commands that embody the intention to move towards the attended object.
    || What kinds of resonances support knowing vs seeing? visual agnosia: reaching without knowing Patient DF (Goodale etal 1991). Attention and intention both parietal cortical functions (Andersen, Essick, Siegel 1985; Gnadt, Andersen 1988; Snyder, Batista, Andersen 1997, 1998)
  • image p303fig08.20 The G-wave speeds up with the distance between flashes at a fixed delay, and has a consistent motion across multiple spatial scales.
    || G-wave properties (Grossberg 1977). Theorem 2 (Equal half-time property) The time at which the motion signal reaches position w=L/2. Apparent motion speed-up with distance: this half-time is independent of the distance L between the two flashes. Consistent motion across scales: half-time is independent of the scale size K. Method of proof: elementary algebra and calculus (Grossberg, Rudd 1989 appendix)
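The equal half-time property can be checked numerically. A minimal sketch, not the Grossberg-Rudd derivation: assume the waning first flash and waxing second flash are exponential traces spread by Gaussian filters of scale K. The peak of the summed activity reaches the midpoint w = L/2 when the two traces are equal, at a time independent of both the flash separation L and the filter scale K.

```python
import numpy as np

def half_time(L, K, t_grid=np.linspace(0.01, 3.0, 3000)):
    """First time at which the peak of the summed Gaussian activity
    profile of two flashes (at 0 and L) reaches the midpoint L/2.
    Flash traces are assumed exponential: a(t)=exp(-t), b(t)=1-exp(-t)."""
    x = np.linspace(-L, 2 * L, 4001)
    for t in t_grid:
        a, b = np.exp(-t), 1.0 - np.exp(-t)
        profile = (a * np.exp(-x**2 / (2 * K**2))
                   + b * np.exp(-(x - L)**2 / (2 * K**2)))
        if x[np.argmax(profile)] >= L / 2:
            return t
    return None

# Half-time is (numerically) the same across separations L and scales K:
# the peak crosses L/2 when a(t) = b(t), i.e. at t = ln 2, for every L and K.
times = [half_time(L, K) for L in (2.0, 6.0) for K in (0.5, 1.5)]
print(times)   # all close to ln 2 ≈ 0.693
```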
  • image p304fig08.21 A computer simulation of the equal half-time property whereby the apparent motions within different scales that respond to the same flashes all reach the half-way point in the motion trajectory at the same time.
    || Equal half-time property: how multiple scales cooperate to generate motion percept. Travelling waves from Gaussian filters of different sizes bridge the same distance in comparable time. The time needed to bridge half the distance between flashes is the same.
  • image p335fig08.61 Behavioral data (left image) and simulation (right image) about speed in correct and error trials of the RT task. See text for details.
    || Behavioral data: speed, correct and error trials (RT task) (Roitman, Shadlen 2002). More coherence in the motion causes faster reaction time.
  • image p350fig09.22 How the negative Gaussian of an obstacle causes a peak shift to avoid the obstacle without losing sight of how to reach the goal.
    || Steering dynamics: obstacle avoidance. body-centered coordinates [obstacle, goal, heading] -> steering
  • image p351fig09.25 By the time MT+ is reached, directional transient cells and directional filters have begun to extract more global directional information from the image.
    || MT+ computes global motion estimate. Estimate global motion from noisy local motion estimates.
  • image p414fig12.11 Neurophysiological data from cortical areas 4 and 5 (every other column) and simulations thereof (other columns) during a reach.
    || activation vs time. (a) area 4 phasic RT (IFV) (b) area 4 tonic (OPV) (c) area 4 phasic-tonic (OFPV) (d) area 4 phasic MT (DVV) (e) area 5 phasic (DV) (f) area 5 tonic (PPV)
  • image p416fig12.13 The DIRECT model learns, using a circular reaction that is energized by an Endogenous Random Generator, or ERG, to make motor-equivalent volitionally-activated reaches. This circular reaction learns a spatial representation of a target in space. It can hereby make accurate reaches with clamped joints and on its first try using a tool under visual guidance; see Figure 12.16.
    || DIRECT model (Bullock, Grossberg, Guenther 1993). learns by circular reaction. learns spatial representation to mediate between vision and action. motor-equivalent reaching. can reach target with clamped joints. can reach target with a TOOL on the first try under visual guidance. How did tool use arise?!
  • image p416fig12.14 Computer simulations of DIRECT reaches with (b) a tool, (c) a clamped elbow, and (d) with a blindfold, among other constraints.
    || Computer simulations of DIRECT reaches [unconstrained, with TOOL, elbow clamped at 140°, blindfolded]
  • image p417fig12.15 The DIRECT and DIVA models have homologous circuits to learn and control motor-equivalent reaching and speaking, with tool use and coarticulation as resulting properties. See the text for why.
    || From Seeing and Reaching to Hearing and Speaking, Circular reactions (Piaget 1945, 1951, 1952). Homologous circuits for development and learning of motor-equivalent REACHING and SPEAKING. DIRECT TOOL use (Bullock, Grossberg, Guenther 1993), DIVA Coarticulation (Guenther 1995)
  • image p428fig12.25 (left architecture) Auditory-articulatory feedback loop whereby babbled sounds activate learning in an imitative map that is later used to learn to reproduce the sounds of other speakers. An articulatory-to-auditory expectation renders learning possible by making the auditory and motor data dimensionally consistent, as in the motor theory of speech. (right architecture) Parallel streams in the ARTSPEECH model for learning speaker-independent speech and language meaning, including a mechanism for speaker normalization (right cortical stream) and for learning speaker-dependent vocalic qualities (left cortical stream).
    || left: Speaker-dependent vocalic qualities; right: Speaker-independent speech and language meaning
  • image p430fig12.26 The NormNet model shows how speaker normalization can be achieved using specializations of the same mechanisms that create auditory streams. See the text for how.
    || [Anchor vs Stream] log frequency map. -> diagonals-> Speaker-independent acoustic item information-> [BU adaptive filter, TD learned expectation]-> learned item recognition categories
  • image p446fig12.44 (left column, top row) LIST PARSE can model linguistic data from human subjects. In this figure, model parameters are fixed to enable a close fit to data about error-type distributions in immediate free recall experiments, notably transposition errors. (right column, top row) Simulation and data showing bowing of the serial position curve, including an extended primacy gradient. (left column, bottom row) The simulation curve overlays data about list length effects, notably the increasing recall difficulty of longer lists during immediate serial recall (ISR). (right column, bottom row) Simulation (bottom image) and data (top image) of the limited temporal extent for recall.
    || (1. TL) Error-type distributions in immediate serial recall (Henson etal 1996). % occurrence vs serial position. Graph convention: Data- dashed lines; Simulations- solid lines. Six letter visual ISR. Order errors- transpositions of neighboring items are the most common. Model explanation: Noisy activation levels change relative order in primacy gradient. Similar activation of neighboring items most susceptible to noise. Model parameters fitted on these data. (2. TR) Bowing of serial position curve (Cowan etal 1999). % correct vs serial position. Auditory ISR with various list lengths (graphs shifted rightward): For [, sub-]span lists- extended primacy, with one (or two) item recency; Auditory presentation- enhanced performance for last items. LIST PARSE: End effects- first and last items half as many members; Echoic memory- last presented item retained in separate store. (3. BL) List length effects, circles (Crannell, Parrish 1968), squares (Baddeley, Hitch 1975), solid line- simulation. % list correct vs list length. Variable list length ISR: longer lists are more difficult to recall. LIST PARSE: More items- closer activation levels and lower absolute activity level with enough inputs; Noise is more likely to produce order errors; Activity levels more likely to drop below threshold. (4. BR) Limited temporal extent for recall (Murdock 1961). % recalled vs retention interval (s). ISR task with distractor-filled retention intervals (to prevent rehearsal): Increasing retention interval- decreases probability of recalling list correctly; Load dependence- longer lists more affected by delays; Performance plateau- subjects reach apparent asymptote. LIST PARSE: Increased convergence of activities with time; loss of order information.
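The model explanation of the order errors and list-length effects above can be illustrated with a toy simulation, not LIST PARSE itself (the gradient range, noise level, and trial count are assumptions): recall reads items out of a noisy primacy gradient, and longer lists pack activation levels closer together, so the same noise produces more order errors.

```python
import numpy as np

rng = np.random.default_rng(0)

def order_error_rate(list_length, noise=0.05, trials=2000):
    """Fraction of trials in which a list stored as a primacy gradient
    is read out (most-active first) in the wrong order."""
    gradient = np.linspace(1.0, 0.5, list_length)   # primacy gradient
    errors = 0
    for _ in range(trials):
        noisy = gradient + rng.normal(0.0, noise, list_length)
        order = np.argsort(-noisy)                  # recall by activation
        errors += not np.array_equal(order, np.arange(list_length))
    return errors / trials

err4, err8 = order_error_rate(4), order_error_rate(8)
print(err4, err8)   # the longer list is recalled correctly less often
```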
  • image p231fig05.39 The SMART hypothesis testing and learning cycle predicts that vigilance increases when a mismatch in subcortical regions like the nonspecific thalamus activates the nucleus basalis of Meynert which, in turn, broadcasts a burst of the neurotransmitter acetylcholine, or ACh, to deeper cortical layers. Due to the way in which LAMINART proposes that cortical matching and mismatching occurs, this ACh burst can increase vigilance and thereby trigger a memory search. See the text for details.
    || [BU input, [, non]specific thalamic nucleus, thalamic reticular nucleus, neocortical laminar circuit] cart [Arousal, Reset, Search, Vigilance]
  • image p461fig12.58 The lisTELOS model built upon key processes that were earlier modeled by the TELOS model. See the text for details.
    || TELOS model (Brown, Bullock, Grossberg 1999, 2004). shows [BG nigro-[thalamic, collicular], FEF, ITa, PFC, PNR-THAL, PPC, SEF, SC, V1, V4/ITp, Visual Cortex input] and [GABA].
  • image p523fig14.03 (a) The MOTIVATOR neural model generalizes CogEM by also including the basal ganglia. It can hereby explain and simulate complementary functions of the amygdala and basal ganglia (SNc) during conditioning and learned performance. The basal ganglia generate Now Print signals in response to unexpected rewards. These signals modulate learning of new associations in many brain regions. The amygdala supports motivated attention to trigger actions that are expected to occur in response to conditioned or unconditioned stimuli. Object Categories represent visual or gustatory inputs in anterior inferotemporal (ITA) and rhinal (RHIN) cortices, respectively. Value Categories represent the value of anticipated outcomes on the basis of hunger and satiety inputs, in amygdala (AMYG) and lateral hypothalamus (LH). Object-Value Categories resolve the value of competing perceptual stimuli in medial (MORB) and lateral (ORB) orbitofrontal cortex. The Reward Expectation Filter detects the omission or delivery of rewards using a circuit that spans ventral striatum (VS), ventral pallidum (VP), striosomal delay (SD) cells in the ventral striatum, the pedunculopontine nucleus (PPTN) and midbrain dopaminergic neurons of the substantia nigra pars compacta/ventral tegmental area (SNc/VTA). The circuit that processes CS-related visual information (ITA, AMYG, ORB) operates in parallel with a circuit that processes US-related visual and gustatory information (RHIN, AMYG, MORB). (b) Reciprocal adaptive connections between hypothalamus and amygdala enable amygdala cells to become learned value categories. The bottom region represents hypothalamic cells, which receive converging taste and metabolite inputs whereby they become taste-drive cells. Bottom-up signals from activity patterns across these cells activate competing value category, or US Value Representations, in the amygdala. 
A winning value category learns to respond selectively to specific combinations of taste-drive activity patterns and sends adaptive top-down priming signals back to the taste-drive cells that activated it. CS-activated conditioned reinforcer signals are also associatively linked to value categories. Adaptive connections end in (approximately) hemidiscs. See the text for details.
    ||
  • image p524fig14.04 (a) Model basal ganglia circuit for the control of dopaminergic Now Print signals from the substantia nigra pars compacta, or SNc, in response to unexpected rewards. Cortical inputs (Ii), activated by conditioned stimuli, learn to excite the SNc via a multi-stage pathway from the ventral striatum (S) to the ventral pallidum and then on to the PPTN (P) and the SNc (D). The inputs Ii excite the ventral striatum via adaptive weights WIS, and the ventral striatum excites the SNc with strength W_PD. The striosomes, which contain an adaptive spectral timing mechanism [xij, Gij, Yij, Zij], learn to generate adaptively timed signals that inhibit reward-related activation of the SNc. Primary reward signals (I_R) from the lateral hypothalamus both excite the PPTN directly (with strength W_RP) and act as training signals to the ventral striatum S (with strength W_RS) that trains the weights W_IS. Arrowheads denote excitatory pathways, circles denote inhibitory pathways, and hemidiscs denote synapses at which learning occurs. Thick pathways denote dopaminergic signals.
    ||
  • image p559fig15.27 Brain regions and processes that contribute to the release of dopaminergic Now Print signals by the substantia nigra pars compacta, or SNc, in response to unexpected reinforcing events. See the text for details.
    || Model of spectrally timed SNc learning (Brown, Bullock, Grossberg 1999). Delayed inhibitory expectations of reward. Dopamine cells signal an error in reward prediction timing or magnitude. Immediate excitatory predictions of reward. Lateral hypothalamus (Primary Reward Input)-> [(+)ventral striatum <-> ventral pallidum (+)-> PPTN(+)-> SNc]. SNc-> [dopamine signal -> ventral striatum, Striosomal cells]. Conditioned Stimuli (CS)(+)-> [ventral striatum, striosomal cells]. Striosomal cells(-)-> SNc.
  • image p560fig15.29 Excitatory pathways that support activation of the SNc by a US and the conditioning of a CS to the US.
    || Excitatory pathway. Primary reward (apple juice) briefly excites lateral hypothalamus. Hypothalamic-PPTN excitation causes SNc dopamine burst. Hypothalamic activity excites ventral striatum for training. Active CS working memory signals learn to excite ventral striatum. Lateral hypothalamus (Primary Reward Input)-> [(+)ventral striatum <-> ventral pallidum(+)-> PPTN(+)-> SNc]. SNc-> dopamine signal -> ventral striatum. Conditioned Stimuli working memory trace (CS)(+)-> ventral striatum.
  • image p560fig15.30 The inhibitory pathway from striosomal cells to the SNc is able to inhibit the SNc when a reward occurs with expected timing and magnitude.
    || Inhibitory pathway. Learning: CS-striosomal LTP occurs due to a three-way coincidence [An active CS working memory input, a Ca2+ spike, a dopamine burst]; Signaling: The delayed Ca2+ spike facilitates striosomal-SNc inhibition;. Striosomal cells learn to predict both timing and magnitude of reward signal to cancel it: reward expectation;. Conditioned stimuli (CS) LTP-> Striosomal cells <- dopamine | (-)-> SNc->.
  • image p561fig15.32 The SNc can generate both dopamine bursts and dips in response to rewards whose amplitude is unexpectedly large or small.
    || Inhibitory pathway: expectation magnitude. 1. If reward is greater than expected, a dopamine burst causes striosomal expectation to increase. 2. If reward is less than expected, a dopamine dip causes striosomal expectation to decrease. 3. This is a negative feedback control system for learning. Conditioned stimuli (CS)-> Striosomal cells <- dopamine | (-)-> SNc->.
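The negative-feedback logic of point 3 can be sketched in a few lines. This is a minimal stand-in for the striosomal circuit, not the full spectral-timing model (the learning rate and step count are assumptions): the adaptive expectation inhibits the SNc, so the dopamine signal is the difference between delivered and expected reward; bursts grow the expectation, dips shrink it, and it converges on the actual reward magnitude.

```python
# Minimal sketch of the striosomal negative-feedback loop (assumed
# learning rate): dopamine = reward - expectation; bursts (> 0) increase
# the striosomal expectation, dips (< 0) decrease it.
def train(reward, steps=200, lr=0.1):
    expectation = 0.0
    for _ in range(steps):
        dopamine = reward - expectation   # burst if > 0, dip if < 0
        expectation += lr * dopamine      # striosomal weight update
    return expectation

print(train(1.0))   # converges toward the reward magnitude 1.0
```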
  • image p569fig15.40 The direct and indirect basal ganglia circuits that control GO and STOP movement signals. See the text for details.
    || [Direct path GO(+), Indirect path STOP(+), dopamine from SNc(+-)]-> striatum. GO-> GPi/SNr-> Thalamus (VA/Vlo) <-> frontal cortex. STOP-> GPe <-> STN-> GPi/SNr. NAc-> GPi/SNr.
  • image p375fig11.06 The contrast constraint on binocular fusion is realized by obligate cells in layer 3B of cortical area V1.
    || Model implements contrast constraint on binocular fusion (cf. "obligate" cells Poggio 1991). An ecological constraint on cortical development. [left, right] eye cart V1-[4 monocular simple, 3B binocular simple, complex2/3A] cells. Inhibitory cells (red) ensure that fusion occurs when contrasts in left and right eye are approximately equal.
  • image p376fig11.09 The disparity filter in V2 helps to solve the correspondence problem by eliminating spurious contrasts using line-of-sight inhibition.
    || Model V2 disparity filter solves the correspondence problem. An ecological constraint on cortical development. [left, right] eye view: False matches (black) suppressed by line-of-sight inhibition (green lines). "Cells that fire together wire together".
  • image p581fig16.06 The learning of hexagonal grid cell receptive fields as an animal navigates an open field is a natural consequence of simple trigonometric properties of the positions at which the firing of stripe cells that are tuned to different directions will co-occur.
    || The Trigonometry of spatial navigation. Coactivation of stripe cells.
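The coactivation trigonometry can be made concrete. A small check, assuming idealized stripe cells that fire wherever the projection of position onto their preferred direction is an integer multiple of the spacing d: if two stripe directions are 60 degrees apart, a third family at 120 degrees coactivates automatically, because e(120) = e(60) - e(0), so joint firing falls on a hexagonal (triangular) lattice.

```python
import math

def projection(p, theta_deg):
    """Projection of 2D position p onto direction theta (degrees)."""
    th = math.radians(theta_deg)
    return p[0] * math.cos(th) + p[1] * math.sin(th)

d = 1.0
# Basis of the lattice where projections onto 0 and 60 degrees are both
# integer multiples of d (solved from proj_0 = m*d, proj_60 = n*d):
b1 = (d, -d / math.sqrt(3.0))
b2 = (0.0, 2.0 * d / math.sqrt(3.0))

# At every lattice point the projection onto 120 degrees is automatically
# an integer multiple of d as well, since e(120) = e(60) - e(0).
for m in range(-3, 4):
    for n in range(-3, 4):
        p = (m * b1[0] + n * b2[0], m * b1[1] + n * b2[1])
        for theta in (0, 60, 120):
            ratio = projection(p, theta) / d
            assert abs(ratio - round(ratio)) < 1e-9
print("stripe directions 60 degrees apart coactivate on a hexagonal lattice")
```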
  • image p583fig16.09 The GRIDSmap model used algorithmically defined stripe cells to process realistic rat trajectories. The stripe cell outputs then formed inputs to the adaptive filter of a self-organizing map which learned hexagonal grid cell receptive fields.
    || GRIDSmap. Self-organizing map receives inputs from stripe cells and learns to respond to most frequent co-activation patterns. Stripe cells combine speed and head direction to create a periodic 1D position code. Virtual rat navigated using live rat trajectories from Moser Lab. Speed and head direction drives stripe cells.
  • image p584fig16.11 GRIDSmap simulation of the learning of hexagonal grid fields. See the text for details.
    || Simulation results. Multiple phases per scale. response vs length scale (0.5m+).
  • image p585fig16.13 Hexagonal grid cell receptive fields develop if their stripe cell directional preferences are separated by 7, 10, 15, 20, or random numbers of degrees. The number and directional selectivities of stripe cells can thus be chosen within broad limits without undermining grid cell development.
    ||
  • image p585fig16.14 Superimposing firing of stripe cells whose directional preferences differ by 60 degrees supports learning hexagonal grid cell receptive fields in GRIDSmap.
    || GRIDSmap: from stripe cells to grid cells. Grid-cell Regularity from Integrated Distance through Self-organizing map. Superimposing firing of stripe cells oriented at intervals of 60 degrees. Hexagonal grid!
  • image p586fig16.15 Superimposing stripe cells oriented by 45 degrees does not lead to learning of rectangular grids in GRIDSmap, but it does in an oscillatory inference model.
    || Why is a hexagonal grid favored? Superimposing firing of stripe cells oriented at intervals of 45 degrees. Rectangular grid. This and many other possibilities do not happen in vivo. They do happen in the oscillatory inference model. How are they prevented in GRIDSmap?
  • image p587fig16.17 A finer analysis of the 2D trigonometry of spatial navigation showed that both the frequency and amplitude of coactivations by stripe cells determine the learning of hexagonal grid fields.
    || A refined analysis: SOM amplifies most frequent and energetic coactivations (Pilly, Grossberg 2012). [linear track, 2D environment]. (left) Stripe fields separated by 90°. 25 coactivations by 2 inputs. (right) Stripe fields separated by 60°. 23 coactivations by 3 inputs.
  • image p588fig16.18 Simulations of coordinated learning of grid cell receptive fields (second row) and unimodal place cell receptive fields (third row) by the hierarchy of SOMs in the GridPlaceMap model. Note the exquisite regularity of the hexagonal grid cell firing fields.
    || [stripe, grid, place] cells vs [spikes on trajectory, unsmoothed rate map, smoothed rate map].
  • image p605fig16.39 Data showing effects of medial septum (MS) inactivation on grid cells and network theta oscillations in medial entorhinal cortex (MEC). (A) Examples of disruption in the spatial expression of the hexagonal grid structure for two grid cells (Brandon etal 2011). (B) Temporary reduction in the power and frequency of network theta oscillations (Koenig etal 2011). (C) Temporary reduction in the gridness score, mean firing rate, and spatial stability of grid cells (Koenig etal 2011).
    || Disruptive effects of Medial Septum inactivation in Medial Entorhinal Cortex (Brandon etal 2011; Koenig etal 2011). (A) Rate map [rate map, spatial autocorrelations, trajectory] vs [baseline, sub-sampled, medial septum inactivation, 3-6 hour recovery, 24 hour recovery], [rate map (Hz- m, p), spatial autocorrelations (gridness)][ 1.2, 7.2, 1.1; 0.25, 1.7, 0.6; 0.25, 2.5, -0.53; 0.7, 5.1, 0.55; 1.0, 5.3, 1.3; 2.1, 15, 0.19; 1.7, 12, 0.71; 1.7, 3.2, -0.22; 1.8, 9.1, 0.68; 2.5, 13, 0.46]. (B) [normalized power at 7-9 Hz, frequency (Hz)] vs 5-minute periods. (C) [mean gridness score (+-SEM), mean firing rate (% of baseline), mean correlation coeff (+-SEM)] vs 10-minute periods.
  • p353 Chapter 10 Laminar computing by cerebral cortex - Towards a unified theory of biological and artificial intelligence
  • image p011fig01.07 The choice of signal function f determines how an initial activity pattern will be transformed and stored in short-term memory (STM). Among [same, slower, faster]-than-linear signal functions, only the last one can suppress noise. It does so as it chooses the population that receives the largest input for storage, while suppressing the activities of all other populations, thereby giving rise to a winner-take-all choice.
    || initial pattern (xi(0) vs i):
    f | Xi(∞) = xi(∞)/sum[j: xj(∞)] | x(∞)
    linear | perfect storage of any pattern | amplifies noise (or no storage)
    slower-than-linear | saturates | amplifies noise
    faster-than-linear | chooses max [winner-take-all, Bayesian], categorical perception | suppresses noise, [normalizes, quantizes] total activity, finite state machine
  • image p357fig10.04 Laminar Computing achieves its properties by computing in a new way that synthesizes the best properties of feedforward and feedback interactions, analog and digital computations, and preattentive and attentive learning. The property of analog coherence enables coherent groupings and decisions to form without losing sensitivity to the amount of evidence that supports them.
    || Laminar Computing: a new way to compute. 1. Feedforward and feedback: a) Fast feedforward processing when data are unambiguous (eg Thorpe etal), b) slower feedback chooses among ambiguous alternatives [self-normalizing property, real-time probability theory], c) A self-organizing system that trades certainty against speed: Goes beyond Bayesian models! 2. Analog and Digital: Analog Coherence combines the stability of digital with the sensitivity of analog. 3. Preattentive and Attentive Learning: Reconciles the differences of (eg) Helmholtz and Kanizsa, "A preattentive grouping is its own
  • image p230fig05.38 The Synchronous Matching ART, or SMART, model includes spiking neurons in a laminar cortical hierarchy. I developed SMART with my PhD student Massimiliano Versace. By unlumping LAMINART to include spiking neurons, finer details of neurodynamics, such as the existence of faster gamma oscillations during good enough matches, and slower beta oscillations during bad enough mismatches, could be shown as emergent properties of network interactions.
    || Second order thalamus -> specific thalamic nucleus -> Thalamic reticular nucleus -> neocortical laminar circuit [6ll, 6l, 5, 2/3, 1] -> Higher order cortex. Similar for First order thalamus -> First order cortex, with interconnection to Second order, nonspecific thalamic nucleus
  • image p232fig05.40 Computer simulation of how the SMART model generates (a) gamma oscillations if a good enough match occurs, or (c) beta oscillations if a bad enough match occurs. See the text for details.
    || Brain oscillations during match/mismatch, data, simulation. (a) TD corticothalamic feedback increases synchrony (Sillito etal 1994) (b) Match increases γ oscillations (c) Mismatch increases θ,β oscillations
  • image p233fig05.42 Mismatch-induced beta oscillations have been reported in at least three parts of the brain: V1, V4, and hippocampus. Although there may be other reasons for beta oscillations in the brain, those that are caused by a mismatch should be studied in concert with the gamma oscillations that occur during a good enough match. See the text for details.
    || Is there evidence for the [gamma, beta] prediction? Yes, in at least three parts of the brain, (Buffalo EA, Fries P, Landman R, Buschman TJ, Desimone R 2011, PNAS 108, 11262-11267) Does this difference in average oscillation frequencies in the superficial and deep layers reflect layer 4 reset? Superficial recording γ (gamma), Deep recording β (beta) (Berke etal 2008, hippocampus; Buschman and Miller 2009, FEF)
  • image p296fig08.07 When two flashes turn on and off out of phase with the correct range of interstimulus intervals, and not too far from one another, then either beta motion or phi motion is perceived.
    || Beta and Phi motion percepts. Beta motion: percepts of continuous motion of a well-defined object across empty intervening space. Phi motion: sense of "pure" motion without a concurrent percept of moving object. (Exner 1875) http://www.yorku.ca/eye/balls.htm
  • image p297fig08.08 When a second flash is more intense than the first flash, then apparent motion may occur from the second to the first flash.
    || Delta motion: motions from the second to the first flash. Data: (Kolers 1972; Korte 1915). Simulation: (Grossberg, Rudd 1992). This occurs when the luminance or contrast of the second flash is large compared to that of the first flash. Sustained and transient cells obey shunting dynamics whose averaging rates speed up with output intensity. The first flash to wane is the one that will be the source of the G-wave.
  • image p340fig09.07 Log polar remapping from the retina to cortical area V1 and beyond converts expansion, translation, and spiral flows on the retina into parallel flows, with different orientations, on the cortical map.
    || Log polar remapping of optic flow. retina -> cortex. Any combination of expansion and circular motion centered on the fovea maps to cortex as a single direction. Retinal Cartesian coordinates (x,y) map to cortical polar coordinates (r,theta). This makes it easy to compute directional receptive fields in the cortex!
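The claim that expansion and rotation flows become parallel cortical flows is just the complex logarithm at work. A minimal sketch with a few illustrative retinal points: under w = log(z), an expansion about the fovea (z -> λz) becomes a uniform shift along the log r axis, and a rotation (z -> e^{iφ}z) a uniform shift along the θ axis.

```python
import numpy as np

# Log polar remapping: retinal position z = x + iy maps to cortical
# position w = log(z) = (log r, theta). Sample retinal points are
# illustrative values only.
retina = np.array([0.5 + 0.2j, 1.0 - 0.8j, 2.0 + 1.5j])
cortex = np.log(retina)                    # (log r, theta) coordinates

expanded = np.log(1.5 * retina)            # expansion flow on the retina
rotated = np.log(retina * np.exp(0.3j))    # rotation about the fovea

print(expanded - cortex)   # constant shift log(1.5) along the log r axis
print(rotated - cortex)    # constant shift 0.3 along the theta axis
```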
  • image p423fig12.20 The Spatial Pitch Network, or SPINET, model shows how a log polar spatial representation of the sound frequency spectrum can be derived from auditory signals occurring in time. The spatial representation allows the ARTSTREAM model to compute spatially distinct auditory streams.
    || SPINET model (Spatial Pitch Network) (Cohen, Grossberg, Wyse 1995). 1. input sound 2. Gamma-tone filter bank 3. Short-term average energy spectrum 4. MAP transfer function 5. On-center off-surround and rectification 6. Harmonic weighting 7. Harmonic summation and competition -> PITCH
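Steps 1-3 of this pipeline can be sketched with a tiny gammatone filter bank. The center frequencies, bandwidth, filter order, and window length below are illustrative assumptions, not SPINET's parameters; the sketch only shows a filter bank followed by a short-term average energy spectrum, with the energy peaking in the channel matching the input tone.

```python
import numpy as np

# Minimal sketch of SPINET steps 1-3: gammatone filter bank plus
# short-term average energy spectrum (illustrative parameters).
fs = 16000
t = np.arange(0, 0.064, 1 / fs)          # 64 ms analysis window

def gammatone(fc, b=120.0, order=4):
    """Gammatone impulse response at center frequency fc (Hz)."""
    return t**(order - 1) * np.exp(-2 * np.pi * b * t) * np.cos(2 * np.pi * fc * t)

centers = [250, 500, 1000, 2000, 4000]   # log-spaced channels
bank = [gammatone(fc) for fc in centers]

tone = np.sin(2 * np.pi * 1000 * t)      # input: a 1 kHz pure tone
energies = [np.mean(np.convolve(tone, h, mode="same")**2) for h in bank]

best = centers[int(np.argmax(energies))]
print(best)   # the 1000 Hz channel carries the most energy
```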
  • image p598fig16.34 The spiking GridPlaceMap model generates theta-modulated place and grid cell firing, unlike the rate-based model.
    || Theta-modulated cells in spiking model. [place, grid] cell vs [membrane potential (mV vs time), frequency vs inter-spike intervals (s), power spectra (normalized power vs frequency (Hz))].
  • Grossberg 2021 p229c2h0.60 SMART computer simulations demonstrate that a good enough match of a top-down expectation with a bottom-up feature pattern generates an attentive resonance during which the spikes of active cells synchronize in the gamma frequency range of 20-70 Hz (Figure 5.40). Many labs have reported a link between attention and gamma oscillations in the brain, including two articles published in 2001, one from the laboratory of Robert Desimone when he was at the National Institute of Mental Health in Bethesda (Fries, Reynolds, Rorie, Desimone 2001), and the other from the laboratory of Wolf Singer in Frankfurt (Engel, Fries, Singer 2001).
  • p618 Chapter 17 A universal development code - Mental measurements embody universal laws of cell biology and physics
  • image p025fig01.16 (left panel) The main processing stages of the Cognitive-Emotional-Motor (CogEM) model have anatomical interpretations in terms of sensory cortex, amygdala, and prefrontal cortex. Chapter 13 will describe in greater detail how CS cues activate invariant object categories in the sensory cortex, value categories in the amygdala, and object-value categories in the prefrontal cortex, notably the orbitofrontal cortex. The amygdala is also modulated by internal drive inputs like hunger and satiety. (right panel) Anatomical data support this circuit, as do many neurophysiological data.
    || drive -> amygdala -> prefrontal cortex <-> sensory cortex -> amygdala. [visual, somatosensory, auditory, gustatory, olfactory] cortex -> [amygdala, Orbital Prefrontal Cortex]. amygdala -> Lateral Prefrontal Cortex
  • image p058fig02.04 Serial learning paradigm: Learning the temporal order of events by practicing them in the order that they occur in time.
    || Learning a global arrow in time. How do we learn to encode the temporal order of events in LTM? serial learning. [w=intra, W=inter]trial interval. "... data about serial verbal learning (Figure 2.4) seemed to suggest that events can go "backwards in time". ..."
  • image p059fig02.05 Bowed serial position curve. This kind of data emphasizes the importance of modelling how our brains give rise to our minds using nonlinear systems of differential equations.
    || Effects of [inter, intra]trial intervals (Hovland 1938). # of errors vs list position. [w (sec), W (sec)] = (2 6) (4 6) (2 126) (4 126). Nonoccurrence of future items reduces the number of errors in response to past items. These data require a real-time theory for their explanation! that is, DIFFERENTIAL equations.
  • image p059fig02.06 The bowed serial position curve illustrates the sense in which "events can go backwards in time" during serial learning.
    || Bow due to backward effect in time. If the past influenced the future, but not conversely: # of errors vs list position; Data (Hovland, Hull, Underwood, etc).
  • image p071fig02.16 To solve the noise-saturation dilemma, individual neurons in a network that is receiving distributed spatial patterns of inputs need to remain sensitive to the ratio of the input to them divided by all the inputs in that spatial pattern. Although the inputs are delivered to a finite number of neurons, the input and activity patterns are drawn continuously across the cells for simplicity.
    || Noise-Saturation Dilemma. [Ii, xi] vs t. [Input, Activity] pattern [small -> noise, large -> saturation]. Problem: remain sensitive to input RATIOS θi = Ii / sum[j: Ij] as total input I = sum[j: Ij] -> ∞. Many kinds of data exhibit sensitivity to ratios of inputs.
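The ratio-processing solution can be checked directly from the steady state of a shunting on-center off-surround network; a minimal sketch (decay rate A and saturation bound B are illustrative):

```python
def shunting_steady_state(inputs, A=1.0, B=1.0):
    """Equilibrium of dx_i/dt = -A*x_i + (B - x_i)*I_i - x_i*sum_{j!=i} I_j,
    which is x_i = B*I_i / (A + total): activities track the input ratios
    theta_i = I_i / total and stay bounded below B even as total -> infinity."""
    total = sum(inputs)
    return [B * I / (A + total) for I in inputs]

weak   = shunting_steady_state([1.0, 2.0, 1.0])      # dim pattern
strong = shunting_steady_state([100.0, 200.0, 100.0]) # same pattern, 100x brighter
# in both regimes x_2/x_1 equals the input ratio 2: no saturation, no noise loss
```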
  • image p073fig02.19 Computing with cells: infinity does not exist in biology!
    || Computing in a bounded activity domain, Gedanken experiment (Grossberg 1970). Vm sub-areas [xm, B - xm], I(all m), m=[1, i, B].
    B : excitable sites
    xi(t) : excited sites (activity, potential)
    B - xi(t) : unexcited sites
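The Gedanken experiment's mass-action equation for B excitable sites can be integrated directly; a toy Euler sketch (rates, bound, and step size illustrative) showing that activity never exceeds the finite number of sites B, however large the input grows:

```python
def simulate_shunting(I, A=1.0, B=1.0, dt=0.0001, steps=50000):
    """Euler-integrate dx/dt = -A*x + (B - x)*I from x(0) = 0.
    The (B - x) shunting term turns excitation off as sites saturate,
    so x(t) approaches B*I/(A + I) < B: infinity does not exist in biology."""
    x = 0.0
    for _ in range(steps):
        x += dt * (-A * x + (B - x) * I)
    return x
```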
  • image p082fig02.37 My models begin with behavioral data, since brains are designed to achieve behavioral success. The text explains how models evolve in stages, through a process of successive refinements, or unlumpings. These unlumpings together carry out a kind of conceptual evolution, leading to models that can explain and predict ever larger psychological and neurobiological databases.
    || Modelling method and cycle.
    Behavioral data -(art of modeling)-> Design principles <- Neural data <-(brain predictions)- Mathematical model and analysis -(behavioral predictions)-> Behavioural data
    Operationalizes "proper level of abstraction"
    Operationalizes that you cannot "derive a brain" in one step.
  • image p085fig02.38 Our models have been used in many large-scale applications to engineering and technology. Linking brain to behavior explains how brain mechanisms give rise to psychological functions, and do so autonomously. The combination of mechanism, function, and autonomy helps to explain their value in helping to solve outstanding problems in technology.
    || Modeling method and cycle.
    Behavioral data -(art of modeling)-> Design principles <- Neural data <-(brain predictions)- Mathematical model and analysis -(behavioral predictions)-> Behavioural data
    Technology: Mathematical model and analysis <-> Technological applications
    At every stage, spin off new model designs and mechanisms to technologists who need autonomous intelligent applications.
  • image p134fig04.14 The kinds of displays that Michael Paradiso and Ken Nakayama used to catch filling-in "in the act" and which Karl Arrington then simulated using the Grossberg and Todorovic 1988 model.
    || Experiments on filling-in. Catching "filling-in" in the act (Paradiso, Nakayama 1991). (Arrington 1994 Vision Research 34, 3371-3387) simulated these data using the model of Grossberg and Todorovic 1988.
  • image p145fig04.23 If end gaps were not closed by end cuts, then color would flow out of every line end!
    || A perceptual disaster in the feature contour system. feature contour, line boundary. input -> [boundary, surface]. boundary -> surface. Color would flow out of every line end! as it does during neon color spreading.
  • image p151fig04.29 Experimental evidence of bipole cells in cortical area V2 was reported by Von der Heydt, Peterhans, and Baumgartner (1984).
    || Bipoles: first neurophysiological evidence (V2) (von der Heydt, Peterhans, Baumgartner 1984, Peterhans, von der Heydt 1988). (Grossberg 1984) prediction.
    Ordering: Stimulus (S), probe location *, response of cells in V2:
    ...(S)*...                  YES
    ...*...(S)                  NO
    (S)...*...                  NO
    (S)...*...(S)               YES
    (S)...*... (more contrast)  NO
    (S)...*.....(S)             YES
    Evidence for receptive field.
  • image p151fig04.30 Anatomical evidence for long-range horizontal connections has also been reported, as illustrated by the example above from (Bosking etal 1997).
    || Anatomy: horizontal connections (V1) (Bosking etal 1997). tree shrew. [10, 20]*[20, 10, 0, -10, -20] (degrees).
  • image p152fig04.31 The predicted bipole cell receptive field (upper left corner) has been supported by both neurophysiological data and psychophysical data, and used in various forms by many modelers. See the text for details.
    || Bipoles through the ages. (Grossberg 1984; Grossberg, Mingolla 1985). (Field, Hayes, Hess 1993) "association field". (Heitger, von der Heydt 1993). (Williams, Jacobs 1997). cf. "relatability" geometric constraints on which contours get to group (Kellman & Shipley 1991). Also "tensor voting" (Ullman, Zucker, Mumford, Guy, Medioni, ...).
  • image p159fig04.36 Graffiti art by Banksy exploits properties of amodal boundary completion and spatial impenetrability.
    || p159c1h0.75 perceptual psychologist Nava Rubin "... When the wall is smooth, Banksy leaves the regions previously covered by stencil unpainted, relying on observers ..."
  • image p162fig04.38 How long-range cooperation among bipole cells and short-range competition by hypercomplex cells work together to generate the inverted-U in boundary strength that is found in the data of Figure 4.37 (right panel).
    || Cooperation and competition during grouping.
    few lines : wide spacing; inputs outside spatial range of competition, more inputs cause higher bipole activity
    more lines : narrower spacing; slightly weakens net input to bipoles from each inducer
    increasing line density : causes inhibition to reduce net total input to bipoles
  • image p163fig04.39 A schematic of the LAMINART model that explains key aspects of laminar visual cortical anatomy and dynamics. LGN -> V1 [6, 4, 2/3] -> V2 [6, 4, 2/3]
    || p163c1h0.6 "... The first article about laminar computing ... proposed how the laminar cortical model could process 2D pictures using bottom-up filtering and horizontal bipole grouping interactions (Grossberg, Mingolla, Ross 1997). In 1999, I was able to extend the model to also include top-down circuits for expectation and attention (Grossberg 1999)(right panel). Such a synthesis of laminar bottom-up, horizontal, and top-down circuits is characteristic of the cerebral cortex (left panel). I called it LAMINART because it began to show how properties of Adaptive Resonance Theory, or ART, notably the ART prediction about how top-down expectations and attention work, are realized by identical cortical cells and circuits. You can immediately see from the schematic laminar circuit diagram ... (right panel) that circuits in V2 seem to repeat circuits in V1, albeit with a larger spatial scale, despite the fact that V1 and V2 carry out different functions. How this anatomical similarity can coexist with functional diversity will be clarified in subsequent sections and chapters. It enables different kinds of biological intelligence to communicate seamlessly while carrying out their different psychological functions. ..."
  • image p165fig04.41 The Kanizsa-Minguzzi ring. See the text for details.
    || p165c1h0.6 "... (left panel), the annulus is divided by two line segments into annular sectors of unequal area. Careful viewing shows that the smaller sector looks a little brighter than the larger one. (Kanizsa, Minguzzi 1986) noted that "this unexpected effect is not easily explained. In fact, it cannot be accounted for by any simple psychological mechanism such as lateral inhibition or frequency filtering. Furthermore, it does not seem obvious to invoke organizational factors, like figural belongingness or figure-ground articulation."". p165c2h0.35 "... (Grossberg, Todorovic 1988). Our main claim is that the two radial lines play two roles, one in the formation of boundaries with which to contain the filling-in process, and the other as a source of feature contour signals that are filled-in within the annular regions to create a surface brightness percept. ..."
  • image p232fig05.40 Computer simulation of how the SMART model generates (a) gamma oscillations if a good enough match occurs, or (c) beta oscillations if a bad enough match occurs. See the text for details.
    || Brain oscillations during match/mismatch, data, simulation. (a) TD corticothalamic feedback increases synchrony (Sillito etal 1994) (b) Match increases γ oscillations (c) Mismatch increases θ,β oscillations
  • image p232fig05.41 (a)-(c). The sequence of interlaminar events that SMART predicts during a mismatch reset. (d) Some of the compatible neurophysiological data.
    || Mismatch causes layer 5 dendritic spikes that trigger reset. (a) Arousal causes increase in nonspecific thalamic nuclei firing rate and layer 5 dendritic and later somatic spikes (Larkum and Zhu 2002, Williams and Stuart 1999) (b) Layer 5 spikes reach layer 4 via layer 6i and inhibitory neurons (Lund and Boothe 1975, Gilbert and Wiesel 1979) (c) habituative neurotransmitters in layer 6i shift the balance of active cells in layer 4 (Grossberg 1972, 1976) (d) Dendritic stimulation fires layer 5 (Larkum and Zhu 2002) stimulation of apical dendrites of nonspecific thalamus
  • image p252fig06.01 A surface-shroud resonance begins to form when the surface representations of objects bid for spatial attention. In addition to these topographic excitatory inputs, there is long-range inhibition of the spatial attention cells that determines which inputs will attract spatial attention.
    || Bottom-up spatial attention competition. [more, less] luminous perceptual surfaces -> competition -> spatial attention
  • image p253fig06.02 After bottom-up surface inputs activate spatial attentional cells, they send top-down topographic excitatory signals back to the surface representations. This recurrent shunting on-center off-surround network contrast enhances larger attentional activities while approximately normalizing the total spatial attentional activity. A surface-shroud resonance hereby forms that selects an attentional shroud, enhances the perceived contrast of the attended surface (light blue region), and maintains spatial attention on it.
    || Surface-shroud resonance. perceptual surfaces -> competition -> spatial attention. (Carrasco, Penpeci-Talgar, and Eckstein 2000, Reynolds and Desimone 2003)
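The contrast enhancement plus normalization that selects a shroud can be illustrated with a generic Grossberg-style recurrent competitive field using a faster-than-linear signal function; this is a toy sketch with illustrative parameters, not the published ARTSCAN equations:

```python
def recurrent_field(x, A=0.0, B=1.0, dt=0.01, steps=20000):
    """Euler-integrate dx_i/dt = -A*x_i + (B - x_i)*f(x_i) - x_i*sum_{j!=i} f(x_j)
    with faster-than-linear signal f(w) = w^2.  The recurrent on-center
    off-surround contrast-enhances: the largest initial activity (the winning
    'shroud') is amplified toward B while the other activities are quenched."""
    x = list(x)
    for _ in range(steps):
        f = [v * v for v in x]
        total = sum(f)
        x = [v + dt * (-A * v + (B - v) * fv - v * (total - fv))
             for v, fv in zip(x, f)]
    return x
```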
  • image p257fig06.05 A curve tracing task with monkeys was used by Roelfsema, Lamme, and Spekreijse in 1998 to demonstrate how spatial attention can flow along object boundaries. See the text for details.
    || Attention flows along curves: Roelfsema etal 1998: Macaque V1. fixation (300ms) -> stimulus (600ms RF - target curve, distractor) -> saccade. Crossed-curve condition: attention flows across junction between smoothly connected curve segments, Gestalt good continuation
  • image p258fig06.06 Neurophysiological data and simulation of how attention can flow along a curve. See the text for details.
    || Simulation of Roelfsema etal 1998, data & simulation. Attention directed only to far end of curve. Propagates along active layer 2/3 grouping to distal neurons.
  • image p265fig06.13 The basal ganglia gate perceptual, cognitive, emotional, and more processes through parallel loops.
    || [motor, ocularmotor, dorsolateral, ventral-orbital, anterior cingulate] vs. [Thalamus, pallidum-subs, nigra, Striatum, Cortex]
  • image p267fig06.14 Feedback from object surfaces to object boundaries uses surface contours. This feedback assures complementary consistency and enables figure-ground separation. A corollary discharge of the surface contours can be used to compute salient object feature positions.
    || Perceptual consistency and figure-ground separation.
  • image p271fig06.17 Persistent activity in IT cells is just what is needed to enable view-invariant object category learning by ARTSCAN to be generalized to [view, position, size]-invariant category learning by positional ARTSCAN, or pARTSCAN. See the text for details.
    || Persistent activity in IT. Physiological data show that persistent activity exists in IT (Fuster and Jervey 1981, Miyashita and Chang 1988, Tomita etal 1999). Adapted from (Tomita etal 1999 Nature)
  • image p273fig06.20 pARTSCAN can simulate the IT cell recoding that Li and DiCarlo reported in their swapping experiments because the swapping procedure happens without causing a parietal reset burst to occur. Thus the originally activated invariant category remains activated and can get associated with the swapped object features.
    || Simulation of Li and DiCarlo swapping data. data (Li and DiCarlo 2008), model (Cao, Grossberg, Markowitz 2011). normalized response vs. exposure (swaps and/or hours)
  • image p274fig06.21 pARTSCAN can also simulate the trade-off in IT cell responses between position invariance and selectivity that was reported by Zoccolan etal 2007. This trade-off limits the amount of position invariance that can be learned by a cortical area like V1 that is constrained by the cortical magnification factor.
    || Trade-off in IT cell response properties. Inferotemporal cortex cells with greater position invariance respond less selectively to natural objects. invariance-tolerance, selectivity-sparseness. data (Zoccolan etal 2007) model (Grossberg, Markowitz, Cao 2011). position tolerance (PT, degrees) vs sparseness (S)
  • image p275fig06.23 Data from (Akrami etal 2009) and our simulation of it. See the text for details.
    || IT responses to image morphs. data vs model
  • image p284fig07.02 Psychophysical data (top row) and simulation (bottom row) of how persistence decreases with flash illuminance and duration.
    || Persistence data and simulations. (Francis, Grossberg, Mingolla 1994 Vision Research, 34, 1089-1104). Persistence decreases with flash illuminance and duration (Bowen, Pola, Matin 1974; Breitmeyer 1984; Coltheart 1980). Higher luminance or longer duration habituates the gated dipole ON channel more. Causes larger and faster rebound in the OFF channel to shut persisting ON activity off.
  • image p285fig07.03 Persistence decreases with flash illuminance and duration due to the way in which habituative transmitters regulate the strength of the rebound in response to offset of a stimulating input, and how this rebound inhibits previously activated bipole cells.
    || Persistence data and simulations (Francis, Grossberg, Mingolla 1994 Vision Research, 34, 1089-1104). Persistence decreases with flash illuminance and duration. Horizontal input excites a horizontal bipole cell, which supports persistence. Offset of the horizontal input causes a rebound of activity in the vertical pathway, which inhibits the horizontal bipole cell, thereby terminating persistence.
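The habituative-transmitter account of these persistence data can be sketched with the standard gated-dipole transmitter law; the rates, tonic arousal level, and step size below are illustrative, not the paper's parameters:

```python
def rebound_after_flash(J, T, tonic=0.5, a=0.1, b=1.0, dt=0.01):
    """Habituative transmitter law dz/dt = a*(1 - z) - b*S*z.
    Both channels start at the tonic equilibrium; during a flash of intensity J
    and duration T the ON channel gets S = tonic + J while the OFF channel
    keeps S = tonic.  At flash offset both channels again receive only the
    tonic input, so the OFF channel transiently wins by about
    tonic*(z_off - z_on): the antagonistic rebound that resets persistence."""
    z_on = z_off = a / (a + b * tonic)        # pre-flash equilibrium
    for _ in range(int(T / dt)):
        z_on  += dt * (a * (1.0 - z_on)  - b * (tonic + J) * z_on)
        z_off += dt * (a * (1.0 - z_off) - b * tonic * z_off)
    return tonic * (z_off - z_on)
```

Stronger or longer flashes habituate z_on more, producing a bigger rebound that shuts persisting ON activity off sooner, which is why persistence falls with illuminance and duration.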
  • image p286fig07.04 Illusory contours persist longer than real contours because real contours have more inducers whose rebound at contour offset can cause faster boundary reset. Illusory contours also take longer to form than real contours, which explains the increasing portion of the curve.
    || Persistence data and simulations (Meyer, Ming 1988; Reynolds 1981). Increasing portion of curve is due to formation time of the illusory contour. Longer persistence is due to fewer bottom-up inducers of an illusory contour that has the same length as a real contour: only illuminance-derived edges generate reset signals. When bottom-up inducers are inhibited by OFF cell rebounds, their offset gradually propagates to the center of the illusory contour.
  • image p286fig07.05 This figure shows the propagation through time of illusory contour offset from the rebounded cells that got direct inputs to the center of the contour.
    || Persistence data and simulations. Illusory contours persist longer than real contours (Meyer, Ming 1988; Reynolds 1981). When bottom-up inducers are inhibited by OFF cell rebounds, their offset gradually propagates to the center of the illusory contour.
  • image p287fig07.06 The relative durations of persistence that occur due to an adaptation stimulus of the same or orthogonal orientation follow from the properties of the habituative gated dipoles that are embedded in the boundary completion system.
    || Persistence data and simulations. Change in persistence depends on whether adaptation stimulus has same or orthogonal orientation as test grating (Meyer, Lawson, Cohen 1975). If adaptation stimulus and test stimulus have the same orientation, they cause cumulative habituation, which causes a stronger reset signal, hence less persistence. When they are orthogonal, the competition on the ON channel is less, hence more persistence.
  • image p287fig07.07 Persistence increases with distance between a target and a masking stimulus due to weakening of the spatial competition in the first competitive stage of hypercomplex cells.
    || Persistence data and simulations. Persistence increases with distance between a target and a masking stimulus (Farrell, Pavel, Sperling 1990). There is less spatial competition from the masker to the target when they are more distant, hence the target is more persistent.
  • image p297fig08.08 When a second flash is more intense than the first flash, then apparent motion may occur from the second to the first flash.
    || Delta motion: motions from the second to the first flash. Data: (Kolers 1972; Korte 1915). Simulation: (Grossberg, Rudd 1992). This occurs when the luminance or contrast of the second flash is large compared to that of the first flash. Sustained and transient cells obey shunting dynamics whose averaging rates speed up with output intensity. The first flash to wane is the one that will be the source of the G-wave.
  • image p297fig08.09 Simulation of motion in opposite directions that is perceived when two later flashes occur on either side of the first flash.
    || Split motion. Data: (H.R. Silva 1926), Simulation: (Grossberg, Rudd 1992)
  • image p298fig08.10 Simulation of the motion speed-up that is perceived when flash duration decreases.
    || "The less you see it, the faster it moves". Data: (Giaschi, Anstis 1989), Simulation: (Grossberg, Rudd 1992). ISI = 0, flash duration decreases; SOA = constant, flash duration decreases
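The continuous motion percept between discrete flashes (the Grossberg-Rudd G-wave) can be illustrated by tracking the maximum of a waning plus a waxing Gaussian; the spread and flash separation below are illustrative, chosen so the summed profile stays unimodal (separation <= 2*sigma):

```python
import math

def gwave_peak_path(L=10.0, sigma=6.0, steps=11):
    """Sum of a waning Gaussian centered at x=0 (first flash) and a waxing one
    at x=L (second flash).  As the weights cross over, the location of the
    maximum sweeps continuously from 0 to L: a toy G-wave."""
    xs = [i * 0.1 for i in range(int(L * 10) + 1)]
    path = []
    for s in range(steps):
        w = s / (steps - 1)              # waxing weight of the second flash
        def total(x):
            return ((1 - w) * math.exp(-x * x / (2 * sigma ** 2))
                    + w * math.exp(-(x - L) ** 2 / (2 * sigma ** 2)))
        path.append(max(xs, key=total))  # grid argmax of the summed profile
    return path
```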
  • image p304fig08.22 Data (top image) and simulation (bottom image) of Korte's laws of apparent motion.
  • image p311fig08.30 The data of (Castet etal 1993) in the left image was simulated in the right image by the 3D FORMOTION model that I developed with my PhD student Jonathan Chey. These data provide insight into how feature tracking signals propagate from the ends of a line to its interior, where they capture consistent motion directional signals and inhibit inconsistent ones.
    || Solving the aperture problem. A key design problem: How do amplified feature tracking signals propagate within depth to select the correct motion directions at ambiguous positions? This propagation from feature tracking signals to the line interior determines perceived speed in Castet etal data, which is why speed depends on line tilt and length. Data: (Castet etal 1993), Simulation: (Chey etal 1997)
  • image p319fig08.38 The neurophysiological data from MT (left image) confirms the prediction embodied in the simulation of MT (right image) concerning the fact that it takes a long time for MT to compute an object
  • image p333fig08.58 Neurophysiological data (left image) and simulation (right image) of LIP data during correct trials on the RT task. See the text for details.
    || LIP responses during RT task correct trials (Roitman, Shadlen 2002). More coherence in favored direction causes faster cell activation. More coherence in opposite direction causes faster cell inhibition. Coherence stops playing a role in the final stages of LIP firing.
  • image p334fig08.59 Neurophysiological data (left column) and simulations (right column) of LIP responses for the FD task during both [correct, error] trials. See the text for details.
    || LIP responses for the FD task during both [correct, error] trials (Shadlen, Newsome 2001). LIP encodes the perceptual decision regardless of the true direction of the dots. Predictiveness of LIP responses on error trials decreases with increasing coherence.
  • image p334fig08.60 Behavioral data (left image) and simulation (right image) about accuracy in both the RT and FD tasks. See text for details
    || Behavioral data: % correct vs % coherence (Mazurek etal 2003; Roitman, Shadlen 2002). More coherence in the motion causes more accurate decisions. RT task accuracy at weaker coherence levels is slightly better than FD task accuracy.
  • image p335fig08.61 Behavioral data (left image) and simulation (right image) about speed in correct and error trials of the RT task. See text for details.
    || Behavioral data: speed, correct and error trials (RT task) (Roitman, Shadlen 2002). More coherence in the motion causes faster reaction time.
  • image p335fig08.62 More remarkable simulation fits (right column) to LIP neurophysiology data (left column) about where and when to move the eyes.
    || LIP encodes not only where, but also when, to move the eyes. ... No Bayes (Roitman, Shadlen 2002). Firing rate (sp/s) vs time (ms). Slope of firing rate (sp/s^2) vs % correct.
  • image p342fig09.11 Psychophysical data (left panel) and computer simulation (right column) of the importance of efference copy in real movements. See the text for details.
    || Heading: move to wall and fixate stationary object (adapted from Warren, Hannon 1990). Inaccurate for simulated eye rotation, accurate for real eye rotation, need confirmation by efference copy!
  • image p343fig09.13 When one scans the three different types of pears in the left image, as illustrated by the jagged blue curve with red movement end positions, and transforms the resulting retinal images via the cortical magnification factor, or log polar mapping, the result is the series of images in the right column. How do our brains figure out from such confusing data which views belong to which pear?
    || View-invariant object learning and recognition. Three pears: Anjou, Bartlett, Comice. Which is the Bartlett pear? During unsupervised scanning and learning about the world, no one tells the brain what views belong to which objects while it learns view-invariant object categories. Cortical magnification in V1.
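The log polar transform invoked here is easy to state; a minimal sketch, where the foveal offset eps is an illustrative regularizer that avoids the singularity at zero eccentricity:

```python
import math

def log_polar(x, y, eps=0.3):
    """Map a retinal point (x, y) to cortical coordinates
    (log eccentricity, polar angle).  Equal steps in log r give the foveal
    overrepresentation of the cortical magnification factor: far from the
    fovea, pairs of points with the same eccentricity RATIO map to roughly
    the same cortical distance."""
    r = math.hypot(x, y)
    return math.log(eps + r), math.atan2(y, x)
```

This is why the three scanned pears produce the confusingly distorted image series in the right column: each fixation re-centers the map, so the same object part lands on very different cortical patterns from view to view.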
  • image p349fig09.20 Using virtual reality displays (left image), (Fajen, Warren 2003) collected data (right two images) about how observers avoid obstacles (open circular disks) as a function of their distance and angular position as they navigate towards a fixed goal (x). These data illustrate how goals act as attractors while obstacles act as repellers.
    || Steering from optic flow (Fajen, Warren 2003). goals are attractors, obstacles are repellers. Damped spring model explains human steering data.
  • image p352fig09.26 The final stage of the model computes a beautiful expansion optic flow that permits an easy estimate of the heading direction, with an accuracy that matches that of human navigators.
    || The model generates accurate heading (Warren, Hannon 1990; Royden, Crowell, Banks 1994). Maximally active MSTd cell = heading estimate. Accuracy matches human data. Random dots [mean +-1.5°, worst +-3.8°], Random dots with rotation [accurate with rotations <1°/s, rotation increases, error decreases], OpenGL & Yosemite benchmark +-1.5°, Driving video +-3°.
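The "maximally active MSTd cell = heading estimate" idea reduces, for pure observer translation, to finding the focus of expansion of the flow field; a least-squares sketch of that geometric fact (not the model's neural mechanism):

```python
def estimate_foe(points, flows):
    """For pure translation each optic-flow vector points away from the focus
    of expansion: v = k*(p - FOE).  Requiring v parallel to (p - FOE) gives,
    per sample, the linear constraint vy*fx - vx*fy = vy*px - vx*py, solved
    here for (fx, fy) by 2x2 normal equations."""
    s11 = s12 = s22 = r1 = r2 = 0.0
    for (px, py), (vx, vy) in zip(points, flows):
        a, b = vy, -vx                    # row of the constraint matrix
        rhs = vy * px - vx * py
        s11 += a * a; s12 += a * b; s22 += b * b
        r1 += a * rhs; r2 += b * rhs
    det = s11 * s22 - s12 * s12           # nonzero if flows span 2 directions
    return ((r1 * s22 - r2 * s12) / det, (r2 * s11 - r1 * s12) / det)
```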
  • image p356fig10.03 Laminar computing achieves at least three basic properties of visual processing that have analogs in all biologically intelligent behaviors. These properties may be found in all cortical circuits in specialized form.
    || What does Laminar Computing achieve? 1. Self-stabilizing development and learning; 2. Seamless fusion of a) pre-attentive automatic bottom-up processing, b) attentive task-selective top-down processing; 3. Analog coherence: Solution of Binding Problem for perceptual grouping without loss of analog sensitivity. Even the earliest visual cortical stages carry out active adaptive information processing: [learn, group, attention]ing
  • image p357fig10.04 Laminar Computing achieves its properties by computing in a new way that synthesizes the best properties of feedforward and feedback interactions, analog and digital computations, and preattentive and attentive learning. The property of analog coherence enables coherent groupings and decisions to form without losing sensitivity to the amount of evidence that supports them.
    || Laminar Computing: a new way to compute. 1. Feedforward and feedback: a) Fast feedforward processing when data are unambiguous (eg Thorpe etal), b) slower feedback chooses among ambiguous alternatives [self-normalizing property, real-time probability theory], c) A self-organizing system that trades certainty against speed: Goes beyond Bayesian models! 2. Analog and Digital: Analog Coherence combines the stability of digital with the sensitivity of analog. 3. Preattentive and Attentive Learning: Reconciles the differences of (eg) Helmholtz and Kanizsa, "A preattentive grouping is its own ..."
  • image p360fig10.09 Perceptual grouping is carried out in layer 2/3 by long-range horizontal excitatory recurrent connections, supplemented by short-range disynaptic inhibitory connections that together realize the bipole grouping properties that are diagrammed in Figure 10.10.
    || Grouping starts in layer 2/3. LGN-> 6-> 4-> 2/3: 1. Long-range horizontal excitation links collinear, coaxial receptive fields (Gilbert, Wiesel 1989; Bosking etal 1997; Schmidt etal 1997) 2. Short-range disynaptic inhibition of target pyramidal via pool of interneurons (Hirsch, Gilbert 1991) 3. Unambiguous groupings can form and generate feedforward outputs quickly (Thorpe etal 1996).
  • image p361fig10.10 Bipole grouping is achieved by long-range horizontal recurrent connections that also give rise to short-range inhibitory interneurons which inhibit nearby bipole cells as well as each other.
    || Bipole property controls perceptual grouping. Collinear input on both sides. Excitatory inputs summate. Inhibitory inputs normalize, Shunting inhibition! Two-against-one. Cell is excited.
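The "two-against-one" bipole rule in this figure can be caricatured in a few lines; the weights and threshold are illustrative, standing in for summating excitation and normalizing shunting inhibition:

```python
def bipole_response(left, right, thresh=0.9):
    """Toy bipole rule: excitation from the two half-fields summates, while
    each active side also drives inhibitory interneurons that normalize the
    total.  One side alone ('one-against-one') stays below threshold; input
    on both sides ('two-against-one') exceeds it, so the cell fires only
    when collinear evidence flanks it on both sides."""
    excite = left + right
    inhibit = 0.5 * (left + right)   # normalizing inhibitory interneurons
    net = excite - inhibit
    return net if net >= thresh else 0.0

# one inducer:  net = 1.0 - 0.5 = 0.5 < thresh -> silent
# two inducers: net = 2.0 - 1.0 = 1.0 >= thresh -> fires
```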
  • image p367fig10.15 Data (left column) and simulation (right column) of how attention prevents a masking stimulus from inhibiting the response to the on-center of the cell from which the recording was made.
    || Attention protects target from masking stimulus (Reynolds etal 1999; Grossberg, Raizada 2000).
  • image p367fig10.16 Neurophysiological data (left image) and simulation (right image) of how a low-contrast target can be facilitated if it is surrounded by a pair of collinear flankers, and suppressed by them if it has high contrast.
    || Flankers can enhance or suppress targets (Polat etal 1998; Grossberg, Raizada 2000). target alone, target + flankers, flankers alone.
  • image p368fig10.17 Neurophysiological data (left image) and simulation (right image) showing that attention has a greater effect on low contrast than high contrast targets.
    || Attention has greater effect on low contrast targets (DeWeerd etal 1999; Raizada, Grossberg 2001). Threshold increase (deg) vs Grating contrast (%), [no, with] attention
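The greater attentional effect at low contrast follows from any saturating shunting/normalization contrast-response function in which attention multiplies the effective input; a sketch with illustrative constants:

```python
def v_response(contrast, attention=0.0, sigma=0.2, rmax=1.0):
    """Normalization sketch: R = rmax * g*c / (sigma + g*c), with attentional
    gain g = 1 + attention scaling the effective input.  Because the response
    saturates at rmax, the same gain boosts low-contrast targets far more
    than high-contrast ones, matching the DeWeerd etal 1999 pattern."""
    drive = (1.0 + attention) * contrast
    return rmax * drive / (sigma + drive)

low_boost  = v_response(0.05, 0.5) - v_response(0.05)  # large attentional effect
high_boost = v_response(0.9, 0.5) - v_response(0.9)    # small attentional effect
```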
  • image p368fig10.18 Neurophysiological data (left image) and simulation (right image) of relative on-cell activities when the input to that cell may also be surrounded by iso-orientation or perpendicular textures.
    || Texture reduces response to a bar: iso-orientation suppression (Knierim, van Essen 1992), perpendicular suppression (Raizada, Grossberg 2001)
  • image p369fig10.19 Data from (Watanabe etal 2001) showing perceptual learning of the coherent motion direction, despite the lack of extra-foveal attention and awareness of the moving stimuli.
    || Unconscious perceptual learning of motion direction, % correct for two tests, compared to chance level results.
  • image p393fig11.31 (Todd, Akerstrom 1987) created a series of 2D images from discrete black patches on a white disk and showed how the perceived depth varies with the factors summarized in the figure. The LIGHTSHAFT model quantitatively simulated their data.
    || Factors determining depth-from-texture percept. Perceived depth varies with texture element width, but only when elements are elongated and sufficiently aligned with one another to form long-range groupings. Data of (Todd, Akerstrom 1987) simulated by the LIGHTSHAFT model of (Grossberg, Kuhlmann 2007). [HP, LP, CCE, CCS, RO]
  • image p399fig11.39 Simulation of the eye rivalry data of (Lee, Blake 1999).
    || [Binocular, [left, right] eye] activity
  • image p402fig11.43 A pair of disparate images of a scene from the University of Tsukuba. Multiview image database.
    || input [left, right]
  • image p407fig12.03 Neurophysiological data showing how motor cortical cells code different vectors that are sensitive to both the direction of the commanded movement and its length.
    || (a) Single primary motor cortex neuron, onset of movement -> on..., radial architecture... (b) Motor cortex neuronal population, radial architecture...
  • image p409fig12.04 (top half) Neurophysiological data of vector cell responses in motor cortex. (bottom half) VITE model simulations of a simple movement in which the model
  • image p414fig12.11 Neurophysiological data from cortical areas 4 and 5 (every other column) and simulations thereof (other columns) during a reach.
    || activation vs time. (a) area 4 phasic RT (IFV) (b) area 4 tonic (OPV) (c) area 4 phasic-tonic (OFPV) (d) area 4 phasic MT (DVV) (e) area 5 phasic (DV) (f) area 5 tonic (PPV)
  • image p424fig12.21 One of the many types of data about pitch processing that are simulated by the SPINET model. See the text for details.
    || Pitch shifts with component shifts (Patterson, Wightman 1976; Schouten 1962). Pitch vs lowest harmonic number.
  • image p425fig12.23 ARTSTREAM simulations of the auditory continuity illusion and other streaming properties (left column, top row). When two tones are separated by silence (Input), a percept of silence also separates them in a spectral-pitch resonance. (left column, bottom row). When two tones are separated by broadband noise, the percept of tone continues through the noise in one stream (stream 1) while the remainder of the noise occurs in a different stream (stream 2). (right column) Some of the other streaming properties that have been simulated by the ARTSTREAM model.
    || Auditory continuity does not occur without noise. Auditory continuity in noise. Other simulated streaming data.
  • image p428fig12.25 (left architecture) Auditory-articulatory feedback loop whereby babbled sounds activate learning in an imitative map that is later used to learn to reproduce the sounds of other speakers. An articulatory-to-auditory expectation renders learning possible by making the auditory and motor data dimensionally consistent, as in the motor theory of speech. (right architecture) Parallel streams in the ARTSPEECH model for learning speaker-independent speech and language meaning, including a mechanism for speaker normalization (right cortical stream) and for learning speaker-dependent vocalic qualities (left cortical stream).
    || left: Speaker-dependent vocalic qualities; right: Speaker-independent speech and language meaning
  • image p432fig12.28 (left image) The SpaN model simulates how spatial representations of numerical quantities are generated in the parietal cortex. (right image) Behavioral numerosity data and SpaN model simulations of them.
    || (Left) preprocessor-> spatial number map-> Comparison wave. (Right) data axis: number of lever presses; model axis: node position in the spatial number axis
  • image p437fig12.32 Data from a free recall experiment illustrate the bowed serial position curve.
    || Serial position function for free recall Data: (Murdock 1962 JEP 64, 482-488). % correct vs position of word on a 40-word list. Primacy gradient can be a mixture of STM and LTM read-out.
  • image p437fig12.33 Item and Order working memory models explain free recall data, as well as many other psychological and neurobiological data, by simulating how temporal series of events are stored as evolving spatial patterns of activity at content-addressable item categories. The categories with the largest activities are rehearsed first, and self-inhibit their activity as they do so in order to prevent them from being rehearsed perseveratively. The laws whereby the items are stored in working memory obey basic design principles concerning how list categories, or chunks, of sequences of stored items can be stably remembered.
    || Working memory models: item and order, or competitive queuing (Grossberg 1978; Houghton 1990; Page, Norris 1998). Event sequence in time stored as an evolving spatial pattern of activity. Primacy gradient of working memory activation stores correct temporal order at content-addressable cells. The maximally activated cell population is performed next when a rehearsal wave is turned on. Output signal from chosen cell population inhibits its own activity to prevent perseveration: inhibition of return. Iterate until entire sequence is performed.
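  • The competitive queuing cycle just described (store a primacy gradient, perform the peak item, self-inhibit, iterate) can be sketched in a few lines of Python. This is a hypothetical minimal illustration with an assumed decay parameter, not the published LIST PARSE equations:

```python
# Item-and-Order working memory sketch: distinct items are stored with a
# primacy gradient of activities; recall repeatedly performs the most active
# item and then self-inhibits it (inhibition of return).

def store_primacy_gradient(items, decay=0.8):
    """Each successive item gets a smaller activity: a primacy gradient."""
    return {item: decay ** i for i, item in enumerate(items)}

def rehearse(working_memory):
    """Rehearsal wave: perform the peak item, self-inhibit it, iterate."""
    wm = dict(working_memory)
    recalled = []
    while wm:
        winner = max(wm, key=wm.get)   # most active item is performed next
        recalled.append(winner)
        del wm[winner]                 # self-inhibition prevents perseveration
    return recalled

# A stored sequence is recalled in its original temporal order.
assert rehearse(store_primacy_gradient(["A", "B", "C", "D"])) == ["A", "B", "C", "D"]
```

Noise added to the stored activities would produce exactly the neighboring-item transposition errors that LIST PARSE uses to fit serial recall data (Figure 12.44).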
  • image p443fig12.41 Neurophysiological data from the Averbeck etal sequential copying experiments show the predicted primacy gradient in working memory and the self-inhibition of activity as an item is stored. When only the last item remains stored, it has the highest activity because it has been freed from inhibition by earlier items.
    || Neurophysiology of sequential copying
  • image p444fig12.42 The LIST PARSE laminar cortical model of working memory and list chunking that I published with Lance Pearson in 2008 simulated the Averbeck etal data in Figure 12.41, as in the left column of the figure. It also simulated cognitive data about working memory storage by human subjects. See the text for details.
    || LIST PARSE: Laminar cortical model of working memory and list chunking (Grossberg, Pearson 2008). Simulates data about: [immediate, delayed, continuous] distractor free recall; immediate serial recall; and variable-speed sequential performance of motor acts. [velocity, acceleration] vs time (ms) from recall cue.
  • image p446fig12.44 (left column, top row) LIST PARSE can model linguistic data from human subjects. In this figure, model parameters are fixed to enable a close fit to data about error-type distributions in immediate free recall experiments, notably transposition errors. (right column, top row) Simulation and data showing bowing of the serial position curve, including an extended primacy gradient. (left column, bottom row) The simulation curve overlays data about list length effects, notably the increasing recall difficulty of longer lists during immediate serial recall (ISR). (right column, bottom row) Simulation (bottom image) and data (top image) of the limited temporal extent for recall.
    || (1. TL) Error-type distributions in immediate serial recall (Hanson etal 1996). % occurrence vs serial position. Graph convention: Data- dashed lines; Simulations- solid lines. Six letter visual ISR. Order errors- transpositions of neighboring items are the most common. Model explanation: Noisy activation levels change relative order in primacy gradient. Similar activation of neighboring items most susceptible to noise. Model parameters fitted on these data. (2. TR) Bowing of serial position curve (Cowan etal 1999). % correct vs serial position. Auditory ISR with various list lengths (graphs shifted rightward): For [, sub-]span lists- extended primacy, with one (or two) item recency; Auditory presentation- enhanced performance for last items. LIST PARSE: End effects- first and last items have half as many members; Echoic memory- last presented item retained in separate store. (3. BL) List length effects, circles (Crannell, Parrish 1968), squares (Baddeley, Hitch 1975), solid line- simulation. % list correct vs list length. Variable list length ISR: longer lists are more difficult to recall. LIST PARSE: More items- closer activation levels and lower absolute activity level with enough inputs; Noise is more likely to produce order errors; Activity levels more likely to drop below threshold. (4. BR) Limited temporal extent for recall (Murdock 1961). % recalled vs retention interval (s). ISR task with distractor-filled retention intervals (to prevent rehearsal): Increasing retention interval- decreases probability of recalling list correctly; Load dependence- longer lists more affected by delays; Performance plateau- subjects reach apparent asymptote. LIST PARSE: Increased convergence of activities with time; loss of order information.
  • image p447fig12.45 (left column) LIST PARSE simulations of the proportion of order errors as a function of serial position for 6 item lists with (a) an extended pause of 7 time units between the third and fourth items, and (b) pauses of 5 time units (solid curve) and 10 time units (dashed curve) between all items. (right column) Simulations (solid curves) and data (dashed curves) illustrating close model fits in various immediate free recall tasks.
    || (Left) Temporal grouping and presentation variability. Temporal grouping: Inserting an extended pause leads to inter-group bowing; Significantly different times of integration and activity levels across pause, fewer interchanges. (Right) Immediate free recall, and [delayed, continuous] distractor free recall. Overt rehearsal IFR task with super-span (ie 20 item) lists: Extended recency- even more extended with shorter ISIs; Increased probability of recall with diminished time from last rehearsal; Early items in list rehearsed most. LIST PARSE (unique) for long lists: Incoming items form recency gradient; Rehearsal (re-presentation) based upon level of activity.
  • image p452fig12.48 (left column) In experiments of (Repp etal 1978), the silence duration between the words GRAY and SHIP was varied, as was the duration of the fricative noise in S, with surprising results. (right column) The red arrow directs our attention to surprising perceptual changes as silence and noise durations increase. See the text for details.
    || Perceptual integration of acoustic cues, data (Repp etal 1978). GRAY-> silence duration-> SHIP (noise duration from start of word). Noise duration vs silence duration: GRAY SHIP <-> [GREAT SHIP <-> GRAY CHIP] <-> GREAT CHIP.
  • image p453fig12.49 The ARTWORD model that I published in 2000 with my PhD student Christopher Myers simulates data such as the (Repp etal 1978) data in Figure 12.48. See the text for details.
    || ARTWORD model (Grossberg, Myers 2000). Input phonetic features-> Phonemic item working memory-> Masking Field unitized lists-> Automatic gain control-> Phonemic item working memory. [habituative gate, adaptive filter]s.
  • image p465fig12.63 Neurophysiological data (left image) and lisTELOS simulation (right image) showing how microstimulation biases saccadic performance order but not the positions to which the saccades will be directed. See the text for details.
    || Saccade trajectories converge to a single location in space. Microstimulation biased selection so saccade trajectories converged toward a single location in space. [Data, model] contra <-> Ipsi (msec)
  • image p468fig12.65 Linguistic properties of the PHONET model and some of the data that it simulates. The upper left image summarizes the asymmetric transient-to-sustained gain control that helps to create invariant intraword ratios during variable-rate speech. The lower left image summarizes the rate-dependent gain control of the ARTPHONE model that creates rate-invariant working memory representations in response to sequences of variable-rate speech. The right image summarizes the kind of paradoxical VC-CV category boundary data of (Repp 1980) that ARTPHONE simulates. See the text for details.
    || (left upper) [transient, sustained] [working memory, filter, category]. (left lower) phone inputs-> [input rate estimate, features], Features w <- habituative transmitter gates -> categories-> rate invariant phonetic output, input rate estimate-> gain control-> [features, categories] rate-dependent integration of categories and features. (right) % 2-stop vs VC-CV silent interval (msec): [ib-ga, ib-ba, iga, iba].
  • image p469fig12.66 (left column) A schematic of how preserving relative duration, as in the first and third images, of consonant and vowel pairs can preserve a percept, in this case of /ba/, but not doing so, as in the first and second images, can cause a change in percept, as from /ba/ to /wa/, as in the data of (Miller, Liberman 1979) that PHONET simulates. (right column) Changing frequency extent can also cause a /ba/ - /wa/ transition, as shown in data of (Schwab, Sawusch, Nusbaum 1981) that PHONET also simulates.
    || (left image) Maintaining relative duration as speech speeds up preserves percept (Miller, Liberman 1979). frequency vs time- [/ba/, /wa/, /ba/] (right image) Changing frequency extent causes /ba/-/wa/ transition (Schwab, Sawusch, Nusbaum 1981). frequency vs time- [/ba/, /wa/] Dt extent.
  • image p473fig12.69 Error rate and mean reaction time (RT) data from the lexical decision experiments of (Schvaneveldt, McDonald 1981). ART Matching Rule properties explain these data in (Grossberg, Stone 1986).
    || (left) Error rate vs type of prime [R, N, U], [non,] word. (right) Mean RT (msec) vs type of prime [R, N, U], [non,] word.
  • image p474fig12.70 The kind of model macrocircuit that was used in (Grossberg, Stone 1986) to explain lexical decision task data.
    || inputs-> A1 <-> A2 iconic sensory features <-> A3 item and order in sensory STM <-> A4 list parsing in STM (masking field) <-> A5 semantic network (self-feedback). [A4, A5] <-> V* visual object recognition system. M1-> [outputs, A1]. M1 <-> M2 iconic motor features <-> M3 item and order in motor STM. A2-> M2. A3-> M3.
  • image p476fig12.71 Word frequency data of (Underwood, Freund 1970) that were explained in (Grossberg, Stone 1986).
    || percent errors vs frequency of old words [L-H to H-H, L-L to H-L].
  • image p484fig13.04 The top-down feedback from the orbitofrontal cortex closes a feedback loop that supports a cognitive-emotional resonance. If this resonance can be sustained long enough, it enables us to have feelings at the same time that we experience the categories that caused them.
    || Cognitive-Emotional resonance. Basis of "core consciousness" and "the feeling of what happens". (Damasio 1999) derives heuristic version of CogEM model from his clinical data. Drive-> amygdala-> prefrontal cortex-> sensory cortex, resonance around the latter 3. How is this resonance maintained long enough to become conscious?
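  • The maintenance question can be illustrated with a toy positive-feedback loop (assumed parameters and signal function; not the published CogEM equations): a sensory category node S and an orbitofrontal node O excite each other through a drive-gated pathway, so activity outlasts the input only when the drive gate is open:

```python
def f(x, gain=4.0, thresh=0.2):
    """Threshold-linear feedback signal (assumed form)."""
    return gain * max(x - thresh, 0.0)

def simulate(drive, t_end=10.0, dt=0.01):
    """Euler-integrate a two-node shunting feedback loop; return final S."""
    s = o = 0.0
    t = 0.0
    while t < t_end:
        inp = 1.0 if t < 1.0 else 0.0        # brief sensory input
        ds = -s + (1.0 - s) * (inp + f(o))   # category excited by input + feedback
        do = -o + (1.0 - o) * f(s) * drive   # orbitofrontal node gated by drive
        s += dt * ds
        o += dt * do
        t += dt
    return s

assert simulate(drive=1.0) > 0.5    # resonance persists long after input offset
assert simulate(drive=0.0) < 0.05   # without the drive gate, activity collapses
```

With the gate open, the loop settles into a self-sustaining high-activity state; this is the kind of persistence that would give the resonance time to become conscious.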
  • image p485fig13.06 (left column) An inverted-U occurs in conditioned reinforcer strength as a function of the ISI between the CS and the US. Why is learning attenuated at 0 ISI? (right column) Some classical conditioning data that illustrate the inverted-U in conditioning as a function of the ISI.
    || InterStimulus Interval (ISI) effect. Data from (Smith etal 1969; Schneiderman, Gormezano 1964).
  • image p490fig13.15 The CogEM circuit is an ancient design that is found even in mollusks like Aplysia. See the text for details.
    || Aplysia (Buonomano, Baxter, Byrne, Neural Networks 1990; Grossberg, Behavioral and Brain Sciences 1983). Facilitator neuron ~ drive representation.
  • image p504fig13.31 Behavioral contrast can occur during reinforcement learning due to decreases in either positive or negative reinforcers. See Figure 13.32 for illustrative operant conditioning data.
    || Behavioral contrast: rebounds! Shock level vs trials. 1. A sudden decrease in frequency or amount of food can act as a negative reinforcer: Frustration. 2. A sudden decrease in frequency or amount of shock can act as a positive reinforcer: Relief.
  • image p523fig14.03 (a) The MOTIVATOR neural model generalizes CogEM by also including the basal ganglia. It can hereby explain and simulate complementary functions of the amygdala and basal ganglia (SNc) during conditioning and learned performance. The basal ganglia generate Now Print signals in response to unexpected rewards. These signals modulate learning of new associations in many brain regions. The amygdala supports motivated attention to trigger actions that are expected to occur in response to conditioned or unconditioned stimuli. Object Categories represent visual or gustatory inputs in anterior inferotemporal (ITA) and rhinal (RHIN) cortices, respectively. Value Categories represent the value of anticipated outcomes on the basis of hunger and satiety inputs, in amygdala (AMYG) and lateral hypothalamus (LH). Object-Value Categories resolve the value of competing perceptual stimuli in medial (MORB) and lateral (ORB) orbitofrontal cortex. The Reward Expectation Filter detects the omission or delivery of rewards using a circuit that spans ventral striatum (VS), ventral pallidum (VP), striosomal delay (SD) cells in the ventral striatum, the pedunculopontine nucleus (PPTN) and midbrain dopaminergic neurons of the substantia nigra pars compacta/ventral tegmental area (SNc/VTA). The circuit that processes CS-related visual information (ITA, AMYG, ORB) operates in parallel with a circuit that processes US-related visual and gustatory information (RHIN, AMYG, MORB). (b) Reciprocal adaptive connections between hypothalamus and amygdala enable amygdala cells to become learned value categories. The bottom region represents hypothalamic cells, which receive converging taste and metabolite inputs whereby they become taste-drive cells. Bottom-up signals from activity patterns across these cells activate competing value categories, or US Value Representations, in the amygdala. 
A winning value category learns to respond selectively to specific combinations of taste-drive activity patterns and sends adaptive top-down priming signals back to the taste-drive cells that activated it. CS-activated conditioned reinforcer signals are also associatively linked to value categories. Adaptive connections end in (approximately) hemidiscs. See the text for details.
    ||
  • image p533fig14.09 Search data and ARTSCENE Search simulations of them in each pair of images from (A) to (F). See the text for details.
    || 6*[data vs simulation], [Response time (ms) versus epoch].
  • image p542fig15.04 Conditioning data from (Smith 1968; Millenson etal 1977). The former shows the kind of Weber Law and inverted U that were simulated in Figure 15.3. The latter shows that, if there are two ISIs during an experiment, then the animals learn to adaptively time their responses with two properly scaled Weber laws.
    || (left) One ISI (Smith 1968) [mean membrane extension (mm) versus time after CS onset (msec)]. (right) Two ISIs (Millenson etal 1977) [200, 100] msec CS test trials, [mean momentary CS amplitude (mm) vs time after CS onset (msec)]. (bottom) Conditioned eye blinks, made with nictitating membrane and/or eyelid, are adaptively timed: peak closure occurs at expected time(s) of arrival of the US following the CS and obeys a Weber Law.
  • image p543fig15.05 Simulation of conditioning with two ISIs that generate their own Weber Laws, as in the data shown in Figure 15.4.
    || Learning with two ISIs: simulation: R = sum[all i: f(xi)*yi*zi] vs msec. Each peak obeys a Weber Law! Strong evidence for spectral learning.
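  • A toy numerical illustration of the spectral timing idea, assuming Gaussian-shaped component activations as a stand-in for the model's gated signals f(xi)*yi, with each component's width proportional to its peak delay (the source of the Weber Law); all parameter values are assumed:

```python
import numpy as np

taus = np.linspace(20.0, 1200.0, 300)   # peak delays of the timing spectrum (ms)
sig = 0.15 * taus                       # width grows with peak delay: Weber Law

def spectrum(t):
    """Activity of every spectral component at time t after CS onset."""
    return np.exp(-((t - taus) ** 2) / (2.0 * sig ** 2)) / sig

# Conditioning with two ISIs strengthens the components that are active at
# each US arrival time, so the adaptive weights z sample the spectrum twice.
z = np.zeros_like(taus)
for isi in (200.0, 700.0):
    z += spectrum(isi)

# The summed output R(t) develops a peak near each trained ISI, and each
# peak is broader in proportion to its delay: two properly scaled Weber Laws.
t_axis = np.arange(0.0, 1500.0, 1.0)
R = np.array([spectrum(t) @ z for t in t_axis])
```

Plotting R against t_axis reproduces the qualitative shape of the two-ISI nictitating membrane data: one sharp peak near 200 ms and one broader peak near 700 ms.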
  • image p556fig15.24 (a) Data showing normally timed responding (solid curve) and short latency responses after lesioning cerebellar cortex (dashed curve). (b) computer simulation of short latency response after ablation of model cerebellar cortex.
    ||
  • image p559fig15.28 Neurophysiological data (left column) and model simulations (right column) of SNc responses. See the text for details.
    || membrane potential vs time
  • image p573fig16.01 The experimental chamber (A) and neurophysiological recordings from a rat hippocampus (B) that led to the discovery of place cells. See the text for details.
    ||
  • image p574fig16.02 Neurophysiological recordings of 18 different place cell receptive fields. See the text for details.
    ||
  • image p575fig16.03 As a rat navigates in its experimental chamber (black curves), neurophysiological recordings disclose the firing patterns (in red) of (a) a hippocampal place cell and (b) an entorhinal grid cell.
    ||
  • image p582fig16.08 Some experimental evidence for stripe-like cell receptive fields has been reported. The band cells posited by Neil Burgess also exhibit the one-dimensional firing symmetry of stripe cells, but are modeled by oscillatory interference. See the text for details.
    || Evidence for stripe-like cells. Entorhinal cortex data (Sargolini, Fyhn, Hafting, McNaughton, Witter, Moser, Moser 2006; Krupic, Burgess, O'Keefe 2012).
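  • One way to see how stripe-like cells could support hexagonal grid fields: sum three periodic stripe fields whose preferred directions differ by 60 degrees, then rectify. This is only an illustrative construction with assumed spacing and zero phases; the book's GridPlaceMap model instead learns grid cells from stripe cell inputs via self-organizing map dynamics:

```python
import numpy as np

spacing = 40.0                             # stripe spacing in cm (assumed)
angles = [0.0, np.pi / 3, 2 * np.pi / 3]   # three directions, 60 degrees apart

def grid_field(x, y):
    """Rectified sum of three cosine stripe fields: a hexagonal grid field."""
    total = sum(np.cos(2.0 * np.pi * (x * np.cos(a) + y * np.sin(a)) / spacing)
                for a in angles)
    return np.maximum(total, 0.0)          # half-wave rectification

# Evaluate over a 120 cm x 120 cm arena: the firing peaks lie on a
# triangular (hexagonal) lattice, as in grid cell rate maps.
xs = np.linspace(-60.0, 60.0, 121)
X, Y = np.meshgrid(xs, xs)
F = grid_field(X, Y)
```

The peaks sit at the origin and at lattice points such as (spacing, -spacing/sqrt(3)), i.e. at a spacing of 2/sqrt(3) times the stripe spacing, which is why grid spacing covaries with stripe (scale) spacing in the model.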
  • image p589fig16.19 Neurophysiological data showing the smaller dorsal grid cell scales and the larger ventral grid cell scales.
    || Spatial scale of grid cells increases along the MEC dorsoventral axis (Hafting etal 2005; Sargolini etal 2006; Brun etal 2008). [dorsal (left), ventral (right)] cart [rate map, autocorrelogram]. How does the spatial scale increase along the MEC dorsoventral axis?
  • image p593fig16.26 Data (left column) and simulations (right column) of the gradient of increasing grid cell spacing along the dorsoventral axis of MEC.
    || Gradient of grid spacing along dorsoventral axis of MEC (Brun etal 2008). data-[Distance (m?), Median grid spacing (m?)] simulations-[Grid spacing (cm), Grid spacing (cm)] vs response rate.
  • image p594fig16.27 Data (left column) and simulations (right column) of the gradient of increasing grid cell field width along the dorsoventral axis of MEC.
    || Gradient of field width along dorsoventral axis of MEC (Brun etal 2008). data-[Distance (m?), Width autocorr peak (m?)] simulations-[Grid field width (cm), Width autocorr peak (cm)] vs response rate.
  • image p595fig16.28 Data (left column) and simulations (right column) about peak and mean grid cell response rates along the dorsoventral axis of MEC.
    || Peak and mean rates at different locations along DV axis of MEC (Brun etal 2008). Peak rate (Hz) vs [data- DV quarter, simulations- Response rate].
  • image p596fig16.29 Data (top row) and simulations (bottom row) showing decreasing frequency of subthreshold membrane potential oscillations along the DV axis of MEC.
    || Subthreshold membrane potential oscillations at different locations along DV axis of MEC (Giocomo etal 2020; Yoshida etal 2011). Data: [oscillations (Hz) vs distance from dorsal surface (mm) @[-50, -45] mV; Frequency (Hz) vs [-58, -54, -50] mV]. Simulations: MPO frequency (Hz) vs [response, habituation] rate.
  • image p596fig16.30 Data (top row) and simulations (bottom row) of spatial phases of learned grid and place cells.
    || Spatial phases of learned grid and place cells (Hafting etal 2005). Data: Cross-correlogram of rate maps of two grid cells; Distribution of phase difference: distance from origin to nearest peak in cross-correlogram. Simulations: Grid cell histogram of spatial correlation coefficients; Place cell histogram of spatial correlation coefficients.
  • image p597fig16.31 Data (a) and simulations (b-d) about multimodal place cell receptive fields in large spaces. The simulations are the result of learned place fields.
    || Multimodal place cell firing in large spaces (Fenton etal 2008; Henriksen etal 2010; Park etal 2011). Number of cells (%) vs Number of place fields. [2, 3] place fields, 100*100 cm space.
  • image p597fig16.32 Data (top row) and simulations (bottom row) about grid cell development in juvenile rats. Grid score increases (a-b and d), whereas grid spacing remains fairly flat (c and e).
    || Model fits data about grid cell development (Wills etal 2010; Langston etal 2010). Data: [Gridness, grid score, inter-field distance (cm)]. Simulations: [Gridness score, Grid spacing (cm)] vs trial.
  • image p598fig16.33 Data (top row) and simulations (bottom row) of changes in place cell properties in juvenile rats, notably about spatial information (a,c) and inter-trial stability (b,d).
    || Model fits data about grid cell development (Wills etal 2010). [Data, Simulation] vs [spatial information, inter-trial stability]. x-axis [age (postnatal day), trial].
  • image p599fig16.35 Data (a) and simulations (b,c) about anatomically overlapping grid cell modules. (a) shows the anatomical distribution of grid cells belonging to different modules in one animal. DV location (mm) vs postrhinal border. (b) shows the simulated distribution of learned grid cell spacings from two stripe cell scales. frequency (%) vs grid spacing (cm). mu = [1, 0.6]. (c) shows what happens when half the cells respond with one rate and half another rate. (d) shows the same with three rates. (e-g) show spatial maps and autocorrelograms of grid cells that arise from the different rates in (d). [rate map, autocorrelogram] vs [score [1.07, 0.5, 0.67], spacing (cm) [23.58, 41, 63.64]].
    ||
  • image p602fig16.37 Data showing the effect of hippocampal inactivation by muscimol on grid cell firing before, during, and six hours after the muscimol, reading from left to right.
    || Hippocampal inactivation disrupts grid cells (Bonnevie etal 2013). muscimol inactivation. spikes on trajectory: [before, after min [6-20, 20-40, 40-60, 6h]]. rate map (Hz) [18.6, 11.4, 9.5, 6.7, 10.8]. spatial autocorrelogram g=[1.12, 0.05, -0.34, 0.09, 1.27].
  • image p603fig16.38 Role of hippocampal feedback in maintaining grid fields. (a) Data showing the effect of hippocampal inactivation before and during muscimol inhibition of hippocampal cells, as in Figure 16.37. (b) Model simulation with normal grid fields. (c) Model simulation that emulates the effect of hippocampal inhibition on grid fields.
    || (a) Data: hippocampal inactivation [before, after] cart [spikes on trajectory (p: [18.6, 6.7] Hz), spatial autocorrelogram (g= [1.12, 0.09])]. (b) Model: noise-free path integration, [spikes on trajectory (p: 14.56 Hz), rate map, spatial autocorrelogram (g= 1.41), dynamic autocorrelogram (g=0.6)]. (c) Model: noisy path integration + non-specific tonic inhibition, [spikes on trajectory (p: 11.33 Hz), rate map, spatial autocorrelogram (g= 0.05), dynamic autocorrelogram (g=0.047)].
  • image p605fig16.39 Data showing effects of medial septum (MS) inactivation on grid cells and network theta oscillations in medial entorhinal cortex (MEC). (A) Examples of disruption in the spatial expression of the hexagonal grid structure for two grid cells (Brandon etal 2011). (B) Temporary reduction in the power and frequency of network theta oscillations (Koenig etal 2011). (C) Temporary reduction in the gridness score, mean firing rate, and spatial stability of grid cells (Koenig etal 2011).
    || Disruptive effects of Medial Septum inactivation in Medial Entorhinal Cortex (Brandon etal 2011; Koenig etal 2011). (A) Rate map [rate map, spatial autocorrelations, trajectory] vs [baseline, sub-sampled, medial septum inactivation, 3-6 hour recovery, 24 hour recovery], [rate map (Hz- m, p), spatial autocorrelations (gridness)][ 1.2, 7.2, 1.1; 0.25, 1.7, 0.6; 0.25, 2.5, -0.53; 0.7, 5.1, 0.55; 1.0, 5.3, 1.3; 2.1, 15, 0.19; 1.7, 12, 0.71; 1.7, 3.2, -0.22; 1.8, 9.1, 0.68; 2.5, 13, 0.46]. (B) [normalized power at 7-9 Hz, frequency (Hz)] vs 5-minute periods. (C) [mean gridness score (+-SEM), mean firing rate (% of baseline), mean correlation coeff (+-SEM)] vs 10-minute periods.
  • image p607fig16.40 Effects of medial septum (MS) inactivation on grid cells. (a) Each row shows data and different data-derived measures of grid cell responsiveness, starting from the left with the baseline response to the middle column with maximal inhibition. (b) Data showing the temporary reduction in the gridness scores during MS inactivation, followed by recovery. (c) Simulation of the collapse in gridness, achieved by reduction in cell response rates to mimic reduced cholinergic transmission. (d,e) Simulations of the reduction in gridness scores in (d) by reduction of cell response rates, in (e) by changing the leak conductance. See the text for details.
    ||
  • Grossberg 2021 p229c2h0.60 SMART computer simulations demonstrate that a good enough match of a top-down expectation with a bottom-up feature pattern generates an attentive resonance during which the spikes of active cells synchronize in the gamma frequency range of 20-70 Hz (Figure 5.40). Many labs have reported a link between attention and gamma oscillations in the brain, including two articles published in 2001, one from the laboratory of Robert Desimone when he was at the National Institute of Mental Health in Bethesda (Fries, Reynolds, Rorie, Desimone 2001), and the other from the laboratory of Wolf Singer in Frankfurt (Engel, Fries, Singer 2001). You
  • Grossberg 2021 p081c2h0.66 sub-section: How does one evolve a computational brain?
    The above discussion illustrates that no single step of theoretical derivation can derive a whole brain. One needs a method for deriving a brain in stages, or cycles, much as evolution has incrementally discovered ever more complex brains over many thousands of years. The following theoretical method has been successfully applied many times since I first used it in 1957. It embodies a kind of conceptual evolutionary process for deriving a brain.

    Because "brain evolution needs to achieve behavioural success", we need to start with data that embody indices of behavioral success. That is why, as illustrated in Figure 2.37 Modelling method and cycle, one starts with Behavioral Data from scores or hundreds of psychological experiments. These data are analyzed as the result of an individual adapting autonomously in real time to a changing world. This is the Art of Modeling. It requires that one be able to infer from static data curves the dynamical processes that control individual behaviors occurring in real time. One of the hardest things that I teach my students is "how to think in real time" so that they can carry out this speculative leap.

    Properly carried out, this analysis leads to the discovery of new Design Principles that are embodied by these behavioral processes. The Design Principles highlight the functional meaning of the data, and clarify how individual behaviors occurring in real time give rise to these static data curves.

    These principles are then converted into the simplest Mathematical Model using a method of minimal anatomies, which is a form of Occam's Razor.
  • p190 Howell: [neural microcircuits, modal architectures] used in ART -
    bottom-up filterstop-down expectationspurpose
    instar learningoutstar learningp200fig05.13 Expectations focus attention: [in, out]star often used to learn the adaptive weights. top-down expectations to select, amplify, and synchronize expected patterns of critical features, while suppressing unexpected features
    LGN Lateral Geniculate NucleusV1 cortical areap192fig05.06 focus attention on expected critical features
    EC-III entorhinal stripe cellsCA1 hippocampal place cellsp600fig16.36 entorhinal-hippocampal system has properties of an ART spatial category learning system, with hippocampal place cells as the spatial categories
    auditory orienting arousalauditory categoryp215fig05.28 How a mismatch between bottom-up and top-down patterns can trigger activation of the orienting system A and, with it, a burst of nonspecific arousal to the category level. ART Matching Rule: TD mismatch can suppress a part of F1 STM pattern, F2 is reset if degree of match < vigilance
    auditory stream with/without [silence, noise] gapsperceived sound continues?p419fig12.17 The auditory continuity illusion: Backwards in time - How does a future sound let past sound continue through noise? Resonance!
    visual perception, learning and recognition of visual object categoriesmotion perception, spatial representation and target trackingp520fig14.02 pART predictive Adaptive Resonance Theory. Many adaptive synapses are bidirectional, thereby supporting synchronous resonant dynamics among multiple cortical regions. The output signals from the basal ganglia that regulate reinforcement learning and gating of multiple cortical areas are not shown.
    red - cognitive-emotional dynamics
    green - working memory dynamics
    black - see [bottom-up, top-down] lists
    EEG with mismatch, arousal, and STM reset eventsexpected [P120, N200, P300] event-related potentials (ERPs)p211fig05.21 Sequences of P120, N200, and P300 event-related potentials (ERPs) occur during oddball learning EEG experiments under conditions that ART predicted should occur during sequences of mismatch, arousal, and STM reset events, respectively.
    CognitiveEmotionalp541fig15.02 nSTART neurotrophic Spectrally Timed Adaptive Resonance Theory: Hippocampus can sustain a Cognitive-Emotional resonance: that can support "the feeling of what happens" and knowing what event caused that feeling. Hippocampus enables adaptively timed learning that can bridge a trace conditioning gap, or other temporal gap between Conditioned Stimuluus (CS) and Unconditioned Stimulus (US).

    background colours in the table signify :
  • image p085fig02.38 Our models have been used in many large-scale applications to engineering and technology. Linking brain to behavior explains how brain mechanisms give rise to psychological functions, and do so autonomously. The combination of mechanism, function, and autonomy helps to explain their value in helping to solve outstanding problems in technology.
    || Modeling method and cycle.
    Behavioral data -(art of modeling)-> Design principles <- Neural data <-(brain predictions)- Mathematical model and analysis -(behavioral predictions)-> Behavioural data
    Technology: Mathematical model and analysis <-> Technological applications
    At every stage, spin off new model designs and mechanisms to technologists who need autonomous intelligent applications.
  • image p225fig05.33 Some early ARTMAP benchmark studies. These successes led to the use of ARTMAP, and many variants that we and other groups have developed, in many large-scale applications in engineering and technology, a use that has not abated even today.
    || see Early ARTMAP benchmark studies
  • image p490fig13.15 The CogEM circuit is an ancient design that is found even in mollusks like Aplysia. See the text for details.
    || Aplysia (Buonomano, Baxter, Byrne, Neural Networks 1990; Grossberg, Behavioral and Brain Sciences 1983). Facilitator neuron ~ drive representation.
  • image p563fig15.33 The basal ganglia gate neural processing in many parts of the brain. The feedback loop through the lateral orbitofrontal cortex (blue arrow, lateral orbitofrontal) is the one that MOTIVATOR models.
    || MOTIVATOR models one of several thalamocortical loops through basal ganglia (Adapted from Fundamental Neuroscience. 2002 Copyright Elsevier). [cortex-> striatum-> pallidum S. nigra-> thalamus] vs [motor, oculomotor, dorsolateral prefrontal, lateral orbitofrontal, anterior cingulate]. thalamus-> [striatum, cortex].
  • image p563fig15.34 The colored regions are distinct parts of the basal ganglia in the loops depicted in Figure 15.33.
    || Distinct basal ganglia zones for each loop (Adapted from Fundamental Neuroscience. 2002 Copyright Elsevier).
  • p370 Chapter 11 means (Grossberg 2021) page 370, Chapter 11
    p002sec Illusion and reality means (Grossberg 2021) page 2, section Illusion and reality
    p013fig01.09 means (Grossberg 2021) page 13, Figure 1.09 (1.9 as in book)
    p030tbl01.02 means (Grossberg 2021) page 30, Table 1.02 (1.2 as in book)
    p111c2h0.5 means (Grossberg 2021) page 111, column 2, height from top as fraction of page height
    || text...Are notes in addition to [figure, table] captions, mostly comprised of text within the image, but also including quotes of text in the book. Rarely, it includes comments by Howell preceded by "Howell". The latter are distinct from "readers notes" (see, for example : reader Howell notes).
    p044 Howell: grepStr
  • p00I Preface - Biological intelligence in sickness, health, and technology
  • p001 Chapter 1 Overview - From Complementary Computing and Adaptive Resonance to conscious awareness
  • p050 Chapter 2 How a brain makes a mind - Physics and psychology split as brain theories were born
  • p086 Chapter 3 How a brain sees: Constructing reality - Visual reality as illusions that explain how we see art
  • p122 Chapter 4 How a brain sees: Neural mechanisms - From boundary completion and surface filling-in to figure-ground perception
  • p184 Chapter 5 Learning to attend, recognize, and predict the world -
  • p250 Chapter 6 Conscious seeing and invariant recognition - Complementary cortical streams coordinate attention for seeing and recognition
  • p280 Chapter 7 How do we see a changing world? - How vision regulates object and scene persistence
  • p289 Chapter 8 How we see and recognize object motion - Visual form and motion perception obey complementary laws
  • p337 Chapter 9 Target tracking, navigation, and decision-making - Visual tracking and navigation obey complementary laws
  • p353 Chapter 10 Laminar computing by cerebral cortex - Towards a unified theory of biological and artificial intelligence
  • p370 Chapter 11 How we see the world in depth - From 3D vision to how 2D pictures induce 3D percepts
  • p404 Chapter 12 From seeing and reaching to hearing and speaking - Circular reaction, streaming, working memory, chunking, and number
  • p480 Chapter 13 From knowing to feeling - How emotion regulates motivation, attention, decision, and action
  • p517 Chapter 14 How prefrontal cortex works - Cognitive working memory, planning, and emotion conjointly achieve valued goals
  • p539 Chapter 15 Adaptively timed learning - How timed motivation regulates conscious learning and memory consolidation
  • p572 Chapter 16 Learning maps to navigate space - From grid, place, and time cells to autonomous mobile agents
  • p618 Chapter 17 A universal development code - Mental measurements embody universal laws of cell biology and physics
  • image pxvifig00.01 Macrocircuit of the visual system
  • image p002fig01.01 The difference between seeing and recognizing.
    || (W. Epstein, R. Gregory, H. von Helmholtz, G. Kanizsa, P. Kellman, A. Michotte...) Seeing an object vs Knowing what it is. Seeing Ehrenstein illusion (See, recognize) vs Recognizing offset grating (Do not see, recognize). offset grating: some boundaries are invisible or amodal.
  • image p002fig01.02 Dalmation in snow
    || p002c2h0.55 "...This image reminds us that invisible boundaries can sometimes be very useful in helping us to recognize visual objects in the world. ... When we first look at this picture, it may just look like an array of black splotches of different sizes, densities, and orientations across the picture. Gradually, however, we can recognize the Dalmatian in it as new boundaries form in our brain between the black splotches. ..."
  • image p003fig01.03 Amodal completion
    || p003c1h0.75 "... Figure 1.3 illustrates what I mean by the claim that percepts derived from pictures are often illusions. Figure 1.3 (left column) shows three rectangular shapes that abut one another. Our percept of this image irresistibly creates a different interpretation, however. We perceive a horizontal bar lying in front of a partially occluded vertical bar that is amodally completed behind it. ..."
  • image p004fig01.04 (top row) Kanizsa stratification; (bottom row) transparency images
    || [top row images] "... are called stratification percepts... This simple percept can ... be perceived either as a white cross in front of a white outline square, or as a white outline square in front of a white cross. The former percept usually occurs, but the percept can intermittently switch between these two interpretations. ...it is said to be a bistable percept. ..."
  • image p008fig01.05 Noise-saturation dilemma.
    || cell activity vs cell number; [minimum, equilibrium, current, maximal] activity
  • image p009fig01.06 Primacy gradient of activity stored in working memory within a recurrent shunting on-center off-surround network. Rehearsal is controlled by a nonspecific rehearsal wave and self-inhibitory feedback of the item that is currently being rehearsed. Green = excitatory, red = inhibitory
    || inputs? -> item and order WM storage -> competitive selection-> rehearsal wave -> outputs
  • image p011fig01.07 The choice of signal function f determines how an initial activity pattern will be transformed and stored in short-term memory (STM). Among [same, slower, faster]-than-linear signal functions, only the last one can suppress noise. It does so as it chooses the population that receives the largest input for storage, while suppressing the activities of all other populations, thereby giving rise to a winner-take-all choice.
    || initial pattern: xi(0) vs i.
    Xi(∞) = xi(∞)/x(∞), where x(∞) = sum[j: xj(∞)]
    linear : perfect storage of any pattern : amplifies noise (or no storage)
    slower-than-linear : saturates : amplifies noise
    faster-than-linear : chooses max [winner-take-all, Bayesian], categorical perception : suppresses noise, [normalizes, quantizes] total activity, finite state machine
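The three signal-function regimes above can be illustrated with a minimal discrete-time sketch (my own caricature, not Grossberg's full differential-equation model): iterate the normalized recurrent transform Xi <- f(xi)/sum[j: f(xj)] and watch how the choice of f reshapes the stored pattern.

```python
import numpy as np

def evolve(x, f, steps=50):
    """Iterate x_i <- f(x_i)/sum_j f(x_j): a discrete-time caricature of a
    recurrent shunting on-center off-surround network, where total activity
    is normalized at every step and the signal function f decides what
    pattern is stored in STM."""
    x = np.asarray(x, dtype=float)
    for _ in range(steps):
        s = f(x)
        x = s / s.sum()
    return x

x0 = np.array([0.1, 0.2, 0.3, 0.4])           # initial STM pattern

linear = evolve(x0, lambda v: v)              # preserves input ratios
faster = evolve(x0, lambda v: v**2)           # winner-take-all choice
slower = evolve(x0, lambda v: np.sqrt(v))     # flattens toward uniform (amplifies noise)
```

With f linear the pattern is stored unchanged; faster-than-linear (squaring) converges to a one-hot winner; slower-than-linear (square root) erases the pattern toward uniformity, which is why it amplifies noise.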
  • image p012fig01.08 A sigmoidal signal function is a hybrid signal that combines the best properties of [faster, same, slower]-than-linear signals. It can suppress noise and store a partially contrast-enhanced activity pattern.
    || Sigmoidal signal: a hybrid. (upper) saturates pattern- slower-than-linear; (middle) preserves pattern and normalizes- approximately linear. (lower) noise suppression and contrast enhancement- faster-than-linear.
  • image p013fig01.09 A sigmoid signal function generates a quenching threshold below which cell activities are treated like noise and suppressed. Activities that are larger than the quenching threshold are contrast enhanced and stored in short-term memory.
    || Quenching threshold. xi(0) vs i.
    Xi(∞) = xi(∞)/x(∞), where x(∞) = sum[j: xj(∞)]
    sigmoid : tunable filter
    stores infinitely many contrast-enhanced patterns
    suppresses noise
  • image p016fig01.10 The blocking paradigm shows how sensory cues that are conditioned to predict specific consequences can attentionally block other cues that do not change those predictions. On the other hand, if the total cue context is changed by adding a cue that does not change the predicted consequences, then the new cues can be conditioned to the direction of that change. They can hereby learn, for example, to predict fear if the shock level unexpectedly increases, or relief if the shock level unexpectedly decreases.
    || Minimal adaptive prediction. blocking- CS2 is irrelevant, unblocking- CS2 predicts US change. Learn if CS2 predicts a different (novel) outcome than CS1. CS2 is not redundant.
  • image p016fig01.11 A sufficiently big mismatch between a bottom-up input pattern and a top-down expectation can activate the orienting system, which triggers a burst of nonspecific arousal that can reset the recognition category that read out the expectation. In this way, unexpected events can reset short-term memory and initiate a search for a category that better represents the current situation.
    || [category- top-down (TD) expectation; Bottom-up (BU) input pattern] -> Feature pattern -> BU-TD mismatch -> orienting system -> non-specific arousal -> category.
  • image p018fig01.12 Peak shift and behavioural contrast. When a negative generalization gradient (in red) is subtracted from a positive generalization gradient (in green), the net gradient (in purple) is shifted way from the negative gradient and has a width that is narrower than any of its triggering gradients. Because the total activity of the network tends to be normalized, the renormalized peak of the net gradient is higher than that of the rewarded gradient, thereby illustrating that we can prefer experiences that we have never previously experienced over those for which we have previously been rewarded.
    ||
  • image p019fig01.13 Affective circuits are organized into opponent channels, such as fear vs. relief, and hunger vs. frustration. On a larger scale of affective behaviours, exploration and consummation are also opponent types of behaviour. Exploration helps to discover novel sources of reward. Consummation enables expected rewards to be acted upon. Exploration must be inhibited to enable an animal to maintain attention long enough upon a stationary reward in order to consume it.
    || exploration vs consummation
  • image p023fig01.14 A gated dipole opponent process can generate a transient antagonistic rebound from its OFF channel in response to offset of an input J to its ON channel. sustained on-response; transient off-response; opponent process; gates arousal: energy for rebound.
    ||
  • image p024fig01.15 A REcurrent Associative Dipole, or READ, circuit is a recurrent shunting on-center off-surround network with habituative transmitter gates. Sensory cues sample it with LTM traces and thereby become conditioned reinforcers.
    ||
  • image p025fig01.16 (left panel) The main processing stages of the Cognitive-Emotional-Motor (CogEM) model have anatomical interpretations in terms of sensory cortex, amygdala, and prefrontal cortex. Chapter 13 will describe in greater detail how CS cues activate invariant object categories in the sensory cortex, value categories in the amygdala, and object-value categories in the prefrontal cortex, notably the orbitofrontal cortex. The amygdala is also modulated by internal drive inputs like hunger and satiety. (right panel) Anatomical data support this circuit, as do many neurophysiological data.
    || drive -> amygdala -> prefrontal cortex <-> sensory cortex -> amygdala. [visual, somatosensory, auditory, gustatory, olfactory] cortex -> [amygdala, Orbital Prefrontal Cortex]. amygdala -> Lateral Prefrontal Cortex
  • image p025fig01.17 Sensory-drive heterarchy vs. drive hierarchy. How cues and drives interact to choose the drive and motivation that will control behavioral choices.
    || [drive inputs, sensory cue [before, after] cross-over] -> incentive motivation [eat, sex].
  • image p026fig01.18 Inverted U as a function of arousal. A Golden Mean at intermediate levels of arousal generates a combination of behavioral threshold, sensitivity, and activation that can support typical behaviors. Both underarousal and overarousal lead to symptoms that are found in mental disorders.
    || Behavior vs arousal.
    depression : under-aroused vs over-aroused
    threshold : elevated vs low
    excitable above threshold : Hyper vs Hypo
    "UPPER" brings excitability "DOWN".
  • image p027fig01.19 The ventral What stream is devoted to perception and categorization. The dorsal Where stream is devoted to spatial representation and action. The Where stream is also often called the Where/How stream because of its role in the control of action.
    ||
    Spatial representation of action vs Perception & categorization
    WHERE dorsal vs WHAT ventral
    Parietal pathway "where" vs Temporal pathway "what"
    Posterior Parietal Cortex (PPC) vs Inferior Temporal Cortex (IT)
    Lateral Prefrontal Cortex (LPFC) vs Lateral Prefrontal Cortex (LPFC)
  • image p029tbl01.01 Some pairs of complementary processing streams.
    ||
    visual boundary: interblob stream V1-V2-V4 vs visual surface: blob stream V1-V2-V4
    visual boundary: interblob stream V1-V2-V4 vs visual motion: magno stream V1-MT-MST
    WHAT stream: perception & recognition: inferotemporal & prefrontal areas vs WHERE stream: space & action: parietal & prefrontal areas
    object tracking: MT interbands & MSTv vs optic flow navigation: MT+ bands & MSTd
    motor target position: motor & parietal cortex vs volitional speed: basal ganglia
  • image p030tbl01.02 The What and Where cortical processing streams obey complementary laws. These laws enable the What stream to rapidly and stably learn invariant object categories without experiencing catastrophic forgetting, while the Where stream learns labile spatial and action representations to control actions that are aimed towards these objects.
    ||
    WHAT vs WHERE
    spatially-invariant object learning and recognition vs spatially-variant reaching and movement
    fast learning without catastrophic forgetting vs continually update sensory-motor maps and gains
    IT InferoTemporal Cortex vs PPC Posterior Parietal Cortex
    matching : excitatory (What) vs inhibitory (Where)
    learning : match (What) vs mismatch (Where)
  • image p030fig01.20 A schematic cross-section of a slice of laminar neocortex whose cells are organized in a characteristic way in six layers, which themselves may be organized into distinct sublaminae. The computational paradigm of Laminar Computing attempts to show how different parts of neocortex can represent and control very different kinds of behavior - including vision, speech, and cognition - using specializations of the same canonical laminar cortical design.
    || Projection fibres: Cortico[spinal, bulbar, pontine, striate, reticular, etc]; Thalamocortical fibres; Diffuse cortical afferent fibres: [nonspecific thalamocortical, Cholinergic, Monoaminergic]; Corticocortical efferents; Projection [cell, fibre]; Corticocortical efferent terminals.
  • image p032fig01.21 At least three parallel visual cortical streams respond to visual inputs that reach the retina. Two parvocellular streams process visual surfaces (blob stream) and visual boundaries (interblob stream). The magnocellular stream processes visual motion.
    || [Retina, LGNs, V[1,2,3,4], MT] to [What- inferotemporal areas, Where- parietal areas]: visual parallel streams [2x blob, 1x bound]
  • image p035fig01.22 A classical example of phonemic restoration. The spectrogram of the word "legislatures" is either excised, leaving a silent interval, or filled with broad-band noise. A percept of the restored phoneme is heard when it is replaced by noise, but not by silence.
    || [normal, silence, noise replaced] presentations. frequency (Hz) vs time (sec).
  • image p036fig01.23 As more items are stored in working memory through time, they can select larger chunks with which to represent the longer list of stored items.
    || [x, y, z] -> [xy, xyz]
  • image p037fig01.24 Only three processing stages are needed to learn how to store and categorize sentences with repeated words in working memory. See the text for more discussion.
    || IOR working memory (item chunk-> sequences) <-> IOR masking field: [item->list]<->[list->list] chunks. (<-> signifies <- expectation/attention, adaptive filter ->)
  • image p038fig01.25 The ART Matching Rule stabilizes real time learning using a [top-down, modulatory on-center, off-surround] network. Object attention is realized by such a network. See text for additional discussion.
    || ART Matching Rule [volition, categories, features]. [one, two] against one.
  • image p039tbl01.03 The link between consciousness and movement
    ||
    VISUALseeing, knowing, and reaching
    AUDITORYhearing, knowing, and speaking
    EMOTIONALfeeling, knowing, and acting
  • image p042tbl01.04 The six main kinds of resonances which support different kinds of conscious awareness that will be explained and discussed in this book.
    ||
    type of resonancetype of consciousness
    surface-shroudsee visual object or scene
    feature-categoryrecognize visual object or scene
    stream-shroudhear auditory object or stream
    spectral-pitch-and-timbrerecognize auditory object or stream
    item-listrecognize speech and language
    cognitive-emotionalfeel emotion and know its source
  • image p051fig02.01 Along the boundaries between adjacent shades of gray, laterial inhibition makes the darker area appear even darker, and the lighter areas appear even lighter. (Ernst Mach bands)
    ||
  • image p052fig02.02 Feature-category resonances enable us to rapidly learn how to recognize objects without experiencing catastrophic forgetting. Attentive matching between bottom-up feature pattern inputs and top-down expectations prevent catastrophic forgetting by focussing object attention upon expected patterns of features, while suppressing outlier features that might otherwise have caused catastrophic forgetting if they were learned also.
    || Adaptive Resonance. Attended feature clusters reactivate bottom-up pathways. Activated categories reactivate their top-down pathways. Categories STM, Feature patterns STM. Feature-Category resonance [synchronize, amplify, prolong]s system response. Resonance triggers learning in bottom-up and top-down adaptive weights: adaptive resonance!
  • image p057fig02.03 Some basic anatomical and physiological properties of individual neurons. See the text for additional discussion.
    ||
    physiology : cell body potential -> axonal signal -> chemical transmitter
    anatomy : nerve cell body -> axon -> synaptic knob, synapse
  • image p058fig02.04 Serial learning paradigm: Learning the temporal order of events by practicing them in the order that they occur in time.
    || Learning a global arrow in time. How do we learn to encode the temporal order of events in LTM? serial learning. [w=intra, W=inter]trial interval. "... data about serial verbal learning (Figure 2.4) seemed to suggest that events can go "backwards in time". ..."
  • image p059fig02.05 Bowed serial position curve. This kind of data emphasizes the importance of modelling how our brains give rise to our minds using nonlinear systems of differential equations.
    || Effects of [inter, intra]trial intervals (Hovland 1938). # of errors vs list position. [w (sec), W (sec)] = (2 6) (4 6) (2 126) (4 126). Nonoccurrence of future items reduces the number of errors in response to past items. These data require a real-time theory for their explanation! that is, DIFFERENTIAL equations.
  • image p059fig02.06 The bowed serial position curve illustrates the sense in which "events can go backwards in time" during serial learning.
    || Bow due to backward effect in time. If the past influenced the future, but not conversely: # of errors vs list position; Data (Hovland, Hull, Underwood, etc).
  • image p060fig02.07 Position-specific-forward and backward error gradients illustrate how associations can form in both the forward and backward directions in time before the list is completely learned.
    || Error gradients: depend on list position. # of responses vs list position:
    list beginning : anticipatory errors : forward in time
    list middle : anticipatory and perseverative errors : forward and backward in time
    list end : perseverative errors : backward in time
  • image p061fig02.08 The existence of forward and backward associations, such as from A to B and from B to A is naturally explained by a network of neurons with their own activities or STM traces, and bidirectional connections between them with their own adaptive weights or LTM traces.
    || How these results led to neural networks (Grossberg 1957). Networks can learn forward and backward associations! Practice A->B, also learn B<-A. Because learning AB is not the same as learning BA, you need STM traces, or activations, xi at the nodes, or cells, and LTM traces, or adaptive weights, zij, for learning at the synapses.
  • image p063fig02.09 The Additive Model describes how multiple effects add up to influence the activities, or STM traces, of neurons.
    || STM: Additive model (Grossberg, PNAS 1967, 1968).
    Short-term memory (STM) trace: activation xi(t) emits signal fi(xi(t))*Bij, gated by the adaptive weight, or Long-term memory (LTM) trace, zij(t), to influence activation xj(t).
    Equation terms: passive decay, positive feedback, negative feedback, input:
    d[dt: xi(t)] = -Ai*xi + sum[j=1 to n: fj(xj(t))*Bji*zji] - sum[j=1 to n: gj(xj)*Cji*Zji] + Ii
    Special case : d[dt: xi(t)] = -Ai*xi + sum[j=1 to n: fj(xj(t))*zji] + Ii
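The special case of the Additive Model above can be checked with a short Euler-integration sketch. The 3-neuron feedforward chain, its weights, and all parameter values below are hypothetical illustrations chosen so the equilibrium is easy to compute by hand.

```python
import numpy as np

def additive_step(x, z, I, A, f, dt=0.01):
    """One Euler step of the Additive Model special case:
    dx_i/dt = -A*x_i + sum_j f(x_j)*z_ji + I_i"""
    return x + dt * (-A * x + f(x) @ z + I)

A = 1.0                              # passive decay rate
z = np.array([[0.0, 0.5, 0.0],       # z[j, i]: weight from neuron j to neuron i
              [0.0, 0.0, 0.5],       # feedforward chain 1 -> 2 -> 3
              [0.0, 0.0, 0.0]])
I = np.array([1.0, 0.0, 0.0])        # external input drives neuron 1 only
x = np.zeros(3)
for _ in range(2000):                # integrate to t = 20
    x = additive_step(x, z, I, A, lambda v: np.maximum(v, 0.0))
```

At equilibrium each activity balances decay against its summed gated signals: x1 = I1/A = 1, x2 = 0.5*x1 = 0.5, x3 = 0.5*x2 = 0.25.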
  • image p064fig02.10 The Shunting Model includes upper and lower bounds on neuronal activities. These bounds have the effect of multiplying additive terms by excitatory and inhibitory automatic gain terms that enable such models to preserve their sensitivity to inputs whose sizes may vary greatly through time, while also approximately normalizing their total activities.
    || STM: Shunting Model (Grossberg, PNAS 1967, 1968). Mass action in membrane equations. Bi/Ci -> xi(t) -> O -> -Fi/Ei. Bounded activations, automatic gain control. d[dt: xi(t)] = -Ai*xi + (Bi - Ci*xi)*(sum[j=1 to n: fj(xj(t))*Dji*yji*zji] + Ii) - (Ei*xi + Fi)*(sum[j=1 to n: gj(xj)*Gji*Yji*Zji] + Ji). Includes the Additive Model.
  • image p064fig02.11 Medium-Term Memory (MTM) and Long-Term Memory (LTM) equations complement the Additive and Shunting Models of STM. MTM is typically defined by a chemical transmitter that is released from the synaptic knobs of a neuron (Figure 2.03). Its release or inactivation in an activity-dependent way is also called habituation. LTM defines how associative learning occurs between a pair of neurons whose activities are approximately correlated through time. See the text for details.
    || Medium and Long Term memory.
    MTM : habituative transmitter gate : d[dt: yki(t)] = H*(K - yki) - L*fk(xk)*yki
    LTM : gated steepest descent learning : d[dt: zki(t)] = Mk*fk(xk)*(hi(xi) - zki)
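Both laws are easy to integrate numerically. A minimal sketch, with illustrative (not book-specified) parameter values and linear signal functions, shows the habituative gate settling at its depressed equilibrium while the LTM trace tracks the postsynaptic signal only while the presynaptic cell is active.

```python
def mtm_step(y, xk, H=1.0, K=1.0, L=2.0, fk=lambda v: v, dt=0.01):
    """MTM habituative gate: dy/dt = H*(K - y) - L*f(x_k)*y.
    Transmitter y recovers toward K and is inactivated by presynaptic signal."""
    return y + dt * (H * (K - y) - L * fk(xk) * y)

def ltm_step(z, xk, xi, M=1.0, fk=lambda v: v, hi=lambda v: v, dt=0.01):
    """LTM gated steepest descent: dz/dt = M*f(x_k)*(h(x_i) - z).
    Learning is gated on only when the presynaptic cell x_k is active."""
    return z + dt * (M * fk(xk) * (hi(xi) - z))

y, z = 1.0, 0.0
for _ in range(5000):                # sustained pre/postsynaptic activity, t = 50
    y = mtm_step(y, xk=1.0)          # habituates toward H*K/(H + L*f) = 1/3
    z = ltm_step(z, xk=1.0, xi=0.8)  # tracks postsynaptic signal h(x_i) = 0.8
```

Setting xk = 0 in either step freezes z (no learning or forgetting) and lets y recover toward K, which is the sense in which both processes are "gated" by activity.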
  • image p065fig02.12 Three sources of neural network research: [binary, linear, continuous nonlinear]. My own research has contributed primarily to the third.
    || Three sources of neural network research.
    Binary : neural network signal processing : McCulloch-Pitts 1943 (Xi(t+1) = sgn{sum[j: Aij*Xj(t)] - Bi}), Von Neumann 1945, Caianiello 1961 -> digital computer
    Linear : Systems theory : Rosenblatt 1962, Widrow 1962, Anderson 1968, Kohonen 1971 -> Y = A*X, cross-correlate, steepest descent
    Continuous and non-Linear : Neurophysiology and Psychology : Hodgkin, Huxley 1952; Hartline, Ratliff 1957; Grossberg 1967; Von der Malsburg 1973
  • image p068fig02.13 Hartline
  • image p068fig02.14 Hodgkin and Huxley developed a model to explain how spikes travel down the squid giant axon.
    || Neurophysiology (single cell): spike potentials in squid giant axon (Hodgkin, Huxley 1952, Nobel Prize). time -> (dendrites -> cell body -> axon).
    C*∂[∂t: V] = α*∂^2[∂X^2: V] + (V(+) - V)*g(+) + (V(-) - V)*g(-) + (V^p - V)*g^p
    g(+) = G(+)(m,h), g(-) = G(-)(n), G^p = const, [m, h, n] - ionic processes, V - voltage
    Precursor of Shunting network model (Rall 1962). (Howell: see p075fig02.24 Membrane equations of neurophysiology, Shunting equation)
  • image p071fig02.15 The noise saturation dilemma: How do neurons retain their sensitivity to the relative sizes of input patterns whose total sizes can change greatly through time?
    || Noise-Saturation Dilemma (Grossberg 1968-1973). Bounded activities from multiple input sources.
    If activities xi are sensitive to SMALL inputs, then why don't they saturate when inputs are LARGE?
  • image p071fig02.16 To solve the noise-saturation dilemma, individual neurons in a network that is receiving a distributed spatial patterns of inputs need to remain sensitive to the ratio of input to them divided by all the inputs in that spatial pattern. Although the inputs are delivered to a finite number of neurons, the input and activity patterns are drawn continuously across the cells for simplicity.
    || Noise-Saturation Dilemma. [Ii, xi] vs t. [Input, Activity] pattern [small -> noise, large -> saturation]. Problem: remain sensitive to input RATIOS θi = Ii / sum[j: Ij] as total input I = sum[j: Ij] -> ∞. Many kinds of data exhibit sensitivity to ratios of inputs.
  • image p072fig02.17 Brightness constancy.
    || Vision: brightness constancy, contrast normalization. Compute RATIOS of reflected light. Reflectance processing. p72c1h0.45 "... In other words, the perceived brightness of the gray disk is constant despite changes in the overall illumination. On the other hand, if only the gray disk were illuminated at increasing intensities, with the annulus illuminated at a constant intensity, then the gray disk would look progressively brighter. ..."
  • image p072fig02.18 Brightness contrast.
    || Vision: brightness contrast. Conserve a total quantity: total activity normalization.
    Luce : ratio scales in choice behavior
    Zeiler : adaptation level theory
  • image p073fig02.19 Computing with cells: infinity does not exist in biology!
    || Computing in a bounded activity domain, Gedanken experiment (Grossberg 1970). Vm sub-areas [xm, B - xm], I(all m), m=[1, i, B].
    B : excitable sites
    xi(t) : excited sites (activity, potential)
    B - xi(t) : unexcited sites
  • image p073fig02.20 Shunting saturation occurs when inputs get larger to non-interacting cells.
    || Shunting saturation. [xi(t), B - xi(t)].
    d[dt: xi] = -A*xi + (B - xi)*Ii
    (a) Spontaneous decay of activity xi to equilibrium
    (b) Turn on unexcited sites B - xi by inputs Ii (mass action)
    Inadequate response to a SPATIAL PATTERN of inputs: Ii(t) = θi*I(t)
    θirelative intensity (cf. reflectance)
    I(t)total intensity (cf. luminance)
  • image p073fig02.21 How shunting saturation turns on all of a cell's excitable sites.
  • image p073fig02.22 An on-center off-surround network is capable of computing input ratios.
    || Computing with patterns.
    How to compute the pattern-sensitive variable: θi = Ii / sum[k=1 to n: Ik]?
    Needs interactions! What type? θi = Ii / sum[k ≠ i: Ik]
    Ii↑ ⇒ θi↑ excitation, Ik↑ ⇒ θk↓, k ≠ i inhibition
    On-center off-surround network.
  • image p074fig02.23 The equations for a shunting on-center off-surround network. Shunting terms lead to many beautiful and important properties of these networks, which are found ubiquitously, in one form or another, in all cellular tissues.
    || Shunting on-center off-surround network.
    Mass action: d[dt: xi] = -A*xi +(B - xi)*Ii -xi*sum[k≠i: Ik]
    (B - xi)*Ii turns on unexcited sites; xi*sum[k≠i: Ik] turns off excited sites.
    At equilibrium:
    0 = d[dt: xi] = -(A + Ii + sum[k≠i: Ik])*xi + B*Ii = -(A + I)*xi + B*Ii
    xi = B*Ii/(A + I) = B*θi*I/(A + I) = θi * B*I/(A + I) : No saturation!
    Infinite dynamical range, automatic gain control, compute ratio scale, Weber law.
    x = sum[k=1 to n: xk] = B*I/(A + I) ≤ B : conserve total activity.
    NORMALIZATION, limited capacity, real-time probability.
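The equilibrium formula xi = B*Ii/(A + I) can be verified numerically. The sketch below (parameter values and names are my own illustrations) shows the two key properties: input ratios θi are preserved at any overall intensity, and total activity never exceeds B.

```python
import numpy as np

def shunting_equilibrium(I, A=1.0, B=1.0):
    """Equilibrium of a feedforward shunting on-center off-surround network:
    x_i = B*I_i/(A + I_total) = theta_i * B*I_total/(A + I_total)."""
    I = np.asarray(I, dtype=float)
    return B * I / (A + I.sum())

pattern = np.array([0.1, 0.3, 0.6])            # relative intensities theta_i
dim    = shunting_equilibrium(10 * pattern)    # total input I = 10
bright = shunting_equilibrium(1000 * pattern)  # same pattern, 100x brighter
```

Both responses carry the same reflectances (dim and bright normalize to the same pattern), while the total activity B*I/(A + I) compresses toward B instead of saturating each cell: this is the network's answer to the noise-saturation dilemma.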
  • image p075fig02.24 The membrane equations of neurophysiology describe how cell voltages change in response to excitatory, inhibitory, and passive input channels. Each channel is described by a potential difference multiplied by a conductance. With the special choices shown in the lower right-hand corner, this equation defines a feedforward shunting on-center off-surround network.
    || Membrane equations of neurophysiology.
    C*d[dt: V] = (V(+) - V)*g(+) + (V(-) - V)*g(-) + (V(p) - V)*g(p)
    Shunting equation (not additive).
    V : voltage; V(+), V(-), V(p) : saturating voltages; g(+), g(-), g(p) : conductances.
    V(+) = B, C = 1; V(-) = V(p) = 0; g(+) = Ii; g(-) = sum[k≠i: Ik]
    lower bound: V(-) = V(p), Silent inhibition; upper bound: V(+). (Howell: see p068fig02.14)
  • image p076fig02.25 An on-center off-surround network can respond to increasing on-center excitatory inputs without a loss of sensitivity. Instead, as the off-surround input increases, the region of a cell's sensitivity shifts to higher input intensities.
  • image p076fig02.26 The mudpuppy retina exhibits the shift property that occurs in the feedforward shunting on-center off-surround network in Figure 2.25. As a result, its sensitivity also shifts in response to different background off-surrounds, and therefore exhibits no compression (dashed purple lines).
    || Mudpuppy retina neurophysiology.
    I center, J background
    a) Relative figure-to-ground
    b) Weber-Fechner: I*(A + J)^(-1)
    c) No hyperpolarization, SHUNT: Silent inhibition
    d) Shift property(Werblin 1970) xi(K,J) vs K = ln(I)
    Adaptation- sensitivity shifts for different backgrounds. NO COMPRESSION.
  • image p077fig02.27 A schematic of the on-center off-surround network that occurs in the mudpuppy retina, including three main cell types: receptors, horizontal cells, and bipolar cells.
    || Mechanism: cooperative-competitive dynamics.
    On-center off-surround (Kuffler 1953) cat retina
    Subtractive lateral inhibition (Hartline, Ratcliff 1956/7+) limulus retina.
    R receptor -> H horizontal -> B bipolar (Werblin, Dowling, etal 1969+) mudpuppy retina.
  • image p077fig02.28 Silent inhibition is replaced by hyperpolarization when the inhibitory saturating potential is smaller than the passive saturating potential. Then an adaptation level is created that determines how big input ratios need to be to activate their cells.
    || Weber Law and adaptation level.
    Hyperpolarization vs Silent inhibition
    d[dt: xi] = -A*xi +(B - xi)*Ii -(xi + C)*sum[k≠i: Ik]
    At equilibrium:
    0 = d[dt: xi] = -(A + I)*xi +B*Ii -C*sum[k≠i: Ik]
    = -(A + I)*xi +(B + C)*Ii -C*I
    = -(A + I)*xi +(B + C)*I*[θi -C/(B + C)]
    xi = (B + C)*I/(A + I)* [θi -C/(B + C)]
    Weber Law | Reflectance | Adaptation level
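The equilibrium derived above can be verified directly against the ODE. A quick sketch (parameter values are my own illustrative choices):

```python
import numpy as np

# Hyperpolarizing network: d[dt: xi] = -A*xi + (B - xi)*Ii - (xi + C)*sum[k≠i: Ik].
# Its equilibrium, as derived above, is
#   xi = (B + C)*I/(A + I) * [theta_i - C/(B + C)],  theta_i = Ii/I.
# The adaptation level C/(B + C) sets how large a relative input theta_i
# must be before its cell is excited; smaller theta_i hyperpolarize.

def weber_equilibrium(inputs, A=1.0, B=4.0, C=1.0):
    inputs = np.asarray(inputs, dtype=float)
    I = inputs.sum()
    return (B + C) * I / (A + I) * (inputs / I - C / (B + C))

I_in = np.array([1.0, 6.0, 1.0])
x = weber_equilibrium(I_in)
# Adaptation level C/(B+C) = 0.2: only the middle cell (theta = 0.75)
# exceeds it; the flanking cells (theta = 0.125) hyperpolarize.
```

Substituting x back into the right-hand side of the ODE gives zero at every cell, confirming the algebra term by term.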
  • image p078fig02.29 How the adaptation level is chosen to enable sufficiently distinct inputs to activate their cells.
    || Weber Law and adaptation level.
    xi = (B + C)*I/(A + I)* [θi -C/(B + C)]
    Weber Law | Reflectance | Adaptation level
    V(+) >> V(-) ⇒ B >> C ⇒ C/(B + C) << 1
    Adaptation level theory (Zeiler 1963).
  • image p078fig02.30 Choosing the adaptation level to achieve informational noise suppression.
    || Noise suppression. Attenuate Zero Spatial frequency patterns: no information. Ii vs i (flat line), xi vs i (flat line at zero)
    B >> C: Try B = (n - 1)*C or C/(B + C) = 1/n
    Choose a uniform input pattern (no distinctive features): All θi = 1/n
    xi = (B + C)*I/(A + I)*[θi -C/(B + C)] = 0 no matter how intense I is.
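The noise-suppression choice above is simple to demonstrate. A sketch (parameters mine, for illustration only):

```python
import numpy as np

# With B = (n - 1)*C the adaptation level C/(B + C) equals 1/n, so a
# uniform input pattern (all theta_i = 1/n: zero spatial frequency,
# no information) yields zero activity at every cell no matter how
# intense it is, while inputs exceeding the adaptation level still
# excite their cells.

def noise_suppressed_equilibrium(inputs, A=1.0, C=1.0):
    inputs = np.asarray(inputs, dtype=float)
    n = len(inputs)
    B = (n - 1) * C               # the noise-suppression parameter choice
    I = inputs.sum()
    return (B + C) * I / (A + I) * (inputs / I - C / (B + C))

flat = noise_suppressed_equilibrium([5.0, 5.0, 5.0, 5.0])        # all zero
very_flat = noise_suppressed_equilibrium([50.0, 50.0, 50.0, 50.0])  # still zero
bumpy = noise_suppressed_equilibrium([5.0, 9.0, 5.0, 5.0])       # feature survives
```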
  • image p078fig02.31 How noise suppression enables matching of bottom-up and top-down input patterns.
    || Noise suppression -> pattern matching. mismatch (out of phase) suppressed, match (in phase) amplifies pattern.
  • image p079fig02.32 Matching amplifies the matched pattern due to automatic gain control. See terms I and J in the equation.
    || Substrate of resonance. Match (in phase) of BU and TD input patterns AMPLIFIES matched pattern due to automatic gain control by shunting terms. J = sum[i: Ji], I = sum[i: Ii], θi = (Ii + Ji)/(I + J)
    xi = (B + C)*(I + J)/(A + I + J)*[θi -C/(B + C)]
    Need top-down expectations to be MODULATORY.
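The matching property can be sketched with the same equilibrium (toy patterns and parameters are mine; note that in the full theory the top-down signal is modulatory, whereas here it is suprathreshold purely for illustration):

```python
import numpy as np

# With noise suppression in place (B = (n - 1)*C), a top-down pattern J
# that matches the bottom-up pattern I raises the total input I + J, and
# the shunting gain (I + J)/(A + I + J) grows with it, so the matched
# pattern is AMPLIFIED. An out-of-phase J flattens theta toward
# uniformity, which noise suppression then attenuates.

def combined_equilibrium(bu, td, A=1.0, C=1.0):
    bu, td = np.asarray(bu, dtype=float), np.asarray(td, dtype=float)
    B = (len(bu) - 1) * C
    total = bu + td
    T = total.sum()
    return (B + C) * T / (A + T) * (total / T - C / (B + C))

bu = np.array([1.0, 4.0, 1.0])
alone    = combined_equilibrium(bu, np.zeros(3))                # bottom-up only
matched  = combined_equilibrium(bu, 2 * bu)                     # in phase
mismatch = combined_equilibrium(bu, np.array([4.0, 1.0, 4.0]))  # out of phase
# matched amplifies the peak; the mismatched sum is uniform and suppressed.
```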
  • image p080fig02.33 An opposite-attracts rule during the development of intracellular connections can lead to a mature network that realizes informational noise suppression.
    || How do noise suppression parameters arise? Symmetry-breaking during morphogenesis? Opposites attract rule.
    Intracellular parameters C/B = 1/(n - 1) Intercellular parameters
    Predicts that:
    • Intracellular excitatory and inhibitory saturation points can control the growth during development of :
    • Intercellular excitatory and inhibitory connections.
  • image p080fig02.34 How to achieve informational noise suppression in a network with multiple parallel processing channels.
    || Symmetry-breaking: dynamics and anatomy.
    Dynamics:
    • excitatory range is amplified
    • inhibitory range is compressed
    Anatomy:
    • narrow on-center
    • broad off-surround
    Noise suppression: attenuates uniform patterns
    Contour direction: enhances pattern gradients
  • image p081fig02.35 The equilibrium activities of a shunting network with Gaussian on-center off-surround kernels are sensitive to the ratio-contrasts of the input patterns that they process. The terms in the denominator of the equilibrium activities accomplish this using the shunting on-center and off-surround terms.
    || Ratio-contrast detector. flat versus [Gaussian Cki, flattened Gaussian? Eki]
    d[dt: xi] = -A*xi +(B - xi)*sum[k=1 to n: Ik*Cki] -(xi + D)*sum[k=1 to n: Ik*Eki]
    Cki = C*e^(-μ*(k - i)^2), Eki = E*e^(-ν*(k - i)^2)
    At equilibrium: xi = I*sum[k=1 to n: θk*Fki] / (A + I*sum[k=1 to n: θk*Gki])
    Fki = B*Cki -D*Eki (weighted difference of Gaussians, D.O.G.)
    Gki = Cki +Eki (sum of Gaussians, S.O.G.)
    • Reflectance processing
    • Contrast normalization
    • Discount illuminant
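The ratio-contrast equilibrium above can be sketched directly. Kernel widths and all parameter values below are my own choices, picked only so that the off-surround is much broader than the on-center:

```python
import numpy as np

# Equilibrium of the distance-dependent shunting network:
#   xi = I*sum[k: theta_k*Fki] / (A + I*sum[k: theta_k*Gki])
# with narrow Gaussian on-center Cki and broad Gaussian off-surround Eki;
# F = B*C - D*E is a weighted D.O.G., G = C + E a S.O.G.

def ratio_contrast(inputs, A=1.0, B=2.0, D=1.0,
                   C=1.0, E=0.5, mu=1.0, nu=0.05):
    inputs = np.asarray(inputs, dtype=float)
    n = len(inputs)
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    Cki = C * np.exp(-mu * (k - i) ** 2)    # narrow on-center
    Eki = E * np.exp(-nu * (k - i) ** 2)    # broad off-surround
    F = B * Cki - D * Eki                   # weighted D.O.G.
    G = Cki + Eki                           # S.O.G.
    return inputs @ F / (A + inputs @ G)    # equilibrium activities

step = np.array([1.0, 1.0, 1.0, 3.0, 3.0, 3.0])
x1 = ratio_contrast(step)
x10 = ratio_contrast(10 * step)
# Cells under the brighter half are excited, the dimmer half suppressed,
# and the pattern barely changes when luminance is scaled 10x:
# reflectance (ratio) processing with contrast normalization.
```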
  • image p081fig02.36 Informational noise suppression in network with Gaussian on-center and off-surround function as contour detectors that are sensitive to ratio-contrast.
    || Noise suppression and contour detection.
    If B*sum[k=1 to n: Cki] <= D*sum[k=1 to n: Eki] then:
    • uniform patterns are suppressed
    • contrasts are selectively enhanced
    • contours are detected
    Ii vs i, xi vs i
    Responses are selective to [REFLECTANCE, SPATIAL SCALE], eg color [feature, surface] contours.
  • image p082fig02.37 My models begin with behavioral data, since brains are designed to achieve behavioral success. The text explains how models evolve in stages, through a process of successive refinements, or unlumpings. These unlumpings together carry out a kind of conceptual evolution, leading to models that can explain and predict ever larger psychological and neurobiological databases.
    || Modelling method and cycle.
    Behavioral data -(art of modeling)-> Design principles <- Neural data <-(brain predictions)- Mathematical model and analysis -(behavioral predictions)-> Behavioural data
    Operationalizes "proper level of abstraction"
    Operationalizes that you cannot "derive a brain" in one step.
  • image p085fig02.38 Our models have been used in many large-scale applications to engineering and technology. Linking brain to behavior explains how brain mechanisms give rise to psychological functions, and do so autonomously. The combination of mechanism, function, and autonomy helps to explain their value in helping to solve outstanding problems in technology.
    || Modeling method and cycle.
    Behavioral data -(art of modeling)-> Design principles <- Neural data <-(brain predictions)- Mathematical model and analysis -(behavioral predictions)-> Behavioural data
    Technology: Mathematical model and analysis <-> Technological applications
    At every stage, spin off new model designs and mechanisms to technologists who need autonomous intelligent applications.
  • image p087fig03.01 A macrocircuit of key visual processes (in green) and the cortical areas in which they primarily occur (in red), from the retina to the Prefrontal Cortex (PFC), including both the What and Where cortical streams. The [bottom-up, horizontal, and top-down] interactions help each of these processes to overcome computationally complementary processing deficiencies that they would experience without them, and also to read-out top-down expectations that help to stabilize learning while they focus attention on salient objects and positions.
    || Emerging unified theory of visual intelligence. [What, Where] streams. Bottom-up and top-down interactions overcome COMPLEMENTARY processing deficiencies.
  • image p089fig03.02 What do you think lies under the two grey disks? (on a checkers board)
    || p089c1h0.55 "... As your eye traverses the entire circular boundary (Howell: of a grey disk on a checkerboard), the contrast keeps flipping between light-to-dark and dark-to-light. Despite these contrast reversals, we perceive a single continuous boundary surrounding the gray disk. ...".
  • image p090fig03.03 Kanizsa square and reverse-contrast Kanizsa square precepts. The spatial arrangement of pac-men, lines, and relative contrasts determines the perceived brightness of the squares, and even if they exhibit no brightness difference from their backgrounds, as in (b). These factors also determine whether pac-men will appear to be amodally completed behind the squares, and how far behind them.
    || p089c2h0.65 "...
    a) The percept of the square that abuts the pac-men is a visual illusion that is called the Kanizsa square. The enhanced brightness of the square is also an illusion.
    c) shows that these boundaries can be induced by either collinear edges or perpendicular line ends, and that both kinds of inducers cooperate to generate an even stronger boundary.
    d) if the perpendicular lines cross the positions of the illusory contours, then they can inhibit the strength of these contours. ..."
  • image p091fig03.04 A cross-section of the eye, and a top-down view of the retina, showing how the blind spot and retinal veins can occlude the registration of light signals at their positions on the retina.
    || Eye: [optic nerve, ciliary body, iris, lens, pupil, cornea, sclera, choroid, retina]. Human retina: [fovea, blind spot, optic nerve]. See also cross-section of retinal layer.
  • image p092fig03.05 A cross-section of the retinal layer. Note that light stimuli need to go through all retinal layers before they reach the photoreceptor layer at which the light signals are registered.
    || light stimuli ->
    retinal layer | cellular composition
    inner limiting membrane |
    retinal nerve fibre | ganglion nerve fibres
    ganglion cell | ganglion
    inner plexiform | amacrine
    inner nuclear | horizontal
    outer plexiform |
    outer limiting membrane |
    photoreceptor | rod
    photoreceptor | cone
    retinal pigment epithelium |
    <- signal transduction. http://brain.oxfordjournals.org/content/early/2011/01/20/brain.awq346
  • image p093fig03.06 Every line is an illusion because regions of the line that are occluded by the blind spot or retinal veins are completed at higher levels of brain processing by boundary completion and surface filling-in.
    || Every line is an illusion!
    Boundary completion | Which boundaries to connect?
    Surface filling-in | What color and brightness do we see?
  • image p094fig03.07 The processes of boundary completion and surface filling-in are computationally complementary.
    ||
    Boundary completion | Surface filling-in
    outward | inward
    oriented | unoriented
    insensitive to direction of contrast | sensitive to direction of contrast
  • image p095fig03.08 Computer simulation of a Kanizsa square percept. See the text for details.
    || p094c2h0.2 "...
    b) shows the feature contours that are induced just inside the pac-man boundaries.
    c) feature contours fill-in within the square boundary
    d) create a percept of enhanced brightness throughout the square surface ..."
  • image p095fig03.09 Simulation of a reverse-contrast Kanizsa square percept. See the text for details.
    || p094c2h0.5 "...
    b) whereas bright feature contours are induced just inside the boundaries of the two black pac-men at the bottom of the figure, dark feature contours are induced inside the boundaries of the two white pac-man at the top of the figure
    c) the square boundary is recognized
    d) Because these dark and bright feature contours are approximately balanced, the filled-in surface color is indistinguishable from the filled-in surface color outside of the square, ... but [the square boundary is] not seen ..."
  • image p096fig03.10 The visual illusion of neon color spreading. Neither the square nor the blue color that are perceived within it are in the image that defines a neon color display. The display consists only of black and blue arcs.
    ||
  • image p096fig03.11 Another example of neon color spreading. The image is composed of black and blue crosses. See the text for details.
    || Howell: note the appearance of illusory red squares
  • image p100fig03.13 The Ehrenstein percept in the left panel is significantly weakened as the orientations of the lines that induce it deviate from being perpendicular to the illusory circle.
    ||
  • image p100fig03.14 Boundaries are completed with the orientations that receive the largest total amount of evidence, or support. Some can form in the locally preferred orientations that are perpendicular to the inducing lines, while others can form through orientations that are not locally preferred, thus showing that there is initially a fuzzy band of almost perpendicular initial grouping orientations at the end of each line.
    || Perpendicular induction at line ends wrt [circular, square] boundaries
    line ends | local | global
    perpendicular, crisp | preferred | preferred
    NOT perpendicular, fuzzy | unpreferred | preferred
  • image p100fig03.15 A fuzzy band of possible initial grouping orientations allows grouping to get started. Cooperative-competitive feedback via a hierarchical resolution of uncertainty chooses a sharp final grouping that has the most evidence to support it.
    || before choice: transient; after choice: equilibrium
  • image p102fig03.16 T
  • image p102fig03.17 The relative positions of the squares give rise to a percept of three regions. In the middle region, emergent diagonal groupings form, despite the fact that all the orientations in the image are verticals and horizontals.
    ||
  • image p103fig03.18 Computer simulations in [b, c, e, f] of groupings in response to different spatial arrangements in [a,c, e, g] of inducers that are composed of short vertical boundaries. Note the emergent horizontal groupings in [d, f, h] and the diagonal groupings in h, despite the fact that all its inducers have vertical orientations.
    ||
  • image p103fig03.19 As in Figure 3.18, emergent groupings can form whose orientations differ from those of the inducing stimuli.
    || That's how multiple orientations can induce boundary completion of an object. [diagonal, perpendicular, parallel]
  • image p104fig03.20 Sean Williams: how boundaries can form
    ||
  • image p104fig03.21 Four examples of how emergent boundaries can form in response to different kinds of images. These examples show how boundary webs can shape themselves to textures, as in (c), and shading, as in (d), in addition to lines, as in (a). In all these cases, the boundaries are invisible, but reveal themselves by supporting filling-in of surface brightness and color within their form-sensitive webs.
    ||
  • image p105fig03.22 Depth-selective boundary representations capture brightness and colors in surface filling-in domains. See the text for details.
    || 3D vision and figure-ground separation. multiple-scale, depth-selective boundary webs. refer to Figure 3.21(d)
    depth increasing ↓ | boundaries | surfaces
    BC input | surface capture!
    FC input
  • image p105fig03.23 The pointillist painting A Sunday on la Grande Jatte by Georges Seurat illustrates how we group together both large-scale coherence among the pixels of the painting, as well as forming small groupings around the individual dabs of color.
    ||
  • image p106fig03.24 In response to the Synthetic Aperture Radar image (upper left corner), a shunting on-center off-surround network "discounts the illuminant" and thereby normalizes cell activities to compute feature contours, without causing saturation (upper right corner). Multiple-scale boundaries form in response to spatially coherent activities in the feature contours (lower left corner) and create the webs, or containers, into which the feature contours fill-in the final surface representations (lower right corner).
    || Do these ideas work on hard problems? SAR!
    input image | feature contours | boundary contours | filled-in surface
    Synthetic Aperture Radar: sees through weather; 5 orders of magnitude of power in radar return | discounting the illuminant
    • normalizes the image: preserves RELATIVE activities without SATURATION
    • shows individual PIXELS
    boundaries complete between regions where normalized feature contrasts change | filling-in averages brightnesses within boundary compartments
  • image p107fig03.25 The Roofs of Collioure by Matisse. See the text for details
    || p107c1h0.6 "... [Matisse] showed how patches of pure color, when laid down properly on a canvas, could be grouped by the brain into emergent boundaries, without the intervention of visible outlines. ... The trick was that these emergent boundaries, being invisible, or amodal, did not darken the colors in the surface representations. In this sense, Matisse intuitively realized that "all boundaries are invisible" through the masterful way in which he arranged his colors on canvas to generate boundaries that could support compelling surface representations. ..."
  • image p107fig03.26 How "drawing directly in color" leads to colored surface representations. Amodal boundary webs control the filling-in of color within these surface representations. See the text for details.
    || color patches on canvas -> [surface color and form, Amodal boundary web]. Amodal boundary web -> surface color and form.
  • image p108fig03.27 Matisse
  • image p108fig03.28 The watercolor illusion of Baingio Pinna 1987 can be explained using spatial competition between like-oriented boundary signals. This occurs at what I have called the First Competitive Stage. This is one stage in the brain
  • image p109fig03.29 The 3D percepts that are generated by chiaroscuro and trompe l'oeil ...
  • image p109fig03.30 The triptych of Jo Baer, called Primary Light Group: Red, Green, and Blue 1964-1965, generates watercolor illusion percepts which, when displayed side by side in a museum, create a striking impression.
  • image p110fig03.31 Henry Hensche
  • image p110fig03.32 Claude Monet
  • image p112fig03.33 Various ways that spatial gradients in boundary webs can cause self-luminous percepts. See the text for details.
    || Boundary web gradient can cause self luminosity. Similar to watercolor illusion. Gloss by attached highlight (Beck, Prazdny 1981), glare. (Bressan 2001) Double brilliant illusion, (Grossberg, Hong 2004) simulation. p111c2h0.5 "... This effect may be explained as the result of the boundary webs that are generated in response to the luminance gradients and how they control the filling-in of lightness within themselves and abutting regions. ... Due to the mutually inhibitory interactions across the boundaries that comprise these boundary webs, more lightness can spread into the central square as the steepness of the boundary gradients increases. ...".
  • image p113fig03.35 The Highest Luminance As White (HLAW) rule of (Hans Wallach 1948) works in some cases (top row) but not others (bottom row).
  • image p113fig03.36 The Blurred Highest Luminance As White (BHLAW) rule that I developed with my PhD student, Simon Hong, works in cases where the rule of Hans Wallach fails, as can be seen by comparing the simulation in Figure 3.35 with the one in this figure.
    || Blurred Highest Luminance As White (BHLAW) rule (Grossberg, Hong 2004, 2006). Spatial integration (blurring) adds spatial context to lightness perception.
  • image p114fig03.37 How the Blurred Highest Luminance as White rule sometimes normalizes the highest luminance to white (left panel) but at other times normalizes it to be self-luminous (right panel). See the text for details.
    || perceived reflectance vs cross-section of visual field. [white level, anchored lightness, self-luminous*, BHLAW]. *self-luminous only when conditions are right.
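The BHLAW idea lends itself to a toy simulation. This is my own drastic simplification (a box blur and simple max-anchoring), not the published Grossberg-Hong model:

```python
import numpy as np

# Blurred Highest Luminance As White, schematically: blur the luminance
# profile to add spatial context, then anchor lightness so that the
# highest BLURRED luminance maps to white (1.0). A spatially extended
# bright region anchors to white, while a thin bright sliver exceeds
# the white level after anchoring, i.e. it is signaled as self-luminous.

def bhlaw_lightness(luminance, kernel_width=5, white=1.0):
    kernel = np.ones(kernel_width) / kernel_width        # crude box blur
    blurred = np.convolve(luminance, kernel, mode="same")
    return white * luminance / blurred.max()             # anchor blurred max

broad = np.array([0.2] * 10 + [1.0] * 10 + [0.2] * 10)   # wide bright patch
narrow = np.array([0.2] * 14 + [1.0] * 2 + [0.2] * 14)   # thin bright sliver
# broad anchors at white; narrow's anchored lightness exceeds white.
```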
  • image p114fig03.38 Four color-field spray paintings of Jules Olitski. The text explains why they generate surfaces percepts with such ambiguous depth.
    || Jules and his friends (1967), Lysander-1 (1970), Instant Loveland (1968), Comprehensive Dream (1965). p114c2h0.4 "... it is impossible to visually perceive discrete colored units within the boundary webs in Olitski's ..."
  • image p115fig03.39 Two of Gene Davis
  • image p116fig03.40 A combination of T-junctions and perspective cues can create a strong percept of depth in response to 2D images, with a famous example being Leonardo da Vinci
  • image p117fig03.41 End gaps, or small breaks or weakenings of boundaries, can form where a stronger boundary abuts a weaker, like-oriented, boundary, as occurs where black boundaries touch red boundaries in the neon color spreading image of Figure 3.11.
    || Boundary contours - lower contrast boundary signals are weakened. feature contours- no inhibition, feature signals survive and spread. MP -> [BCS, FCS]. BCS -> FCS.
  • image p117fig03.42 Two paintings by Frank Stella. See the text for details.
    || Firuzabad (top row) ... and Khurasan Gate (variation) (bottom row). p117c1h0.75 "... The luminance and color structure within a painting affects how it groups and stratifies the figures within it. These processes, in turn, affect the formation of attentional shrouds that organize how spatial attention is allocated as we view them. ..." "... Stella wrote that Firuzabad is a good example of looking for stability and trying to create as much instability as possible. ..."
  • image p120fig03.43 Four paintings by Monet of the Rouen cathedral under different lighting conditions (top row) and their monochromatic versions (bottom row). See the text for details.
    || p119c2h0.25 "... Monet uses nearby colors that are nearly equiluminant, and sharp, high-contrast luminance defined edges are sparse. He hereby creates weaker boundary signals within and between the parts of many forms, and stronger boundary signals between the forms. This combination facilitates color spreading within the forms and better separation of brightness and color differences between forms. ... The grayscale versions of these paintings demonstrate the near equiluminance of the brushstrokes within forms, and places in which brightness and color differences significantly influence the groupings that differentiate between forms, including the differentiation between the cathedral and the sky. ..."
  • image p120fig03.44 The Rouen cathedral at sunset generates very different boundary webs than it does in full sunlight, as illustrated by Figure 3.45.
    || Rouen Cathedral at sunset (Monet 1892-1894).
    • Lighting almost equiluminant
    • Most boundaries are thus caused by color differences, not luminance differences
    • Fine architectural details are obscured, leading to...
    • Coarser and more uniform boundary webs, so...
    • Less depth in the painting.
  • image p121fig03.45 The Rouen cathedral in full sunlight.
    || Rouen Cathedral full sunlight (Monet 1892-1894).
    • Lighting is strongly non-uniform across most of the painting
    • Strong boundaries due to both luminance and color differences
    • Fine architectural details are much clearer, leading to...
    • Finer and more non-uniform boundary webs, so...
    • Much more detail and depth
  • image p121fig03.46 The Rouen cathedral in full sunlight contains T-Junctions that are not salient in the painting of it at sunset. These are among the painting
  • image p123fig04.01 A classical example of how boundaries are barriers to filling-in.
    || Combining stabilized images with filling-in (Krauskopf 1963, Yarbus 1967). Image: Stabilize these boundaries with suction cup attached to retina or electronic feedback circuit. Percept: A visible effect of an invisible cause!
  • image p124fig04.02 The vertical cusp of lesser and greater illuminance is the same in both images, but the one on the left prevents brightness from flowing around it by creating closed boundaries that tightly surround the cusp.
  • image p126fig04.03 A McCann Mondrian is an excellent display with which to illustrate how our brains discount the illuminant to compute the "real" colors of objects. See the text for details.
    || Color constancy: compute ratios. McCann Mondrian. Biological advantage: never see in bright light, eg tropical fish
    Discount the illuminant | Compute lightness
    Different colors seen from the same spectrum
    ... similar to those seen in white light
    Physical basis: reflectance RATIOS!
  • image p128fig04.04 When a gradient of light illuminates a McCann Mondrian, there is a jump in the total light that is reflected at nearby positions where the reflectances of the patches change.
    || Compute reflectance changes at contours. Fill-in illuminant-discounted surface colors.
    left | right
    I + ε | I - ε
    A*(I + ε) | B*(I - ε)
    A*(I + ε)/(B*(I - ε)) - 1 ≈ A/B - 1 for small ε: the contrast at the contour depends only on the reflectance ratio, not the illuminant I
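The ratio computation above is easy to verify on a toy 1D "Mondrian" (the patch reflectances and illumination gradient below are my own choices):

```python
import numpy as np

# Luminance = reflectance * illumination. Within a patch the illumination
# gradient changes luminance smoothly, but at a patch border the ratio of
# adjacent luminances is approximately the reflectance ratio A/B,
# independent of the (slowly varying) illumination level there.

reflectance = np.repeat([0.2, 0.8, 0.4], 50)      # three uniform patches
illumination = np.linspace(1.0, 10.0, 150)        # strong 10x gradient
luminance = reflectance * illumination

ratios = luminance[1:] / luminance[:-1]           # adjacent-pixel ratios
# Within patches the ratios stay ~1; at the borders (indices 49, 99) they
# jump to ~0.8/0.2 = 4 and ~0.4/0.8 = 0.5, despite the 10x gradient.
```

Computing contrasts only at contours, then filling-in between them, is what lets the network recover reflectances while discounting the illuminant.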
  • image p129fig04.05 Multiple-scale balanced competition chooses color contours where the reflectance of the patches change. These color contours discount the illuminant.
    || Compute reflectance changes at contours. Fill-in illuminant-discounted surface colors. Discount illuminant: compute color contours.
  • image p129fig04.06 Filling-in of color contours restores a surface percept with colors that substantially discount the illuminant.
    || Compute reflectance changes at contours. Fill-in illuminant-discounted surface colors. Fill-in surface color: hierarchical resolution of uncertainty.
  • image p130fig04.07 Simulation of brightness constancy under uniform illumination.
    || Simulation of brightness constancy (Grossberg & Todorovic 1988). Uniform illumination. [stimulus (S), feature (F), boundary (B), output]. B -> F -> S -> B: Veridical! Boundary peaks are spatially narrower than feature peaks.
  • image p131fig04.08 Simulation of brightness constancy under an illumination gradient. Note that the feature contour pattern (F) is the same in both cases, so too is the boundary contour (B) pattern that is derived from it, and the final filled-in surface.
    || Simulation of brightness constancy. Discount the illuminant. [stimulus (S), feature (F), boundary (B), output]. B -> F -> S -> B: not veridical, but useful! Ratio-sensitive feature contours (F).
  • image p131fig04.09 Simulation of brightness contrast
    || Simulation of brightness contrast. [stimulus (S), feature (F), boundary (B), output].
  • image p132fig04.10 Simulation of brightness assimilation. Note how the equal steps on the left and right sides of the luminance profile are transformed into different brightness levels.
    || Simulation of brightness assimilation. [stimulus (S), feature (F), boundary (B), output].
  • image p132fig04.11 Simulations of a double step (left panel) and the Craik-O'Brien-Cornsweet Effect (right panel).
  • image p133fig04.12 Simulation of the 2D COCE.
    || (Todorovic, Grossberg 1988). p132c2h0.6 "... 2D Craik-O'Brien-Cornsweet Effect ..."
  • image p134fig04.13 Contrast constancy shows how the relative luminances when a picture is viewed in an illumination gradient can even be reversed to restore the correct reflectances due to discounting the illuminant.
  • image p134fig04.14 The kinds of displays that Michael Paradiso and Ken Nakayama used to catch filling-in "in the act" and which Karl Arrington then simulated using the Grossberg and Todorovic 1988 model.
    || Experiments on filling-in. Catching "filling-in" in the act (Paradiso, Nakayama 1991). (Arrington 1994 Vision Research 34, 3371-3387) simulated these data using the model of Grossberg and Todorovic 1988.
  • image p138fig04.15 Simple cells are oriented contrast detectors, not edge detectors.
    || From oriented filtering to grouping and boundary completion (Hubel, Wiesel 1968). Oriented receptive fields: SIMPLE CELLS. Sensitive to: orientation, [amount, direction] of contrast, spatial scale. Oriented local contrast detectors, not edge detectors!
  • image p139fig04.16 The simplest way to realize an odd simple cell receptive field and firing threshold.
    || "Simplest" simple cell model. need more complexity for processing natural scenes. Difference-of-Gaussian or Gabor filter (J. Daugman, D. Pollen...). Output signal vs cell activity. Threshold linear signal, half-wave rectification.
  • image p140fig04.17 Complex cells pool inputs from simple cells that are sensitive to opposite contrast polarities. Complex cells hereby become contrast invariant, and can respond to contrasts of either polarity.
    || Complex cells: pool signals from like-oriented simple cells of opposite contrast polarity at the same position. They are "insensitive to contrast polarity". Half-wave rectification of inputs from simple cells.
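A minimal 1D sketch of the simple-to-complex chain in Figures 4.16 and 4.17 (the kernel and threshold are mine; real simple cells need 2D oriented Gabor-like filters, as the figure notes):

```python
import numpy as np

# A "simplest" odd simple cell: a local [-1, +1] contrast detector
# followed by a threshold-linear, half-wave rectified output signal.
# A complex cell pools two like-oriented simple cells of opposite
# contrast polarity, so it responds to edges of either polarity.

def simple_cell(signal, polarity=+1, threshold=0.1):
    resp = polarity * np.diff(signal)         # oriented local contrast
    return np.maximum(resp - threshold, 0.0)  # half-wave rectification

def complex_cell(signal, threshold=0.1):
    return (simple_cell(signal, +1, threshold) +
            simple_cell(signal, -1, threshold))  # polarity-pooled

dark_to_light = np.array([0.0, 0.0, 1.0, 1.0])
light_to_dark = dark_to_light[::-1]
# The +1-polarity simple cell fires only for the dark-to-light edge,
# but the complex cell fires for both edge polarities.
```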
  • image p141fig04.18 The images formed on the two retinas in response to a single object in the world are displaced by different amounts with respect to their foveas. This binocular disparity is a powerful cue for determining the depth of the object from an observer.
    || Binocular Disparity. Binocular disparities are used in the brain to reconstruct depth from 2D retinal inputs, for relatively near objects.
  • image p141fig04.19 A laminar cortical circuit for computing binocular disparities in layer 3B of V1 at binocular simple cells. These cells add positionally disparate inputs from like-polarity monocular simple cells (layer 4 of V1). Binocular simple cells at each position that are sensitive to opposite polarities then add their outputs at complex cells in layer 2/3. Chapter 10 will explain how these laminar circuits work in greater detail.
    || Laminar cortical circuit for complex cells. [left, right] eye.
    V1 layer | description
    2/3A | complex cells
    3B | binocular simple cells
    4 | monocular simple cells
  • image p142fig04.20 A Glass pattern and a reverse-contrast Glass pattern give rise to different boundary groupings because simple cells can only pool signals from like-polarity visual features. See the text for details.
  • image p143fig04.21 Oriented simple cells can respond at the ends of thick enough bars, but not at the ends of thin enough lines. See the text for an explanation of why this is true, and its implications for visual system design.
    || Hierarchical resolution of uncertainty. For a given field size. Different responses occur at bar ends and line ends. For a thin line no detector perpendicular to line end can respond enough to close the boundary there. Network activity.
  • image p144fig04.22 Computer simulation of how simple and complex cells respond to the end of a line (gray region) that is thin enough relative to the receptive field size (thick dashed region in the left panel). These cells cannot detect the line end, as indicated by the lack of responses there in the left panel (oriented short lines denote the cells
  • image p145fig04.23 If end gaps were not closed by end cuts, then color would flow out of every line end!
    || A perceptual disaster in the feature contour system. feature contour, line boundary. input -> [boundary, surface]. boundary -> surface. Color would flow out of every line end! as it does during neon color spreading.
  • image p145fig04.24 A brain
  • image p146fig04.25 Networks of simple, complex, and hypercomplex cells can create end cuts as an example of hierarchical resolution of uncertainty. See the text for details.
    || How are end cuts created? (Grossberg 1984) Two stages of short-range competition. 1st stage: Simple cells -> complex cells -> hypercomplex - endstopped complex. First competitive stage- across position, same orientation; Second competitive stage- same position, across orientation. -> cooperation.
  • image p148fig04.26 End cuts are formed during neon color spreading in the same way that they are formed at line ends.
    || End cut during neon color spreading.
    FIRST competitive stage | SECOND competitive stage
    within orientation | across orientation
    across position | within position
    to generate end cuts.
  • image p149fig04.27 Bipole cells can form boundaries that interpolate end cuts, and use their cooperative-competitive interactions to choose the boundary groupings that have the most support from them.
    || Bipole cells: boundary completion. long-range cooperation & short-range inhibition: complete winning boundary groupings and suppress weaker boundaries.
  • image p150fig04.28 Bipole cells have two branches (A and B), or poles, in their receptive fields. They help to carry out long-range boundary completion.
    || Bipole property. Boundary completion via long-range cooperation. Completing boundaries inwardly between pairs or great numbers of inducers in an oriented way. fuzzy "AND" gate.
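The bipole "fuzzy AND gate" property can be sketched in a few lines (thresholding scheme and values are mine, for illustration):

```python
# A bipole cell sums evidence in its two oriented receptive-field
# branches (poles A and B) and fires only if BOTH poles receive enough
# collinear support. Boundaries therefore complete inwardly between
# pairs of inducers, but do not grow outwardly from a single inducer.

def bipole(pole_a_inputs, pole_b_inputs, threshold=0.5):
    a = sum(pole_a_inputs)
    b = sum(pole_b_inputs)
    if a > threshold and b > threshold:   # AND-gate: both poles needed
        return a + b                      # graded ("fuzzy") output
    return 0.0

both = bipole([0.8], [0.7])    # inducers on both sides -> fires
left = bipole([1.5], [0.0])    # one inducer only -> silent, even if strong
```

This is exactly the response pattern in the Von der Heydt et al. probe table below: YES only when stimuli flank the probe on both sides.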
  • image p151fig04.29 Experimental evidence of bipole cells in cortical area V2 was reported by Von der Heydt, Peterhans, and Baumgartner (1984).
    || Bipoles: first neurophysiological evidence (V2) (von der Heydt, Peterhans, Baumgartner 1984, Peterhans, von der Heydt 1988). (Grossberg 1984) prediction.
    Ordering: Stimulus (S), probe location (*) over cells in V2 -> response?
    ...(S)*...                    YES
    ...*...(S)                    NO
    (S)...*...                    NO
    (S)...*...(S)                 YES
    (S)...*... (more contrast)    NO
    (S)...*.....(S)               YES
    Evidence for receptive field.
  • image p151fig04.30 Anatomical evidence for long-range horizontal connections has also been reported, as illustrated by the example above from (Bosking etal 1997).
    || Anatomy: horizontal connections (V1) (Bosking etal 1997). tree shrew. [10, 20]*[20, 10, 0, -10, -20] (degrees).
  • image p152fig04.31 The predicted bipole cell receptive field (upper left corner) has been supported by both neurophysiological data and psychophysical data, and used in various forms by many modelers. See the text for details.
    || Bipoles through the ages. (Grossberg 1984; Grossberg, Mingolla 1985). (Field, Hayes, Hess 1993) "association field". (Heitger, von der Heydt 1993). (Williams, Jacobs 1997). cf. "relatability" geometric constraints on which contours get to group (Kellman & Shipley 1991). Also "tensor voting" (Ullman, Zucker, Mumford, Guy, Medioni, ...).
  • image p153fig04.32 The double filter network embodies simple, complex, and hypercomplex (or endstopped complex) cells. It feeds into a network of bipole cells that can complete boundaries when it properly interacts with the double filter.
    || Double filter and grouping network. Cells : simple -> complex -> hypercomplex (endstopping) -> bipole
    Grouping network: bipole cells
    Double filter: simple cells -> complex cells -> hypercomplex cells (endstopping)
  • image p156fig04.33 A tripartite texture (top row) and two bipartite textures (bottom row) that illustrate how emergent boundary groupings can segregate textured regions from one another.
  • image p157fig04.34 Some textures that were simulated with mixed success by the complex channels model. In particular, the model gets the wrong answer for the textures in (g) and (i). The Boundary Contour System model of Figure 4.32, which includes both a double filter and a bipole grouping network, simulates the observed results.
  • image p159fig04.35 Spatial impenetrability prevents grouping between the pac-men figures in the left figure, but not in the figure on the right.
    || p158c2h0.75 "... In the image shown in the left panel, the horizontal boundaries of the background squares interfere with vertical boundary completion by vertically-oriented bipole cells, again by spatial impenetrability. In contrast, the vertical boundaries of the background squares are collinear with the vertical pac-man inducers, thereby supporting formation of the square boundaries. Finer aspects of these percepts, such as why the square ... (right panel) appears to lie in front of four partially occluded circular discs, as regularly occurs when the Kanizsa square can form (eg Figure 3.3), can be understood using FACADE theory mechanisms that will be shown below to explain many figure-ground percepts using natural extensions to the three dimensional world of boundary and surface mechanisms that we have already discussed. ..."
  • image p159fig04.36 Graffiti art by Banksy exploits properties of amodal boundary completion and spatial impenetrability.
    || p159c1h0.75 perceptual psychologist Nava Rubin "... When the wall is smooth, Banksy leaves the regions previously covered by stencil unpainted, relying on observers ..."
  • image p161fig04.37 Kanizsa squares that form either collinearly to their inducers (left panel) or perpendicular to them (right panel) confirm predictions of the BCS boundary completion model.
    || Analog-sensitive boundary completion. contour strength vs Kanizsa square image. Increases with "support ratio" (Shipley, Kellman 1992). Inverted-U (Lesher, Mingolla 1993; cf Soriano, Spillmann, Bach 1994)(shifted gratings). p370h0.6 BCS = Boundary Contour System, FCS = Feature Contour System. p161c1h0.85 "... As predicted by the BCS, they found an Inverted-U in contour strength as a function of line density. ... This effect may be explained by the action of the short-range competition that occurs before the stage of long-range cooperative grouping by bipole cells (Figure 4.32). It is thus another example of the balance between cooperative and competitive mechanisms. ..."
  • image p162fig04.38 How long-range cooperation among bipole cells and short-range competition by hypercomplex cells work together to generate the inverted-U in boundary strength that is found in the data of Figure 4.37 (right panel).
    || Cooperation and competition during grouping.
    few lines: wide spacing, inputs outside spatial range of competition, more inputs cause higher bipole activity
    more lines: narrower spacing, slightly weakens net input to bipoles from each inducer
    increasing line density: causes inhibition to reduce net total input to bipoles
  • image p163fig04.39 A schematic of the LAMINART model that explains key aspects of laminar visual cortical anatomy and dynamics. LGN -> V1 [6, 4, 2/3] -> V2 [6, 4, 2/3]
    || p163c1h0.6 "... The first article about laminar computing ... proposed how the laminar cortical model could process 2D pictures using bottom-up filtering and horizontal bipole grouping interactions (Grossberg, Mingolla, Ross 1997). In 1999, I was able to extend the model to also include top-down circuits for expectation and attention (Grossberg 1999)(right panel). Such a synthesis of laminar bottom-up, horizontal, and top-down circuits is characteristic of the cerebral cortex (left panel). I called it LAMINART because it began to show how properties of Adaptive Resonance Theory, or ART, notably the ART prediction about how top-down expectations and attention work, are realized by identical cortical cells and circuits. You can immediately see from the schematic laminar circuit diagram ... (right panel) that circuits in V2 seem to repeat circuits in V1, albeit with a larger spatial scale, despite the fact that V1 and V2 carry out different functions. How this anatomical similarity can coexist with functional diversity will be clarified in subsequent sections and chapters. It enables different kinds of biological intelligence to communicate seamlessly while carrying out their different psychological functions. ..."
  • image p164fig04.40 The Koffka-Benussi ring. See the text for details.
    || p164c2h0.25 "... [left image] The luminance of the ring is intermediate between the luminances of the two background regions. Its perceived brightness is also between the brightnesses of the two background regions, and appears to be uniform throughout. The right image differs from the left only in that a vertical line divides the two halves of the ring where it intersects the two halves in the background. Although the luminance of the ring is still uniform throughout, the two halves of the ring now have noticeably different brightnesses, with the left half of the ring looking darker than the right half. How can drawing a line have such a profound effect on the brightnesses of surface positions that are so far away from the line? ..."
  • image p165fig04.41 The Kanizsa-Minguzzi ring. See the text for details.
    || p165c1h0.6 "... (left panel), the annulus is divided by two line segments into annular sectors of unequal area. Careful viewing shows that the smaller sector looks a little brighter than the larger one. (Kanizsa, Minguzzi 1986) noted that "this unexpected effect is not easily explained. In fact, it cannot be accounted for by any simple psychological mechanism such as lateral inhibition or frequency filtering. Furthermore, it does not seem obvious to invoke organizational factors, like figural belongingness or figure-ground articulation."". p165c2h0.35 "... (Grossberg, Todorovic 1988). Our main claim is that the two radial lines play two roles, one in the formation of boundaries with which to contain the filling-in process, and the other as a source of feature contour signals that are filled-in within the annular regions to create a surface brightness percept. ..."
  • image p166fig04.42 Computer simulation of Kanizsa-Minguzzi ring percept. See the text for details.
  • image p167fig04.43 (a) How bipole cells cause end cuts. (b) The Necker cube generates a bistable percept of two 3D parallelopipeds. (c) Focusing spatial attention on one of the disks makes it look both nearer and darker, as (Tse 1995) noted and (Grossberg, Yazdanbakhsh 1995) explained.
    || T-junction sensitivity. image -> bipole cells -> boundary. (+) long-range cooperation, (-) short-range competition.
  • image p168fig04.44 Macrocircuit of the main boundary and surface formation stages that take place from the lateral geniculate nucleus, or LGN, through cortical areas [V1, V2, V4]. See the text for details.
  • image p168fig04.45 How ON and OFF feature contour (FC) activities give rise to filled-in surface regions when they are adjacent to a like-oriented boundary, but not otherwise.
  • image p170fig04.46 Surface regions can fill-in using feature contour inputs (+ and - signs) if they are adjacent to, and collinear with, boundary contour inputs (solid) line, as in (a), but not otherwise, as in (b).
  • image p170fig04.47 A double-opponent network processes output signals from opponent ON and OFF Filling-In DOmains, or FIDOs.
    || OFF FIDO -> shunting networks -> ON FIDO -> shunting networks -> opponent interaction -> FIDO outputs
  • image p171fig04.48 How closed boundaries contain filling-in of feature contour signals, whereas open boundaries allow color to spread to both sides of the boundary.
    || Before filling-in: boundary contour, illuminant-discounted feature contour; After filling-in: no gap, gap
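The containment property in this caption can be sketched as diffusion on a 1D ring of cells that is blocked at boundary positions. This is a minimal illustrative simulation (array sizes, the rate, and the permeability rule are my assumptions; the actual model uses shunting membrane dynamics), showing that a closed boundary contains the filled-in activity while a gap lets it spill out.

```python
# 1D sketch of filling-in: feature-contour activity diffuses between neighboring
# FIDO cells, but diffusion is blocked wherever a boundary cell sits.

import numpy as np

def fill_in(feature, boundary, steps=500, rate=0.2):
    act = feature.astype(float).copy()
    for _ in range(steps):
        left, right = np.roll(act, 1), np.roll(act, -1)
        perm_l = ~(boundary | np.roll(boundary, 1))    # flow allowed from left neighbor
        perm_r = ~(boundary | np.roll(boundary, -1))   # flow allowed from right neighbor
        act = act + rate * (perm_l * (left - act) + perm_r * (right - act))
    return act

n = 20
feature = np.zeros(n); feature[10] = 1.0               # feature contour inside the region
closed = np.zeros(n, bool); closed[[5, 15]] = True     # boundaries on both sides: closed
gapped = np.zeros(n, bool); gapped[5] = True           # boundary on one side only: gap

assert fill_in(feature, closed)[0] == 0.0              # color contained: no spillover
assert fill_in(feature, gapped)[0] > 0.0               # color spreads out through the gap
```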
  • image p171fig04.49 An example of DaVinci stereopsis in which the left eye sees more of the wall between A and C than the right eye does. The region between B and C is seen only by the left eye because the nearer wall between C and D occludes it from the right eye view.
  • image p173fig04.50 This figure illustrates how a closed boundary can be formed in a prescribed depth due to addition of binocular and monocular boundaries, but not at other depths.
    || How are closed 3D boundaries formed? V1 Binocular, V2 boundary, V2 surface; Prediction: monocular and horizontal boundaries are added to ALL binocular boundaries along the line of sight. Regions that are surrounded by a CLOSED boundary can depth-selectively contain filling-in of lightness and colored signals.
  • image p174fig04.51 The same feedback circuit that ensures complementary consistency between boundaries and surfaces also, automatically, initiates figure-ground separation! See the text for details.
    || before feedback: [V1 -> V2 pale stripe -> V2 thin stripe, "attention pointers" (Cavanagh etal 2010)]; after feedback: [V1 + V2 thin stripe] -> V2 pale stripe via contrast sensitive [excitation, inhibition] for depths [1, 2] -> object recognition
  • image p174fig04.52 An example of how the 3D LAMINART model can transform the two monocular images of the random dot stereogram in the top row into the three depth-separated surface representations in the bottom row.
    || Stereogram surface percepts: surface lightnesses are segregated in depth (Fang and Grossberg 2009). [left, right] inputs, [far, fixation, near] planes. Contrast with algorithms that just compute disparity matches and let computer code build the surface, eg (Marr, Poggio, etal 1974).
  • image p176fig04.53 The on-center off-surround network within position and across depth helps to explain why brighter Kanizsa squares look closer.
    || inhibition vs. depth. p176c1h0.25 "... to qualitatively understand how this example of proximity-luminance covariance works. It follows directly from the boundary pruning by surface contour feedback signals (Figure 4.51) that achieves complementary consistency and initiates figure-ground perception. ...". p176c1h0.45 "... these inhibitory signals are part of an off-surround network whose strength decreases as the depth difference increases between the surface that generates the signal and its recipient boundaries. ...". p176c1h0.8 "... Within FACADE theory, the perceived depth of a surface is controlled by the boundaries that act as its filling-in generators and barriers (Figure 3.22), since these boundaries select the depth-selective FIDOs within which filling-in can occur, and thereby achieve surface capture. These boundaries, in turn, are themselves strengthened after surface-to-boundary contour feedback eliminates redundant boundaries that cannot support successful filling-in (Figure 4.51). These surface contour feedback signals have precisely the properties that are needed to explain why brighter Kanizsa squares look closer! ..."
  • image p178fig04.54 Initial steps in figure-ground separation. See the text for details.
    ||
  • topLeft: repeats the image in Figure 1.3
    topRight: shows again the long-range cooperation and short-range competition that are controlled by the bipole grouping process (Figure 4.43a middle panel)
    bottomLeft: shows the end gaps that are caused by these bipole grouping mechanisms
    bottomRight: shows how surface filling-in is contained within the closed horizontal rectangular boundary, but spills out of the end gaps formed in the other two rectangles
  • image p178fig04.55 Amodal completion of boundaries and surfaces in V2.
    || Separated V2 boundaries: near, far (amodal boundary completion); Separated V2 surfaces: ?horizontal, vertical? (amodal surface filling-in).
  • image p179fig04.56 Final steps in generating a visible, figure-ground separated, 3D surface representation in V4 of the unoccluded parts of opaque surfaces.
    || Visible surface perception.
    Boundary enrichment: near | far | asymmetry between near & far
    V4: horizontal rectangle | horizontal & vertical rectangles | cannot use these (overlapping?) boundaries for occluded object recognition
    V2: horizontal rectangle | vertical rectangle | use these boundaries for occluded object recognition
    Visible surface filling-in: filling-in of entire vertical rectangle | partial filling-in of horizontal rectangle | visible percept of unoccluded [vertical] surface
  • image p181fig04.57 Percepts of unimodal and bistable transparency (top row) as well as of a flat 2D surface (bottom row, left column) can be induced just by changing the relative contrasts in an image with a fixed geometry.
    || X junction
  • image p182fig04.58 LAMINART model processing stages that are sufficient to explain many percepts of transparency, including those summarized in Figure 4.57.
    || [left, right] eye, [LGN, V1 [6, 4, 3B, 2/3 A], V2 [4, 2/3]], [mo, bi]nocular cart [simple, complex] cells, [excita, inhibi]tory cart [connection, cell]s.
  • image p186fig05.01 Humans and other autonomous adaptive intelligent agents need to be able to learn both many-to-one and one-to-many maps.
    || Learn many-to-one (compression, naming) and one-to-many (expert knowledge) maps
  • image p186fig05.02 Learning a many-to-one map from multiple visual fonts of a letter to the letter
  • image p186fig05.03 Many-to-one maps can learn a huge variety of kinds of predictive information.
    || Many-to-one map, two stage compression: IF-THEN rules: [symptom, test, treatment]s; length of stay in hospital
  • image p189fig05.04 The hippocampus is one of several brain regions that are important in learning and remembering about objects and events that we experience throughout life. The book will describe several hippocampal processes that contribute to this achievement in different ways.
    || hypothalamic nuclei, amygdala, hippocampus, cingulate gyrus, corpus callosum, thalamus
  • image p192fig05.05 ON and OFF cells in the LGN respond differently to the sides and ends of lines.
    || [ON, OFF]-center, [OFF, ON]-surround (respectively). OFF-center cells maximum response at line end (interior), ON-center cells maximum response along sides (exterior)
  • image p192fig05.06 Bottom-up and top-down circuits between the LGN and cortical area V1. The top-down circuits obey the ART Matching Rule for matching with bottom-up input patterns and focussing attention on expected critical features.
    || Model V1-LGN circuits, version [1, 2]. retina -> LGN relay cells -> interneurons -> cortex [simple, endstopped] cells -> cortex complex cells
  • image p193fig05.07 A more detailed description of the connections between retinal ganglion cells, the LGN, and V1.
    ||
  • image p193fig05.08 The patterns of LGN activation and inhibition on the sides and ends of a line without the top-down feedback (A) and with it (C). The top-down distribution of excitation (+) and inhibition (-) are shown in (B).
    ||
  • image p194fig05.09 A computer simulation of the percept (D) that is generated by feature contours (B) and boundary contours (C) in response to an Ehrenstein disk stimulus (A).
    ||
  • image p198fig05.10 A competitive learning circuit learns to transform distributed feature patterns into selective responses of recognition categories.
    || Competitive learning and Self-Organized Maps (SOMs). input patterns -> feature level (F1) -> adaptive filter (T=ZS) ->
  • image p199fig05.11 Instar learning enables a bottom-up adaptive filter to become selectively tuned to particular feature patterns. Such pattern learning needs adaptive weights that can either increase or decrease to match the featural activations that they filter.
    || Instar learning STM->LTM: need both increases and decreases in strength for the LTM pattern to learn the STM pattern
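The competitive-learning and instar captions above condense into a few lines of code. A minimal sketch (the network sizes, learning rate, and random initialization are my illustrative choices): the adaptive filter T = Z.S picks a winning category, and only the winner's bottom-up weights track the input, increasing and decreasing as needed until the LTM vector matches the STM pattern.

```python
# Minimal competitive-learning / SOM sketch with instar updates (illustrative).

import numpy as np

rng = np.random.default_rng(0)
W = rng.uniform(0.4, 0.6, size=(3, 4))     # 3 category cells x 4 feature cells

def present(x, W, lr=0.5):
    j = int(np.argmax(W @ x))              # adaptive filter T = Z.S, winner-take-all at F2
    W[j] += lr * (x - W[j])                # instar: LTM tracks the sampled STM pattern
    return j

x = np.array([1.0, 0.0, 1.0, 0.0])
for _ in range(10):
    j = present(x, W)

assert np.allclose(W[j], x, atol=1e-2)     # winner's weights learned the pattern
```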
  • image p200fig05.12 The duality of the outstar and instar networks is evident when they are drawn as above.
    ||
  • image p200fig05.13 Instar and outstar learning are often used to learn the adaptive weights in the bottom-up filters and top-down expectations that occur in ART. The ART Matching Rule for object attention enables top-down expectations to select, amplify, and synchronize expected patterns of critical features, while suppressing unexpected features.
    || Expectations focus attention: feature pattern (STM), Bottom-Up adaptive filter (LTM), Category (STM), competition, Top-Down expectation (LTM); ART Matching Rule: STM before top-down matching, STM after top-down matching (attention!)
  • image p200fig05.14 Outstar learning enables individual sampling cells to learn distributed spatial patterns of activation at the network of cells that they sample. Again, both increases and decreases in LTM traces must be possible to enable them to match the activity pattern at the sampled cells.
    || Outstar learning, need both increases and decreases in ????
  • image p201fig05.15 An outstar can learn an arbitrary spatial pattern of activation at its sampled nodes, or cells. The net pattern that is learned is a time average of all the patterns that are active at the sampled nodes when the sampling node is active.
    || Spatial learning pattern, outstar learning.
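The outstar captions above can be sketched the same way. A minimal illustrative model (learning rate and patterns are my choices): while the sampling cell is active, its outgoing weights track the activity at the sampled cells, so the learned weights approach a time-average of all patterns present during sampling, with both increases and decreases in the LTM traces.

```python
# Minimal outstar sketch: LTM traces learn a time-average of sampled spatial patterns.

import numpy as np

w = np.zeros(4)                            # outstar LTM traces to 4 sampled cells
patterns = [np.array([1.0, 0.5, 0.0, 0.0]),
            np.array([0.0, 0.5, 1.0, 0.0])]

lr, sampling = 0.1, 1.0                    # sampling-cell activity gates learning
for _ in range(500):
    for p in patterns:
        w += lr * sampling * (p - w)       # traces both increase AND decrease

# learned weights approximate the time-average of the sampled patterns
assert np.allclose(w, np.mean(patterns, axis=0), atol=0.05)
```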
  • image p202fig05.16 In the simplest example of category learning, the category that receives the largest total input from the feature level is chosen, and drives learning in the adaptive weights that abut it. Learning in this "classifying vector", denoted by zi, makes this vector more parallel to the input vector from the feature level that is driving the learning (dashed red arrow).
    || Geometry of choice and learning
  • image p202fig05.17 This figure summarizes the simplest equations whereby the adaptive weights of a winning category learn the input pattern that drove it to win, or more generally a time-average of all the input patterns that succeeded in doing so.
    || Geometry of choice and learning, learning trains the closest LTM vector
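The geometry in this caption can be checked numerically. A minimal sketch (the vectors and learning rate are illustrative assumptions): one learning step moves the winning "classifying vector" z toward the input S, so the angle between them shrinks, i.e. the cosine similarity increases.

```python
# One-step check that learning makes the classifying vector more parallel to the input.

import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

S = np.array([0.8, 0.6, 0.0])              # input vector from the feature level
z = np.array([0.2, 0.1, 0.9])              # winning category's weight vector

before = cosine(z, S)
z = z + 0.3 * (S - z)                      # one step of the winner's learning law
after = cosine(z, S)

assert after > before                      # z has rotated toward S
```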
  • image p205fig05.18 How catastrophic forgetting can occur in a competitive learning or self-organizing map model due to basic properties of competition and associative learning.
    || Learning from pattern sequences, practicing a sequence of spatial patterns can recode all of them! When is learning stable? Input patterns cannot be too dense relative to the number of categories; Either: not too many distributed inputs relative to the number of categories, or not too many input clusters
  • image p207fig05.19 The ART hypothesis testing and learning cycle. See the text for details about how the attentional system and orienting system interact in order to incorporate learning of novel categories into the corpus of already learned categories without causing catastrophic forgetting.
    ||
  • image p211fig05.20 The PN and N200 event-related potentials are computationally complementary events that are computed within the attentional and orienting systems.
    || PN and N200 are complementary waves. PN [top-down, conditionable, specific] match; N200 [bottom-up, unconditionable, nonspecific] mismatch
  • image p211fig05.21 Sequences of P120, N200, and P300 event-related potentials occur during oddball learning EEG experiments under conditions that ART predicted should occur during sequences of mismatch, arousal, and STM reset events, respectively.
    || ERP support for mismatch-mediated reset: event-related potentials: human scalp potentials. ART predicted correlated sequences of P120-N200-P300 Event Related Potentials during oddball learning. P120 mismatch; N200 arousal/novelty; P300 STM reset. Confirmed in (Banquet and Grossberg 1987)
  • image p213fig05.22 Suppose that a category is activated by a very different exemplar than the one whose learning originally established it.
    || By prior learning, X1 at F1 is coded at F2, Suppose that X2 incorrectly activates the same F2 code. How to correct the error? The problem occurs no matter how you define an "error"
  • image p213fig05.23 A category, symbol, or other highly compressed representation cannot determine whether an error has occurred.
    || Compression vs error correction. past vs present. Where is the knowledge that an error was made? Not at F2! The compressed code cannot tell the difference! X2 is at F1 when (green right triangle GRT) is at F2 defines the error. There is a mismatch between X1 and X2 at F1. How does the system know this?
  • image p214fig05.24 Learning of a top-down expectation must occur during bottom-up learning in the adaptive filter in order to be able to match the previously associated feature pattern with the one that is currently active.
    || Learning top-down expectations. When the code (green right triangle GRT) for X1 was learned at F2, GRT learned to read-out X1 at F1. [Bottom-Up, Top-Down] learning
  • image p214fig05.25 The sequence of events whereby a novel input pattern can activate a category which, in turn, reads out its learned top-down expectation to be matched against the input pattern. Error correction thus requires the use of a Match Detector that has properties of the Processing Negativity ERP.
    || How is an error corrected. During bottom-up learning, top-down learning must also occur so that the pattern that is read out top-down can be compared with the pattern that is activated by bottom-up inputs. Match detector: Processing Negativity ERP. 1. top-down, 2. conditionable, 3. specific, 4. match
  • image p214fig05.26 When a big enough mismatch occurs, the orienting system is activated and sends a burst of nonspecific arousal to the category level. This Mismatch Detector has properties of the N200 ERP.
    || Mismatch triggers nonspecific arousal. Mismatch at F1 elicits a nonspecific event at F2. Call this event nonspecific arousal. N200 ERP Naatanen etal: 1. bottom-up, 2. unconditionable, 3. nonspecific, 4. mismatch
  • image p215fig05.27 Every event activates both the attentional system and the orienting system. This text explains why.
    || Attentional and Orienting systems. Every event has a cue (specific) and an arousal (nonspecific) function
  • image p215fig05.28 How a mismatch between bottom-up and top-down input patterns can trigger activation of the orienting system A and, with it, a burst of nonspecific arousal to the category level.
    || Mismatch -> inhibition -> arousal -> reset. BU input orienting arousal, BU+TD mismatch arousal and reset. ART Matching Rule: TD mismatch can suppress a part of F1 STM pattern, F2 is reset if degree of match < vigilance
  • image p220fig05.29 Vigilance is a gain parameter on inputs to the orienting system that regulates whether net excitation from bottom-up inputs or inhibition from activated categories will dominate the orienting system. If excitation wins, then a memory search for a better matching category will occur. If inhibition wins, then the orienting system will remain quiet, thereby enabling resonance and learning to occur.
    || Vigilance control [resonate and learn, reset and search]. ρ is a sensitivity or gain parameter
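The vigilance decision in this caption can be sketched concretely using the Fuzzy-ART-style match ratio |min(x, w)| / |x| as a stand-in for the analog match (the rho values and patterns are illustrative assumptions).

```python
# Vigilance as an orienting-system gain: match ratio vs rho decides resonance or reset.

import numpy as np

def orienting_decision(x, w, rho):
    match = np.minimum(x, w).sum() / x.sum()   # analog match of input vs prototype
    return "resonate and learn" if match >= rho else "reset and search"

x = np.array([1.0, 1.0, 0.0, 0.0])             # bottom-up input
w = np.array([1.0, 0.3, 0.0, 0.0])             # top-down prototype; match ratio = 0.65

assert orienting_decision(x, w, rho=0.5) == "resonate and learn"  # inhibition wins
assert orienting_decision(x, w, rho=0.9) == "reset and search"    # excitation wins
```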
  • image p221fig05.30 When a predictive disconfirmation occurs, vigilance increases enough to drive a search for a more predictive category. If vigilance increases just enough to exceed the analog match between features that survive top-down matching and the entire bottom-up input pattern, then minimax learning occurs. In this case, the minimum amount of category generalization is given up to correct the predictive error.
    || Match tracking realizes minimax learning principle. Given a predictive error, vigilance increases just enough to trigger search and thus sacrifices the minimum generalization to correct the error ... and enables expert knowledge to be incrementally learned. predictive error -> vigilance increase just enough -> minimax learning
  • image p221fig05.31 A system like Fuzzy ARTMAP can learn to associate learned categories in one ART network with learned categories in a second ART network. Because both bottom-up and top-down interactions occur in both networks, a bottom-up input pattern to the first ART network can learn to generate a top-down output pattern from the second ART network.
    || Fuzzy ARTMAP. Match tracking realizes minimax learning principle: vigilance increases to just above the match ratio of prototype / exemplar, thereby triggering search
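Match tracking itself reduces to one line. A minimal sketch (the epsilon and patterns are illustrative assumptions): after a predictive error, vigilance rho is raised just above the mispredicting category's match ratio, sacrificing the minimum generalization needed to force a search for a better category.

```python
# Match tracking: raise rho just above the current match ratio on a predictive error.

import numpy as np

def match_ratio(x, w):
    return float(np.minimum(x, w).sum() / x.sum())

def match_track(x, w, rho, eps=0.001):
    """Raise rho just enough that the current category fails the vigilance test."""
    return max(rho, match_ratio(x, w) + eps)

x = np.array([1.0, 1.0, 0.0])
w = np.array([1.0, 0.5, 0.0])              # prototype of the mispredicting category
rho = match_track(x, w, rho=0.5)           # a predictive disconfirmation occurred

assert match_ratio(x, w) < rho             # category now fails vigilance -> memory search
```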
  • image p224fig05.32 Learning the alphabet with two different levels of vigilance. The learning in column (b) is higher than in column (a), leading to more concrete categories with less abstract prototypes. See the text for details.
    ||
  • image p225fig05.33 Some early ARTMAP benchmark studies. These successes led to the use of ARTMAP, and many variants that we and other groups have developed, in many large-scale applications in engineering and technology that has not abated even today.
    || see Early ARTMAP benchmark studies
  • image p225fig05.34 ARTMAP was successfully used to learn maps of natural terrains with many advantages over those of mapping projects that used AI expert systems. The advantages are so great that many mapping projects started to use this technology.
    || AI expert system - 1 year: field identification of natural regions; derivation of ad hoc rules for each region by expert geographers; correct 80,000 of 250,000 site labels; 230m (site-level) scale. ARTMAP system - 1 day: rapid, automatic, no natural regions or rules; confidence map; 30m (pixel-level) scale can see roads; equal accuracy at test sites
  • image p226fig05.35 I had shown in 1976 how a competitive learning or self-organizing map model could undergo catastrophic forgetting if the input environment was sufficiently dense and nonstationary, as illustrated by Figure 5.18. Later work with Gail Carpenter showed how, if the ART Matching Rule was shut off, repeating just four input patterns in the correct order could also cause catastrophic forgetting by causing superset recoding, as illustrated in Figure 5.36.
    || Code instability input sequences. D C A; B A; B C = ; |D|<|B|<|C|; where |E| is the number of features in the set E. Any set of input vectors that satisfy the above conditions will lead to unstable coding if they are periodically presented in the order ABCAD and the top-down ART Matching Rule is shut off.
  • image p226fig05.36 Column (a) shows catastrophic forgetting when the ART Matching Rule is not operative. It is due to superset recoding. Column (b) shows how category learning quickly stabilizes when the ART Matching Rule is restored.
    || Stable and unstable learning, superset recoding
  • image p228fig05.37 A macrocircuit of the neurotrophic Spectrally Timed ART, or nSTART, model. I developed nSTART with my PhD student Daniel Franklin. It proposes how adaptively timed learning in the hippocampus, bolstered by Brain Derived Neurotrophic Factor, or BDNF, helps to ensure normal memory consolidation.
    || habituative gates, CS, US, Thalamus (sensory cortex, category learning, conditioned reinforcer learning, adaptively timed learning and BDNF), Amygdala (incentive motivation learning), Hippocampus (BDNF), Prefrontal Cortex (attention), Pontine nuclei, Cerebellum (adaptively timed motor learning)
  • image p230fig05.38 The Synchronous Matching ART, or SMART, model includes spiking neurons in a laminar cortical hierarchy. I developed SMART with my PhD student Massimiliano Versace. By unlumping LAMINART to include spiking neurons, finer details of neurodynamics, such as the existence of faster gamma oscillations during good enough matches, and slower beta oscillations during bad enough mismatches, could be shown as emergent properties of network interactions.
    || Second order thalamus -> specific thalamic nucleus -> Thalamic reticular nucleus -> neocortical laminar circuit [6ll, 6l, 5, 2/3, 1] -> Higher order cortex. Similar for First order thalamus -> First order cortex, with interconnection to Second order, nonspecific thalamic nucleus
  • image p231fig05.39 The SMART hypothesis testing and learning cycle predicts that vigilance increases when a mismatch in subcortical regions like the nonspecific thalamus activates the nucleus basalis of Meynert which, in turn, broadcasts a burst of the neurotransmitter acetylcholine, or ACh, to deeper cortical layers. Due to the way in which LAMINART proposes that cortical matching and mismatching occurs, this ACh burst can increase vigilance and thereby trigger a memory search. See the text for details.
    || [BU input, [, non]specific thalamic nucleus, thalamic reticular nucleus, neocortical laminar circuit] cart [Arousal, Reset, Search, Vigilance]
  • image p232fig05.40 Computer simulation of how the SMART model generates (a) gamma oscillations if a good enough match occurs, or (c) beta oscillations if a bad enough match occurs. See the text for details.
    || Brain oscillations during match/mismatch, data, simulation. (a) TD corticothalamic feedback increases synchrony (Sillito etal 1994) (b) Match increases γ oscillations (c) Mismatch increases θ,β oscillations
  • image p232fig05.41 (a)-(c). The sequence of interlaminar events that SMART predicts during a mismatch reset. (d) Some of the compatible neurophysiological data.
    || Mismatch causes layer 5 dendritic spikes that trigger reset. (a) Arousal causes increase in nonspecific thalamic nuclei firing rate and layer 5 dendritic and later somatic spikes (Larkum and Zhu 2002, Williams and Stuart 1999) (b) Layer 5 spikes reach layer 4 via layer 6i and inhibitory neurons (Lund and Boothe 1975, Gilbert and Wiesel 1979) (c) habituative neurotransmitters in layer 6i shift the balance of active cells in layer 4 (Grossberg 1972, 1976) (d) Dendritic stimulation fires layer 5 (Larkum and Zhu 2002) stimulation apical dendrites of nonspecific thalamus
  • image p233fig05.42 Mismatch-induced beta oscillations have been reported in at least three parts of the brain: V1, V4, and hippocampus. Although there may be other reasons for beta oscillations in the brain, those that are caused by a mismatch should be studied in concert with the gamma oscillations that occur during a good enough match. See the text for details.
    || Is there evidence for the [gamma, beta] prediction? Yes, in at least three parts of the brain, (Buffalo EA, Fries P, Landman R, Buschman TJ, Desimone R 2011, PNAS 108, 11262-11267) Does this difference in average oscillation frequencies in the superficial and deep layers reflect layer 4 reset? Superficial recording γ (gamma), Deep recording β (beta) (Berke etal 2008, hippocampus; Buschman and Miller 2009, FEF)
  • image p236fig05.43 The activation of the nucleus basalis of Meynert, and its subsequent release of ACh into deeper layers of neocortex, notably layer 5, is assumed to increase vigilance by reducing afterhyperpolarization (AHP) currents.
    || Vigilance control: mismatch-mediated acetylcholine release (Grossberg and Versace 2008). Acetylcholine (ACh) regulation by nonspecific thalamic nuclei via nucleus basalis of Meynert reduces AHP in layer 5 and causes a mismatch/reset thereby increasing vigilance. HIGH vigilance ~ sharp code, LOW vigilance ~ coarse code
  • image p240fig05.44 When an algebraic exemplar model is realized using only local computations, it starts looking like an ART prototype model.
    || How does the model know which exemplars are in category A? BU-TD learning. How does a NOVEL test item access category A?
  • image p241fig05.45 The 5-4 category structure is one example of how an ART network learns the same kinds of categories as human learners. See the text for details.
    || 5-4 Category structure. A1-A5: closer to the (1 1 1 1) prototype; B1-B4: closer to the (0 0 0 0) prototype
  • image p242fig05.46 Computer simulations of how two variants of Distributed ARTMAP incrementally learn the 5-4 category structure. See the text for details.
    || Distributed ARTMAP with [self-supervised learning, post-training LTM noise]
  • image p245fig05.47 How long-range excitatory connections and short-range disynaptic inhibitory connections realize the bipole grouping law.
    || stimulus -> boundary representation -> layer 2/3
  • image p246fig05.48 Microcircuits of the LAMINART model that I developed with Rajeev Raizada. See the text for details of how they integrate bottom-up adaptive filtering, horizontal bipole grouping, and top-down attentional matching that satisfies the ART Matching Rule.
    ||
  • image p248fig05.49 This circuit of the LAMINART model helps to explain properties of Up and Down states during slow wave sleep, and how disturbances in ACh dynamics can disrupt them.
    ||
  • image p252fig06.01 A surface-shroud resonance begins to form when the surface representations of objects bid for spatial attention. In addition to these topographic excitatory inputs, there is long-range inhibition of the spatial attention cells that determines which inputs will attract spatial attention.
    || Bottom-up spatial attention competition. [more, less] luminous perceptual surfaces -> competition -> spatial attention
  • image p253fig06.02 After bottom-up surface inputs activate spatial attentional cells, they send top-down topographic excitatory signals back to the surface representations. This recurrent shunting on-center off-surround network contrast enhances larger attentional activities while approximately normalizing the total spatial attentional activity. A surface-shroud resonance hereby forms that selects an attentional shroud, enhances the perceived contrast of the attended surface (light blue region), and maintains spatial attention on it.
    || Surface-shroud resonance. perceptual surfaces -> competition -> spatial attention. (Carrasco, Penpeci-Talgar, and Eckstein 2000, Reynolds and Desimone 2003)
  • image p254fig06.03 These interactions of the ARTSCAN Search model enable it to learn to recognize and name invariant object categories. Interactions between spatial attention in the Where cortical stream, via surface-shroud resonances, and object attention in the What cortical stream, that obeys the ART Matching Rule, coordinate these learning, recognition, and naming processes.
    || Retinal image -> On & OFF cell contrast normalization (retina/LGN) -> polarity [, in]sensitive contrast enhancement (V1) -> object [boundary (V2), surface (V2/V4)] -> surface contour (V2); What stream categories: volition control (BG). object boundary (V2) <-> view (ITp) <-> view integrator (ITp) <-> object (ITa) <-> [object-value (ORB), value (Amyg)] <-> name (PFC)
  • image p255fig06.04 The ARTSCAN Search model can also search for a desired target object in a scene, thereby clarifying how our brains solve the Where's Waldo problem.
  • image p257fig06.05 A curve tracing task with monkeys was used by Roelfsema, Lamme, and Spekreijse in 1998 to demonstrate how spatial attention can flow along object boundaries. See the text for details.
    || Attention flows along curves: Roelfsema etal 1998: Macaque V1. fixation (300ms) -> stimulus (600ms RF - target curve, distractor) -> saccade. Crossed-curve condition: attention flows across junction between smoothly connected curve segments, Gestalt good continuation
  • image p258fig06.06 Neurophysiological data and simulation of how attention can flow along a curve. See the text for details.
    || Simulation of Roelfsema etal 1998, data & simulation. Attention directed only to far end of curve. Propagates along active layer 2/3 grouping to distal neurons.
  • image p258fig06.07 A top-down spotlight of attention can also be converted into a shroud. This process begins when the spotlight triggers surface filling-in within a region. Figure 6.8 shows how it is completed.
    || Reconciling spotlights and shrouds: top-down attentional spotlight becomes a shroud. spotlight of attention, surface filling-in
  • image p259fig06.08 The distributed ARTSCAN, or dARTSCAN, model includes spatial attention in both PPC and PFC, and both fast-acting attention, triggered by transient cells in Where cortical areas such as MT, and slower-acting surface-shroud resonances in What cortical areas such as V4 and PPC. See the text for details.
    || dARTSCAN spatial attention hierarchy, Fast (Where stream) Slow (What stream) (Foley, Grossberg, and Mingolla 2012). [transient cells (MT) ->, object surfaces (V4) <->] [object shrouds (PPC) <-> spatial shrouds (PPC/PFC)]
  • image p260fig06.09 Crowding in the periphery of the eye can be avoided by expanding the size and spacing of the letters to match the cortical magnification factor.
    || Crowding: visible objects and confused recognition. Accurate target recognition requires increased flanker spacing at higher eccentricity
  • image p260fig06.10 The cortical magnification factor transforms (A) Cartesian coordinates in the retina into (B) log polar coordinates in visual cortical area V1.
    ||
  • image p261fig06.11 If the sizes and distances between the letters stay the same as they are received by more peripheral parts of the retina, then all three letters may be covered by a single shroud, thereby preventing their individual perception and recognition.
    || Crowding: visible objects and confused recognition. log compression and center-surround processing cause... input same eccentricity, surface, object shroud, crowding threshold. object shrouds merge!
  • image p261fig06.12 Pop-out of the L among Ts.
  • image p265fig06.13 The basal ganglia gate perceptual, cognitive, emotional, and more processes through parallel loops.
    || [motor, oculomotor, dorsolateral, ventral-orbital, anterior cingulate] vs. [Thalamus, pallidum-subs, nigra, Striatum, Cortex]
  • image p267fig06.14 Feedback from object surfaces to object boundaries uses surface contours. This feedback assures complementary consistency and enables figure-ground separation. A corollary discharge of the surface contours can be used to compute salient object feature positions.
    || Perceptual consistency and figure-ground separation.
  • image p268fig06.15 The largest salient feature signal is chosen to determine the next target position of a saccadic eye movement. This target position signal self-inhibits to enable the next most salient position to be foveated. In this way, multiple feature combinations of the object can be foveated and categorized. This process clarifies how the eyes can explore even novel objects before moving to other objects. These eye movements enable invariant categories to be learned. Each newly chosen target position is, moreover, an "attention pointer" whereby attention shifts to the newly foveated object position.
    || How are saccades within an object determined? Figure-ground outputs control eye movements via V3A! Support for prediction (Theeuwes, Mathot, and Kingstone 2010), More support: "attention pointers" (Cavanagh etal 2010), Even more support (Backus etal 2001, Caplovitz and Tse 2006, Galletti and Battaglia 1989, Nakamura and Colby 2000)
  • image p270fig06.16 The same target position signal that can command the next saccade also updates a gain field that predictively maintains the attentional shroud in head-centered coordinates, even before the eye movement is complete. This process keeps the shroud invariant under eye movements, so that it can continue to inhibit reset of an emerging invariant category as it is associated with multiple object views, even while the conscious surface representation shifts with each eye movement in retinotopic coordinates. This updating process is often called predictive remapping.
    || Predictive remapping of eye movements! From V3A to LIP. [spatial attention, object attention, figure-ground separation, eye movement remapping, visual search]. (Beauvillain etal 2005, Carlson-Radvansky 1999, Cavanaugh etal 2001, Fecteau & Munoz 2003, Henderson & Hollingworth 2003, Irwin 1991)
  • image p271fig06.17 Persistent activity in IT cells is just what is needed to enable view-invariant object category learning by ARTSCAN to be generalized to [view, position, size]-invariant category learning by positional ARTSCAN, or pARTSCAN. See the text for details.
    || Persistent activity in IT. Physiological data show that persistent activity exists in IT (Fuster and Jervey 1981, Miyashita and Chang 1988, Tomita etal 1999). Adapted from (Tomita etal 1999 Nature)
  • image p272fig06.18 The pARTSCAN model can learn [view, position, size]-invariant categories by adding view category integrator cells that have the properties of persistent neurons in IT. These integrator cells get reset with the invariant object category, not the view category.
    || pARTSCAN: positionally-invariant object learning. (Cao, Grossberg, Markowitz 2011). IT cells with persistent activities are modeled by view category integrators in ITp. View-specific category cells are RESET as the eyes move within the object. View category integrator cells are NOT RESET when the view-specific category is reset. They are RESET along with invariant object category cells when a spatial attention shift occurs.
  • image p272fig06.19 The various parts of this figure explain why persistent activity is needed in order to learn positionally-invariant object categories, and how this fails when persistent activity is not available. See the text for details.
    ||
  • image p273fig06.20 pARTSCAN can simulate the IT cell recoding that Li and DiCarlo reported in their swapping experiments because the swapping procedure happens without causing a parietal reset burst to occur. Thus the originally activated invariant category remains activated and can get associated with the swapped object features.
    || Simulation of Li and DiCarlo swapping data. data (Li and DiCarlo 2008), model (Cao, Grossberg, Markowitz 2011). normalized response vs. exposure (swaps and/or hours)
  • image p274fig06.21 pARTSCAN can also simulate the trade-off in IT cell responses between position invariance and selectivity that was reported by Zoccolan etal 2007. This trade-off limits the amount of position invariance that can be learned by a cortical area like V1 that is constrained by the cortical magnification factor.
    || Trade-off in IT cell response properties. Inferotemporal cortex cells with greater position invariance respond less selectively to natural objects. invariance-tolerance, selectivity-sparseness. data (Zoccolan etal 2007) model (Grossberg, Markowitz, Cao 2011). position tolerance (PT, degrees) vs sparseness (S)
  • image p274fig06.22 pARTSCAN can simulate how IT cortex processes image morphs, when it learns with high vigilance. See the text for details.
    || Akrami etal simulation: a case of high vigilance. tested on morphs between image pairs
  • image p275fig06.23 Data from (Akrami etal 2009) and our simulation of it. See the text for details.
    || IT responses to image morphs. data vs model
  • image p275fig06.24 Left and right eye stereogram inputs are constructed to generate percepts of objects in depth. These percepts include the features of the objects, not only their relative depths, a property that is not realized in some other models of stereopsis. See the text for details.
    || Stereogram surface percepts: surface lightnesses are segregated in depth (Fang, Grossberg 2009). Contrast with algorithms that just compute disparity matches and let computer code build the surface, eg (Marr, Poggio 1974)
  • image p276fig06.25 In addition to the gain field that predictively maintains a shroud in head-centered coordinates during saccades, there are gain fields that predictively maintain binocular boundaries in head-centered coordinates so that they can maintain binocular fusion during saccades and control the filling-in of surfaces in retinotopic coordinates.
    || Surface-shroud resonance.
  • image p277fig06.26 Gain fields also enable predictive remapping that maintains binocular boundary fusion as the eyes move between objects. See the text for details.
    || Predictive remapping maintains binocular boundary fusion even as eyes move between objects. retinotopic boundary -> invariant boundary (binocular)
  • image p278fig06.27 A surface-shroud resonance through the Where stream enables us to consciously see an object while a feature-category resonance into the What stream enables us to recognize it. Both kinds of resonances can synchronize via visual cortex so that we can know what an object is when we see it.
    || What kinds of resonances support knowing vs seeing? What stream [knowing, feature-prototype resonance], Where stream [seeing, surface-shroud resonance]
  • image p278fig06.28 If the feature-category resonances cannot form, say due to a lesion in IT, then a surface-shroud resonance can still support conscious seeing of an attended object, and looking at or reaching for it, even if the individual doing so knows nothing about the object, as occurs during visual agnosia. The surface-shroud resonance supports both spatial attention and releases commands that embody the intention to move towards the attended object.
    || What kinds of resonances support knowing vs seeing? visual agnosia: reaching without knowing Patient DF (Goodale etal 1991). Attention and intention both parietal cortical functions (Anderson, Essick, Siegel 1985; Gnadt, Andersen 1988; Snyder, Batista, Andersen 1997, 1998)
  • image p283fig07.01 The usual boundary processing stages of [simple, complex, hypercomplex, bipole] cells enable our brains to correct uncontrolled persistence of previously excited cells just by adding habituative transmitter gates, or MTM traces, at appropriate places in the network.
    || Boundary processing with habituative gates. spatial competition with habituative gates, orientational competition: gated dipole, bipole grouping
  • image p284fig07.02 Psychophysical data (top row) and simulation (bottom row) of how persistence decreases with flash illuminance and duration.
    || Persistence data and simulations. (Francis, Grossberg, Mingolla 1994 Vision Research, 34, 1089-1104). Persistence decreases with flash illuminance and duration (Bowen, Pola, Matin 1974; Breitmeyer 1984; Coltheart 1980). Higher luminance or longer duration habituates the gated dipole ON channel more. Causes larger and faster rebound in the OFF channel to shut persisting ON activity off.
  • image p285fig07.03 Persistence decreases with flash illuminance and duration due to the way in which habituative transmitters regulate the strength of the rebound in response to offset of a stimulating input, and how this rebound inhibits previously activated bipole cells.
    || Persistence data and simulations (Francis, Grossberg, Mingolla 1994 Vision Research, 34, 1089-1104). Persistence decreases with flash illuminance and duration. Horizontal input excites a horizontal bipole cell, which supports persistence. Offset of the horizontal input causes a rebound of activity in the vertical pathway, which inhibits the horizontal bipole cell, thereby terminating persistence.
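The habituative-gate account above can be sketched numerically. This is my own minimal gated-dipole illustration, not the model's exact equations: a transmitter gate z obeys dz/dt = A(B - z) - S*z, where S is the channel's total input; the ON channel (arousal + flash) habituates more than the OFF channel (arousal only), so at flash offset the OFF rebound is larger for a stronger flash, which resets persistence sooner. All parameter values are assumptions chosen for clarity.

```python
# Minimal habituative gated-dipole sketch (hypothetical parameters).
def habituate(signal, A=0.1, B=1.0, dt=0.1, steps=2000):
    """Equilibrated transmitter level z under constant input `signal`:
    dz/dt = A*(B - z) - signal*z, integrated by Euler steps."""
    z = B  # start with fully accumulated transmitter
    for _ in range(steps):
        z += dt * (A * (B - z) - signal * z)
    return z

arousal = 0.2                 # tonic input to both channels
for flash in (0.5, 2.0):      # weak vs strong flash intensity
    z_on  = habituate(arousal + flash)  # ON channel habituates more
    z_off = habituate(arousal)          # OFF channel sees only arousal
    # At flash offset both channels receive only arousal, so the OFF
    # rebound equals the difference of the gated arousal signals:
    rebound = arousal * (z_off - z_on)
    print(flash, round(rebound, 4))     # stronger flash -> bigger rebound
```

The stronger flash depletes the ON gate further, so its offset rebound is larger and the persisting ON activity is shut off faster, matching the qualitative data trend in the figure.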
  • image p286fig07.04 Illusory contours persist longer than real contours because real contours have more inducers whose rebound at contour offset can cause faster boundary reset. Illusory contours also take longer to form than real contours, which explains the increasing portion of the curve.
    || Persistence data and simulations (Meyer, Ming 1988; Reynolds 1981). Increasing portion of curve is due to formation time of the illusory contour. Longer persistence is due to fewer bottom-up inducers of an illusory contour that has the same length as a real contour: only illuminance-derived edges generate reset signals. When bottom-up inducers are inhibited by OFF cell rebounds, their offset gradually propagates to the center of the illusory contour.
  • image p286fig07.05 This figure shows the propagation through time of illusory contour offset from the rebounded cells that got direct inputs to the center of the contour.
    || Persistence data and simulations. Illusory contours persist longer than real contours (Meyer, Ming 1988; Reynolds 1981). When bottom-up inducers are inhibited by OFF cell rebounds, their offset gradually propagates to the center of the illusory contour.
  • image p287fig07.06 The relative durations of persistence that occur due to an adaptation stimulus of the same or orthogonal orientation follow from the properties of the habituative gated dipoles that are embedded in the boundary completion system.
    || Persistence data and simulations. Change in persistence depends on whether adaptation stimulus has same or orthogonal orientation as test grating (Meyer, Lawson, Cohen 1975). If adaptation stimulus and test stimulus have the same orientation, they cause cumulative habituation, which causes a stronger reset signal, hence less persistence. When they are orthogonal, the competition on the ON channel is less, hence more persistence.
  • image p287fig07.07 Persistence increases with distance between a target and a masking stimulus due to weakening of the spatial competition in the first competitive stage of hypercomplex cells.
    || Persistence data and simulations. Persistence increases with distance between a target and a masking stimulus (Farrell, Pavel, Sperling 1990). There is less spatial competition from the masker to the target when they are more distant, hence the target is more persistent.
  • image p290fig08.01 Motion in a given direction pools all possible contrast-sensitive sources of information that are moving in that direction.
    ||
  • image p291fig08.02 Complex cells can respond to motion in opposite directions and from features with opposite contrast polarities.
    ||
  • image p292fig08.03 The MacKay and waterfall illusion aftereffects dramatically illustrate the different symmetries that occur in the orientational form stream and the directional motion stream.
    || Form and motion aftereffects. different inhibitory symmetries govern orientation and direction. illusions: [Form- MacKay 90°, Motion- waterfall 180°]. stimulus, aftereffect percept
  • image p293fig08.04 Most local motion signals on a moving object (red arrows) may not point in the direction of the object's motion.
  • image p295fig08.05 The perceived direction of an object is derived either from a small subset of feature tracking signals, or by voting among ambiguous signals when feature tracking signals are not available.
    || Aperture problem. Barberpole illusion (Wallach). How do sparse feature tracking signals capture so many ambiguous motion signals to determine the perceived motion direction?
  • image p296fig08.06 In the simplest example of apparent motion, two dots turning on and off out of phase in time generate a compelling percept of continuous motion between them.
    || Simplest long-range motion paradigm. ISI- interstimulus interval, SOA- stimulus onset synchrony
  • image p296fig08.07 When two flashes turn on and off out of phase with the correct range of interstimulus intervals, and not too far from one another, then either beta motion or phi motion is perceived.
    || Beta and Phi motion percepts. Beta motion: percept of continuous motion of a well-defined object across empty intervening space. Phi motion: sense of "pure" motion without a concurrent percept of a moving object. (Exner 1875) http://www.yorku.ca/eye/balls.htm
  • image p297fig08.08 When a second flash is more intense than the first flash, then apparent motion may occur from the second to the first flash.
    || Delta motion: motions from the second to the first flash. Data: (Kolers 1972; Korte 1915). Simulation: (Grossberg, Rudd 1992). This occurs when the luminance or contrast of the second flash is large compared to that of the first flash. Sustained and transient cells obey shunting dynamics whose averaging rates speed up with output intensity. The first flash to wane is the one that will be the source of the G-wave.
  • image p297fig08.09 Simulation of motion in opposite directions that is perceived when two later flashes occur on either side of the first flash.
    || Split motion. Data: (H.R. Silva 1926), Simulation: (Grossberg, Rudd 1992)
  • image p298fig08.10 Simulation of the motion speed-up that is perceived when flash duration decreases.
    || "The less you see it, the faster it moves". Data: (Giaschi, Anstis 1989), Simulation: (Grossberg, Rudd 1992). ISI = 0, flash duration decreases; SOA = constant, flash duration decreases
  • image p298fig08.11 This formotion percept is a double illusion due to boundary completion in the form stream followed by long-range apparent motion using the completed boundaries in the motion stream.
    || Form-motion interactions. Apparent motion of illusory contours (Ramachandran 1985). Double illusion! Illusory contour is created in form stream V1-V2. Apparent motion of illusory contours occurs in motion stream due to a V2-MT interaction.
  • image p300fig08.12 A single flash activates a Gaussian receptive field across space whose maximum is chosen by a winner-take-all recurrent on-center off-surround network.
    || Gaussian receptive fields are sufficient! (Grossberg, Rudd 1992). Single flash. Suppose that a single flash causes a narrow peak of activity at the position where it occurs. It generates output signals through a Gaussian filter that produces a Gaussian activity profile at the next processing stage. A recurrent on-center off-surround network chooses the maximum activity and suppresses smaller activities. Winner-take-all
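The two stages just described can be sketched in a few lines. This is my own toy version, with the recurrent winner-take-all choice idealized as an argmax: a single flash is blurred by a Gaussian filter, and the choice stage picks the peak position, which sits at the flash.

```python
# Toy sketch: Gaussian filtering of a single flash, followed by an
# idealized winner-take-all choice (argmax stands in for the recurrent
# on-center off-surround network).
import math

def gaussian_filter(activity, K=2.0):
    """Blur a discrete activity profile with a Gaussian of width K."""
    n = len(activity)
    return [sum(activity[i] * math.exp(-((j - i) ** 2) / (2 * K ** 2))
                for i in range(n))
            for j in range(n)]

flash = [0.0] * 21
flash[5] = 1.0                      # single flash at position 5
blurred = gaussian_filter(flash)
winner = max(range(len(blurred)), key=blurred.__getitem__)
print(winner)                       # the WTA choice sits at the flash: 5
```

Because the Gaussian's maximum stays over the flash, a static flash yields a static winner, which is why (as the next figure notes) nothing is perceived to move.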
  • image p300fig08.13 As a flash waxes and wanes through time, so too do the activities of the cells in its Gaussian receptive field. Because the maximum of each Gaussian occurs at the same position, nothing is perceived to move.
    || Temporal profile of a single flash. Suppose that a single flash quickly turns on to maximum activity, stays there for a short time, and then shuts off. It causes an increase in activity, followed by an exponential decay of activity. The corresponding Gaussian profile waxes and wanes through time. Since the peak position of the Gaussian does not change through time, nothing moves.
  • image p300fig08.14 Visual inertia depicts how the effects of a flash decay after the flash shuts off.
    || Inertia (%) vs ISI (msec)
  • image p301fig08.15 If two flashes occur in succession, then the cell activation that is caused by the first one can be waning while the activation due to the second one is waxing.
    || Temporal profile of two flashes. If two flashes occur in succession, the waning of the activity due to the first flash may overlap with the waxing of the activity due to the second flash.
  • image p301fig08.16 The sum of the waning Gaussian activity profile due to the first flash and the waxing Gaussian activity profile due to the second flash has a maximum that moves like a travelling wave from the first to the second flash.
    || Travelling wave (G-wave): long-range motion. If the Gaussian activity profiles of two flashes overlap sufficiently in space and time, then the sum of Gaussians produced by the waning of the first flash added to the Gaussian produced by the waxing of the second flash, can produce a single-peaked travelling wave from the position of the first flash to that of the second flash. The wave is then processed through a WTA choice network (Winner Take All). The resulting continuous motion percept is both long-range and sharp.
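The G-wave is easy to demonstrate numerically. In this sketch of mine I assume simple exponential traces for the two flashes (waning for the first, waxing for the second), which matches the qualitative account in the text; the argmax of the summed Gaussian profiles then slides continuously from flash 1 to flash 2.

```python
# Numerical sketch of the G-wave with assumed exponential flash traces.
import math

K, L, A = 2.0, 3.0, 1.0            # Gaussian width, flash separation, decay rate
positions = [i / 100 for i in range(-200, 600)]

def g(w, t):
    x0 = math.exp(-A * t)          # waning trace of the flash at w = 0
    xL = 1.0 - math.exp(-A * t)    # waxing trace of the flash at w = L
    return (x0 * math.exp(-w**2 / (2 * K**2))
            + xL * math.exp(-(w - L)**2 / (2 * K**2)))

# Track the peak (the WTA winner) as time advances:
peaks = [max(positions, key=lambda w, t=t: g(w, t / 10)) for t in range(1, 60)]
print(peaks[0], peaks[-1])          # starts near 0, ends near L
assert all(b >= a for a, b in zip(peaks, peaks[1:]))  # monotone travelling wave
```

Note that L = 3 and 2K = 4 here, so the flashes are close enough for their Gaussians to merge into a single-peaked profile; the continuous peak motion is exactly the long-range, sharp motion percept the caption describes.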
  • image p302fig08.17 An important constraint on whether long-range apparent motion occurs is whether the Gaussian kernel is broad enough to span the distance between successive flashes.
    || Motion speed-up with increasing distance: For a fixed ISI, how does perceived velocity increase with distance between the flashes? Gaussian filter : Gp = exp{ -(j-i)^2 / (2*K^2) }. The largest separation, L_crit, for which sufficient spatial overlap between two Gaussians centered at locations i and j will exist to support a travelling wave of summed peak activity is : L_crit = 2*K
  • image p302fig08.18 This theorem shows how far away (L), given a fixed Gaussian width, two flashes can be to generate a wave of apparent motion between them.
    || G-wave properties (Grossberg 1977). Let flashes occur at positions i=0 and i=L. Suppose that d[dt: x0] = -A*x0 + J0; d[dt: xL] = -A*xL + JL; Define G(w,t) ...; Theorem 1 max_w G(w,t) moves continuously through time from w=0 to w=L if and only if L <= 2*K.
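Theorem 1's L <= 2K condition can be checked numerically. Using the same assumed exponential traces as in my sketch above, the code below tracks the peak of G(w,t) over time and reports the largest single-step jump: with L <= 2K the peak slides in small steps, while with L > 2K it stays near flash 1 and then jumps discontinuously to flash 2, so no continuous motion is signalled.

```python
# Numerical check of Theorem 1 (assumed exponential flash traces).
import math

def peak_track(L, K, A=1.0):
    """Largest single-step displacement of the peak of G(w,t) over time."""
    ws = [i * L / 400 for i in range(401)]           # grid from 0 to L
    track = []
    for step in range(1, 160):
        t = step / 20
        x0, xL = math.exp(-A * t), 1 - math.exp(-A * t)
        G = lambda w: (x0 * math.exp(-w**2 / (2 * K**2))
                       + xL * math.exp(-(w - L)**2 / (2 * K**2)))
        track.append(max(ws, key=G))
    return max(b - a for a, b in zip(track, track[1:]))

print(peak_track(L=3.0, K=2.0))   # L <= 2K: only small steps (continuous wave)
print(peak_track(L=8.0, K=2.0))   # L > 2K: one large discontinuous jump
```

This is just the unimodality condition for a sum of two Gaussians of width K: their summed profile has a single travelling peak precisely when the centers are no more than 2K apart.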
  • image p303fig08.19 The dashed red line divides combinations of flash distance L and Gaussian width K into two regions of no apparent motion (above the line) and apparent motion (below the line).
    || No motion vs motion at multiple scales.
  • image p303fig08.20 The G-wave speeds up with the distance between flashes at a fixed delay, and has a consistent motion across multiple spatial scales.
    || G-wave properties (Grossberg 1977). Theorem 2 (Equal half-time property) The time at which the motion signal reaches position w=L/2. Apparent motion speed-up with distance: this half-time is independent of the distance L between the two flashes. Consistent motion across scales: half-time is independent of the scale size K. Method of proof: elementary algebra and calculus (Grossberg, Rudd 1989 appendix)
  • image p304fig08.21 A computer simulation of the equal half-time property whereby the apparent motions within different scales that respond to the same flashes all reach the half-way point in the motion trajectory at the same time.
    || Equal half-time property: how multiple scales cooperate to generate motion percept. Travelling waves from Gaussian filters of different sizes bridge the same distance in comparable time. The time needed to bridge half the distance between flashes is the same.
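The equal half-time property can also be checked directly. With the same assumed exponential traces, the peak crosses the midpoint w = L/2 when the two flash traces are equal, i.e. at t = ln 2 / A, regardless of L or K (for L <= 2K); this is my own numerical confirmation, not the book's simulation.

```python
# Numerical check of the equal half-time property (Theorem 2).
import math

def half_time(L, K, A=1.0):
    """Time at which the G-wave peak first reaches the midpoint w = L/2."""
    ws = [i * L / 200 for i in range(201)]
    for step in range(1, 2000):
        t = step / 1000
        x0, xL = math.exp(-A * t), 1 - math.exp(-A * t)
        peak = max(ws, key=lambda w: (x0 * math.exp(-w**2 / (2 * K**2))
                                      + xL * math.exp(-(w - L)**2 / (2 * K**2))))
        if peak >= L / 2:
            return t
    return None

print(half_time(2.0, 2.0), half_time(3.0, 2.0), half_time(5.0, 3.0))
# all approximately ln 2 / A = 0.693, independent of L and K
```

Scales of different widths K responding to the same flashes therefore all reach the halfway point together, which is how multiple scales cooperate to generate one coherent motion percept.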
  • image p304fig08.22 Data (top image) and simulation (bottom image) of Korte's laws.
  • image p305fig08.23 Despite its simplicity, the Ternus display can induce one of four possible percepts, depending on the ISI.
    || Ternus motion. ISI [small- stationary, intermediate- element, larger- group] motion http://en.wikipedia.org/wiki/Ternus_illusion
  • image p305fig08.24 When each stimulus has an opposite contrast relative to the background, element motion is eliminated and replaced by group motion at intermediate values of the ISI.
    || Reverse-contrast Ternus motion. ISI [small- stationary, intermediate- group (not element!), larger- group] motion.
  • image p306fig08.25 The Motion BCS model can explain and simulate all the long-range apparent motion percepts that this chapter describes.
    || Motion BCS model (Grossberg, Rudd 1989, 1992) Level 1: discount illuminant; Level 2: short-range filter, pool sustained simple cell inputs with like-oriented receptive fields aligned in a given direction. Sensitive to direction-of-contrast; Level 3: Transient cells with unoriented receptive fields. Sensitive to direction-of-change
  • image p306fig08.26 The 3D FORMOTION model combines mechanisms for determining the relative depth of a visual form with mechanisms for both short-range and long-range motion filtering and grouping. A formotion interaction from V2 to MT is predicted to enable the motion stream to track objects moving in depth.
    || 3D Formotion model (Chey etal 1997; Grossberg etal 2001; Berzhanskaya etal 2007). Form [LGN contours -> simple cells orientation selectivity -> complex cells (contrast pooling, orientation selectivity, V1) -> hypercomplex cells (end-stopping, spatial sharpening) <-> bipole cells (grouping, cross-orientation competition) -> depth-separated boundaries (V2)], Motion: [LGN contours -> transient cells (directional stability, V1) -> short-range motion filter -> spatial competition -> long-range motion filter and boundary selection in depth (MT) <-> directional grouping, attentional priming (MST)]
  • image p307fig08.27 The distribution of transients through time at onsets and offsets of Ternus display flashes helps to determine whether element motion or group motion will be perceived.
    || Ternus motion. Element motion: zero or weak transients at positions 2 and 3; Group motion: strong transients at positions 2 and 3. Conditions that favor visual persistence and thus perceived stationarity of element (2,3) favor element motion (Braddick, Adlard 1978; Breitmeyer, Ritter 1986; Pantle, Petersik 1980)
  • image p308fig08.28 The Gaussian distributions of activity that arise from the three simultaneous flashes in a Ternus display add to generate a maximum value at their midpoint. The motion of this group gives rise to group motion.
    || Ternus group motion simulation. If L < 2*K, Gaussian filter of three flashes forms one global maximum.
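The group-motion account follows the same Gaussian-summation logic, now with three simultaneous flashes. In this sketch of mine (parameter values are assumptions), the elements are spaced L < 2K apart, so the Gaussian-filtered profile has one global maximum at the display's midpoint; it is that single group peak whose displacement generates group motion.

```python
# Three Ternus elements filtered by one Gaussian scale: when L < 2K the
# summed profile peaks once, at the midpoint of the group.
import math

K, L = 2.5, 3.0                       # filter width, inter-element spacing
flashes = [0.0, L, 2 * L]             # three simultaneous Ternus elements
ws = [i / 100 for i in range(-300, 901)]
G = [sum(math.exp(-(w - f)**2 / (2 * K**2)) for f in flashes) for w in ws]
peak = ws[max(range(len(G)), key=G.__getitem__)]
print(peak)                           # single global maximum at the midpoint, w = L
```

If the whole triplet is displaced on the next frame, this one group peak travels as a unit, whereas conditions favoring persistence of the shared elements leave only the end element's peak free to move, yielding element motion.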
  • image p310fig08.29 When the individual component motions in (A) and (B) combine into a plaid motion (C), both their perceived direction and speed changes.
    ||
  • image p311fig08.30 The data of (Castet etal 1993) in the left image was simulated in the right image by the 3D FORMOTION model that I developed with my PhD student Jonathan Chey. These data provide insight into how feature tracking signals propagate from the ends of a line to its interior, where they capture consistent motion directional signals and inhibit inconsistent ones.
    || Solving the aperture problem. A key design problem: How do amplified feature tracking signals propagate within depth to select the correct motion directions at ambiguous positions? This propagation from feature tracking signals to the line interior determines perceived speed in Castet etal data, which is why speed depends on line tilt and length. Data: (Castet etal 1993), Simulation: (Chey etal 1997)
  • image p311fig08.31 Processing stages of the Motion BCS convert locally ambiguous motion signals from transient cells into a globally coherent percept of object motion, thereby solving the aperture problem.
    || Why are so many motion processing stages needed? change sensitive receptors -> directional transient cells -> directional short-range filter -> spatial and directional competition -> directional long-range filter (MT) <-> Directional grouping network
  • image p312fig08.32 Schematic of motion filtering circuits.
    || Level 1: Change sensitive units -> Level 2: transient cells -> Level 3: short-range spatial filters -> Level 4: intra-scale competition -> Level 5: inter-scale competition
  • image p312fig08.33 Processing motion signals by a population of speed-tuned neurons.
    ||
  • image p314fig08.34 The VISTARS model for visually-based spatial navigation. It uses the Motion BCS as a front end and feeds its output signals into two computationally complementary cortical processing streams for computing optic flow and target tracking information.
    || VISTARS navigation model (Browning, Grossberg, Mingolia 2009). Use FORMOTION model as front end for higher level navigational circuits: input natural image sequences -> estimate heading (MT+)-MSTd -> additive processing -> estimate object position (MT-)-MSTv direction and speed subtractive processing -> Complementary Computing. [optic flow navigation, object tracking]
  • image p315fig08.35 The output signals from the directional grouping network obey the ART Matching Rule. They thereby select consistent motion directional signals while suppressing inconsistent ones, and do not distort what the spared cells code. The aperture problem is hereby solved by the same mechanism that dynamically stabilizes the learning of directional grouping cells.
    || How to select correct direction and preserve speed estimates? Prediction: Feedback from MSTv to MT- obeys ART Matching Rule; Top-down, modulatory on-center, off-surround network (Grossberg 1976, 1980; Carpenter, Grossberg 1987, 1991); Explains how directional grouping network can stably develop and how top-down directional attention can work. (Cavanagh 1992; Goner etal 1986; Sekuler, Ball 1977; Stelmach etal 1994). Directional grouping network (MSTv) <-> Directional long-range filter (MT). Modulatory on-center selects chosen direction and preserves speed. Off-surround inhibits incompatible directions.
  • image p316fig08.36 How the directional grouping network, notably properties of the ART Matching Rule, enables a small set of amplified feature tracking signals at the ends of a line to select consistent directions in the line interior, while suppressing inconsistent directions.
    || Motion capture by directional grouping feedback. Directional grouping network (MSTv) <-> Directional long-range filter (MT). It takes longer to capture ambiguous motion signals in the line interior as the length of the line increases cf (Castet etal 1993)
  • image p317fig08.37 Processing stages that transform the transient cell inputs in response to a tilted moving line into a global percept of the object
  • image p319fig08.38 The neurophysiological data from MT (left image) confirms the prediction embodied in the simulation of MT (right image) concerning the fact that it takes a long time for MT to compute an object
  • image p320fig08.39 Simulation of the barberpole illusion direction field at two times. Note that the initial multiple directions due to the feature tracking signals at the contiguous vertical and horizontal sides of the barberpole (upper image) get supplanted by the horizontal direction of the two horizontal sides (lower image).
    || Barberpole illusion (one line) simulation
  • image p321fig08.40 Visible occluders capture the boundaries that they share with moving edges. Invisible occluders do not. Consequently, the two types of motions are influenced by different combinations of feature tracking signals.
    || Motion grouping across occluders (J. Lorenceau, D. Alais 2001). Rotating contours observed through apertures. Determine direction of a circular motion. [, in]visible occluders http://persci.mit.edu/demos/square/square.html
  • image p322fig08.41 A percept of motion transparency can be achieved by using motion grouping feedback that embodies the "asymmetry between near and far" along with the usual opponent competition between opposite motion directions.
    || Motion transparency. near: big scale; far: small scale MSTv, "Asymmetry between near and far" Inhibition from near (large scales) to far (small scales) at each position
  • image p323fig08.42 The chopsticks illusion not only depends upon how feature tracking signals are altered by visible and invisible occluders, but also upon how the form system disambiguates the ambiguous region where the two chopsticks intersect and uses figure-ground mechanisms to separate them in depth.
    || Chopsticks: motion separation in depth (Anstis 1990). [, in]visible occluders [display, percept]
  • image p324fig08.43 Attention can flow along the boundaries of one chopstick and enable it to win the orientation competition where the two chopsticks cross, thereby enabling bipole grouping and figure-ground mechanisms to separate them in depth within the form cortical stream.
    || The ambiguous X-junction. motion system. Attention propagates along chopstick and enhances cell activations in one branch of a chopstick. MT-MST directional motion grouping helps to bridge the ambiguous position.
  • image p325fig08.44 Attentional feedback from MST-to-MT-to-V2 can strengthen one branch of a chopstick (left image). Then bipole cell activations that are strengthened by this feedback can complete that chopstick
  • image p325fig08.45 The feedback loop between MT/MST-to-V1-to-V2-to-MT/MST enables a percept of two chopsticks sliding one in front of the other while moving in opposite directions.
    || Closing formotion feedback loop. [formotion interaction, motion grouping] V1 -> V2 -> (MT <-> MST) -> V1
  • image p326fig08.46 How do we determine the relative motion direction of a part of a scene when it moves with a larger part that determines an object reference frame?
    || How do we perceive relative motion of object parts?
  • image p327fig08.47 Two classical examples of part motion in a moving reference frame illustrate the general situation where complex objects move while their multiple parts may move in different directions relative to the direction of the reference frame.
    || Two kinds of percepts and variations (Johansson 1950). Symmetrically moving inducers: each dot moves along a straight path, each part contributes equally to common motion; Duncker wheel (Duncker 1929): one dot moves on a cycloid, the other dot (the "center") moves straight, unequal contribution from parts; If the dot is presented alone: seen as cycloid; if with center: seen as if it were on the rim of a wheel.
  • image p328fig08.48 How vector subtraction from the reference frame motion direction computes the part directions.
    || How vector decomposition can explain them. Common motion subtracted from retinal motion gives part motion: [retinal, common, part] motion
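The vector subtraction that this caption describes can be sketched numerically. A minimal illustration (all values are my own, chosen for the Duncker wheel example; nothing here comes from the model's actual equations):

```python
import numpy as np

# Duncker wheel sketch: a rim dot's retinal motion is a cycloid velocity,
# which is the hub's common (frame) translation plus a rotational component.
# Subtracting the common motion recovers the part motion relative to the frame.
t = 0.5                                    # illustrative time point (radians)
common = np.array([1.0, 0.0])              # hub: uniform rightward translation
rotation = np.array([-np.sin(t), np.cos(t)])   # rim dot's rotational component
retinal = common + rotation                # what the retina actually measures
part = retinal - common                    # vector decomposition: part motion
# 'part' equals the pure rotation seen relative to the moving reference frame
```

With the hub present, the rim dot is thus perceived as rotating on a wheel; alone, its retinal (cycloid) motion is all there is to see.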
  • image p328fig08.49 A directional peak shift in a directional hypercolumn determines the part directions relative to a moving reference frame.
    || What is the mechanism of vector decomposition? (Grossberg, Leveille, Versace 2011). Prediction: directional peak shift! ...specifically, a peak shift due to Gaussian lateral inhibition. [retinal, part, common, relative] motion. shunting dynamics, self-normalization, contrast gain control
  • image p329fig08.50 The common motion direction of the two dots builds upon illusory contours that connect the dots as they move through time. The common motion direction signal can flow along these boundaries.
    || How is common motion direction computed? retinal motion. Bipole grouping in the form stream creates illusory contours between the dots. V2-MT formotion interaction injects the completed boundaries into the motion stream where they capture consistent motion signals. Motion of illusory contours is computed in the motion stream: cf. Ramachandran
  • image p329fig08.51 Large and small scale boundaries differentially form illusory contours between the dots and boundaries that surround each of them respectively. These boundaries capture the motion signals that they will support via V2-to-MT formotion interaction. The MST-to-MT directional peak shift has not yet occurred.
    || Large scale: near. Can bridge gap between dots to form illusory contours. Spatial competition inhibits inner dot boundaries.; Small scale: far. Forms boundaries around dots.
  • image p330fig08.52 Direction fields of the object frame (left column) and of the two dot "parts" (right column) show the correct motion directions after the peak shift top-down expectation acts.
    || Simulation of motion vector decomposition. [Larger scale (nearer depth), Small scale (farther depth)] vs [Down, Up]
  • image p330fig08.53 Simulation of the various directional signals of the left dot through time. Note the amplification of the downward directional signal due to the combined action of the short-range and long-range directional signals.
    ||
  • image p331fig08.54 The simulated part directions of the rotating dot through time after the translational motion of the frame does its work via the top-down peak shift mechanism.
    || Cycloid. Motion directions of a single dot moving slowly along a cycloid curve through time.
  • image p331fig08.55 The rightward motion of the dot that determines the frame propagates along the illusory contour between the dots and thereby dominates the motion directions along the rim as well, thereby setting the stage for the peak shift mechanism.
    || Duncker Wheel: large scale. [cycloid, center] velocity -> rightward common velocity. Stable rightward motion at the center captures motion at the rim.
  • image p332fig08.56 Simulation of the Duncker Wheel motion through time. See the text for details.
    || Duncker Wheel: small scale. Temporal procession of activity in eight directions. Wheel motion as seen when directions are collapsed.
  • image p332fig08.57 The MODE model uses the Motion BCS as its front end, followed by a saccadic target selection circuit in the model LIP region that converts motion directions into movement directions. These movement choices are also under basal ganglia (BG) control. More will be explained about the BG in Chapters 13 and 15.
    || MODE (MOtion DEcision) model (Grossberg, Pilly 2008, Vision Research). Change sensitive receptors -> directional transient cells -> directional short-range filter -> spatial and directional competition -> directional long-range filter (MT) <-> directional grouping network (MSTv) -> saccadic target selection <-> gating mechanism (BG). Representation of problem that solves the aperture problem (change sensitive receptors (CSR) -> directional grouping network (DGN, MSTv)). Gated movement choice (saccadic target selection & gating mechanism)
  • image p333fig08.58 Neurophysiological data (left image) and simulation (right image) of LIP data during correct trials on the RT task. See the text for details.
    || LIP responses during RT task correct trials (Roitman, Shadlen 2002). More coherence in favored direction causes faster cell activation. More coherence in opposite direction causes faster cell inhibition. Coherence stops playing a role in the final stages of LIP firing.
  • image p334fig08.59 Neurophysiological data (left column) and simulations (right column) of LIP responses for the FD task during both [correct, error] trials. See the text for details.
    || LIP responses for the FD task during both [correct, error] trials (Shadlen, Newsome 2001). LIP encodes the perceptual decision regardless of the true direction of the dots. Predictiveness of LIP responses on error trials decreases with increasing coherence.
  • image p334fig08.60 Behavioral data (left image) and simulation (right image) about accuracy in both the RT and FD tasks. See text for details
    || Behavioral data: % correct vs % coherence (Mazurek etal 2003; Roitman, Shadlen 2002). More coherence in the motion causes more accurate decisions. RT task accuracy at weaker coherence levels is slightly better than FD task accuracy.
  • image p335fig08.61 Behavioral data (left image) and simulation (right image) about speed in correct and error trials of the RT task. See text for details.
    || Behavioral data: speed, correct and error trials (RT task) (Roitman, Shadlen 2002). More coherence in the motion causes faster reaction time.
  • image p335fig08.62 More remarkable simulation fits (right column) to LIP neurophysiology data (left column) about where and when to move the eyes.
    || LIP encodes not only where, but also when, to move the eyes. ...No Bayes (Roitman, Shadlen 2002). Firing rate (sp/s) vs time (ms). Slope of firing rate (sp/s^2) vs % correct.
  • image p338fig09.01 The brain regions that help to use visual information for navigating in the world and tracking objects are highlighted in yellow.
    || How does a moving observer use optic flow to navigate while tracking a moving object? [What ventral, Where dorsal] retina -> many locations -> PFC
  • image p338fig09.02 Heading, or the direction of self-motion (green dot), can be derived from the optic flow (red arrows) as an object, in this case an airplane landing, moves forward.
    || Heading and optic flow (Gibson 1950). Optic flow: scene motion generates a velocity field. Heading: direction of travel- self-motion direction. Heading from optic flow, focus of expansion (Gibson 1950). Humans determine heading accurately to within 1-2 degrees.
  • image p339fig09.03 When an observer moves forward, an expanding optic flow is caused. Eye rotations cause a translating flow. When these flows are combined, a spiral flow is caused. How do our brains compensate for eye rotations to compute the heading of the expanding optic flow?
    || Optic flow during navigation (adapted from Warren, Hannon 1990) [observer, retinal flow]: [linear movement, expansion], [eye rotation, translation], [combined motion, spiral]
  • image p339fig09.04 This figure emphasizes that the sum of the expansion and translation optic flows is a spiral optic flow. It thereby raises the question: How can the translation flow be subtracted from the spiral flow to recover the expansion flow?
    || Eye rotations add a uniform translation to a flow field. Resulting retinal patterns are spirals. Expansion + translation = spiral
  • image p340fig09.05 An outflow movement command, also called efference copy or corollary discharge, is the source of the signals whereby the commanded eye movement position is subtracted from spiral flow to recover expansion flow and, with it, heading.
    || Subtracting efference copy. Many experiments suggest that the brain internally subtracts the translational component due to eye movements. Efference copy subtracts the translational component using pathways that branch from outflow movement commands to the eye muscles.
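The efference-copy subtraction in figures 9.04-9.05 is simple enough to sketch with NumPy vector fields. A minimal illustration, assuming an idealized radial expansion flow and a uniform rotational translation (grid size, heading, and translation values are all illustrative):

```python
import numpy as np

# Spiral retinal flow = expansion (self-motion) + uniform translation
# (eye rotation). An efference copy of the outflow eye-movement command
# supplies the translation, which is subtracted to recover the expansion
# flow, whose focus of expansion gives heading.
xs, ys = np.meshgrid(np.linspace(-1, 1, 5), np.linspace(-1, 1, 5))
focus = np.array([0.2, 0.0])                       # focus of expansion (heading)
expansion = np.stack([xs - focus[0], ys - focus[1]])   # radial flow field
eye_translation = np.array([0.3, -0.1])            # from the outflow command
spiral = expansion + eye_translation[:, None, None]    # what the retina sees
recovered = spiral - eye_translation[:, None, None]    # efference copy subtraction
```

After subtraction, `recovered` vanishes exactly at the focus of expansion, which is how heading could be read off the cleaned-up flow.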
  • image p340fig09.06 Corollary discharges are computed using a branch of the outflow movement commands that move their target muscles.
    ||
  • image p340fig09.07 Log polar remapping from the retina to cortical area V1 and beyond converts expansion, translation, and spiral flows on the retina into parallel flows, with different orientations, on the cortical map.
    || Log polar remapping of optic flow. retina -> cortex. Any combination of expansion and circular motion centered on the fovea maps to cortex as a single direction. Retinal Cartesian coordinates (x,y) map to cortical polar coordinates (r,theta). This makes it easy to compute directional receptive fields in the cortex!
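The key property of the log polar map, that an expansion step on the retina becomes a uniform parallel shift on the cortical map, follows from one line of algebra and can be checked numerically. A minimal sketch (the point and scale factor are arbitrary illustrative values):

```python
import numpy as np

# Log polar remapping: retinal (x, y) -> cortical (log r, theta).
# Scaling the retinal point by k (one step of an expansion flow)
# leaves theta unchanged and shifts log r by log k, i.e. expansion
# becomes a parallel flow along the log-r axis of the cortical map.
def log_polar(x, y):
    return np.log(np.hypot(x, y)), np.arctan2(y, x)

x, y = 0.5, 0.5
u1, v1 = log_polar(x, y)
k = 1.1                            # small expansion step
u2, v2 = log_polar(k * x, k * y)
# v2 == v1 (same cortical direction), u2 - u1 == log(k) (uniform shift)
```

The same algebra shows why pure rotation maps to a parallel flow along the theta axis, and spirals to a single oblique direction, as the caption notes.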
  • image p341fig09.08 How the various optic flows on the retina are mapped through V1, MT, and MSTd to then compute heading in parietal cortex was modeled by (Grossberg, Mingolla, Pack 1999), using the crucial transformation via V1 log polar mapping into parallel cortical flow fields.
    || MSTd model (Grossberg, Mingolla, Pack 1999). Retinal motion -> V1 log polar mapping -> Each MT Gaussian RF sums motion in preferred direction -> Each MSTd cell sums MT cell inputs with same log polar direction -> Efference copy subtracts rotational flow from MSTd cells.
  • image p341fig09.09 Responses of MSTd cells that are used to compute heading. See the text for details.
    || Cortical area MSTd (adapted from Graziano, Andersen, Snowden 1994). MSTd cells are sensitive to spiral motion as combinations of rotation and expansion.
  • image p342fig09.10 Model simulations of how the peak of MSTd cell activation varies with changes of heading.
    || Heading in log polar space: Retina -> log polar -> MSTd cell. Log polar motion direction correlates with heading eccentricity.
  • image p342fig09.11 Psychophysical data (left panel) and computer simulation (right column) of the importance of efference copy in real movements. See the text for details.
    || Heading: move to wall and fixate stationary object (adapted from Warren, Hannon 1990). Inaccurate for simulated eye rotation, accurate for real eye rotation, need confirmation by efference copy!
  • image p343fig09.12 Transforming two retinal views of the Simpsons into log polar coordinates dramatizes the problem that our brains need to solve in order to separate, and recognize, overlapping figures.
    || View 1 cortical magnification. View 2 How do we know if we are still fixating on the same object?!
  • image p343fig09.13 When one scans the three different types of pears in the left image, as illustrated by the jagged blue curve with red movement end positions, and transforms the resulting retinal images via the cortical magnification factor, or log polar mapping, the result is the series of images in the right column. How do our brains figure out from such confusing data which views belong to which pear?
    || View-invariant object learning and recognition Three pears: Anjou, Bartlett, Comice. Which is the Bartlett pear? During unsupervised scanning and learning about the world, no one tells the brain what views belong to which objects while it learns view-invariant object categories. Cortical magnification in V1.
  • image p344fig09.14 (top row, left column) By fitting MT tuning curves with Gaussian receptive fields, a tuning width of 38° is estimated, and leads to the observed standard spiral tuning of 61° in MSTd. (bottom row, left column) The spiral tuning estimate in Figure 9.16 maximizes the position invariance of MSTd receptive fields. (top row, right column) Heading sensitivity is not impaired by these parameter choices.
    || [Spiral tuning (deg), position invariance (deg^(-1)), heading sensitivity] versus log polar direction tuning σ (deg)
  • image p345fig09.15 Double opponent directional receptive fields in MT are capable of detecting the motion of objects relative to each other and their backgrounds.
    || Motion opponency in MT (Born, Tootell 1992). Motion opponent (Grossberg etal), Differential motion (Royden etal), Subtractive motion cells (Neumann etal). ON center directionally selective: [excit, inhibit]ed by motion in [one, opponent] direction. OFF surround directionally selective: [excit, inhibit]ed by motion in [opponent, center] direction.
  • image p346fig09.16 A macrocircuit of some of the main brain regions that are used to move the eyes. Black boxes denote areas belonging to the saccadic eye movement system (SAC), white boxes the smooth pursuit eye system (SPEM), and gray boxes, both systems. The abbreviations for the different brain regions are: LIP - Lateral Intra-Parietal area; FPA - Frontal Pursuit Area; MST - Middle Superior Temporal area; MT - Middle Temporal area; FEF - Frontal Eye Fields; NRPT - Nucleus Reticularis Tegmenti Pontis; DLPN - Dorso-Lateral Pontine Nuclei; SC - Superior Colliculus; CBM - CereBelluM; MVN/rLVN - Medial and Rostro-Lateral Vestibular Nuclei; PPRF - a Peri-Pontine Reticular Formation; TN - Tonic Neurons
    ||
  • image p347fig09.17 The leftward eye movement control channel in the model that I developed with Christopher Pack. See the text for details.
    || retinal image -> MT -> MST[v,d] -> pursuit
  • image p347fig09.18 These circuits between MSTv and MSTd enable predictive target tracking to be achieved by the pursuit system, notably when the eyes are successfully foveating a moving target. Solid arrows depict excitatory connections, dashed arrows depict inhibitory connections.
    ||
  • image p348fig09.19 How a constant pursuit speed that is commanded by MSTv cells starts by using target speed on the retina and ends by using background speed on the retina in the reverse direction during successful predictive pursuit.
    || target speed on retina, background speed on retina, pursuit speed command by MSTv cells
  • image p349fig09.20 Using virtual reality displays (left image), (Fajen, Warren 2003) collected data (right two images) about how observers avoid obstacles (open circular disks) as a function of their distance and angular position as they navigate towards a fixed goal (x). These data illustrate how goals act as attractors while obstacles act as repellers.
    || Steering from optic flow (Fajen, Warren 2003). goals are attractors, obstacles are repellers. Damped spring model explains human steering data.
  • image p349fig09.21 How attractor-repeller dynamics with Gaussians change the net steering gradient as the goal is approached.
    || Steering dynamics: goal approach. body-centered coordinates [obstacle, goal, heading] -> steering
  • image p350fig09.22 How the negative Gaussian of an obstacle causes a peak shift to avoid the obstacle without losing sight of how to reach the goal.
    || Steering dynamics: obstacle avoidance. body-centered coordinates [obstacle, goal, heading] -> steering
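The attractor-repeller steering dynamics of figures 9.21-9.22 can be sketched in a few lines, in the spirit of the Fajen and Warren (2003) damped-spring model: the goal pulls the heading like a spring, while an obstacle's negative Gaussian pushes the heading away, producing the peak shift in the net steering gradient. All gains and widths below are illustrative, not fitted model parameters:

```python
import numpy as np

# Net steering gradient on heading angle (body-centered coordinates).
# Goal acts as an attractor; obstacle acts as a Gaussian repeller.
def steering_gradient(heading, goal_angle, obstacle_angle,
                      k_goal=1.0, k_obs=1.5, sigma=0.4):
    pull = -k_goal * (heading - goal_angle)        # spring-like pull to goal
    d = heading - obstacle_angle
    push = k_obs * d * np.exp(-d**2 / (2 * sigma**2))  # push away from obstacle
    return pull + push

# Heading slightly to one side of an obstacle straight ahead on the goal
# line is pushed further to that side (positive gradient), without ever
# losing the long-range pull back toward the goal.
near_obstacle = steering_gradient(heading=0.1, goal_angle=0.0, obstacle_angle=0.0)
far_obstacle  = steering_gradient(heading=0.1, goal_angle=0.0, obstacle_angle=3.0)
```

With the obstacle far off to the side, its Gaussian contributes essentially nothing and the gradient reduces to pure goal attraction, matching the "goal approach" case of figure 9.21.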
  • image p350fig09.23 Unidirectional transient cells respond to changes in all image contours as an auto navigates an urban scene while taking a video of it.
    || Unidirectional transient cells (Baloch, Grossberg 1997; Berzhanskaya, Grossberg, Mingolla 2007). Transient cells respond to leading and trailing boundaries. Transient cell responses, driving video
  • image p351fig09.24 Directional transient cells respond most to motion in their preferred directions.
    || Directional transient cells. 8 directions, 3 speeds
  • image p351fig09.25 By the time MT+ is reached, directional transient cells and directional filters have begun to extract more global directional information from the image.
    || MT+ computes global motion estimate. Estimate global motion from noisy local motion estimates.
  • image p352fig09.26 The final stage of the model computes a beautiful expansion optic flow that permits an easy estimate of the heading direction, with an accuracy that matches that of human navigators.
    || The model generates accurate heading (Warren, Hannon 1990; Royden, Crowell, Banks 1994). Maximally active MSTd cell = heading estimate. Accuracy matches human data. Random dots [mean +-1.5°, worst +-3.8°], Random dots with rotation [accurate with rotations <1°/s, rotation increases, error decreases], OpenGL & Yosemite benchmark +-1.5°, Driving video +-3°.
  • image p354fig10.01 The laminar cortical circuit that realizes how we pay attention to an object sends signals from layer 6 of a higher cortical level to layer 6 of a lower cortical level and then back up to layer 4. This "folded feedback" circuit realizes a top-down, modulatory on-center, off-surround circuit that realizes the ART Matching Rule.
    || Top-down attention and folded feedback. Attentional signals also feed back into 6-to-4 on-center off-surround. 1-to-5-to-6 feedback path: Macaque (Lund, Booth 1975) cat (Gilbert, Wiesel 1979). V2-to-V1 feedback is on-center off-surround and affects layer 6 of V1 the most (Bullier etal 1996; Sandell, Schiller 1982). Attended stimuli enhanced, ignored stimuli suppressed. This circuit supports the predicted ART Matching Rule! [LGN, V[1,2][6->1]]
  • image p355fig10.02 Distinguishing processes of seeing vs knowing has been difficult because they interact so strongly.
    || Seeing vs. Knowing. Seeing and knowing [operate at different levels of the brain, use specialized circuits], but they [interact via feedback, use similar cortical designs, feedback is needed for conscious perception]. Cerebral Cortex: Seeing [V1-V4, MS-MST], Knowing [IT, PFC].
  • image p356fig10.03 Laminar computing achieves at least three basic properties of visual processing that have analogs in all biologically intelligent behaviors. These properties may be found in all cortical circuits in specialized form.
    || What does Laminar Computing achieve? 1. Self-stabilizing development and learning; 2. Seamless fusion of a) pre-attentive automatic bottom-up processing, b) attentive task-selective top-down processing; 3. Analog coherence: Solution of Binding Problem for perceptual grouping without loss of analog sensitivity. Even the earliest visual cortical stages carry out active adaptive information processing: [learn, group, attention]ing
  • image p357fig10.04 Laminar Computing achieves its properties by computing in a new way that synthesizes the best properties of feedforward and feedback interactions, analog and digital computations, and preattentive and attentive learning. The property of analog coherence enables coherent groupings and decisions to form without losing sensitivity to the amount of evidence that supports them.
    || Laminar Computing: a new way to compute. 1. Feedforward and feedback: a) Fast feedforward processing when data are unambiguous (eg Thorpe etal), b) slower feedback chooses among ambiguous alternatives [self-normalizing property, real-time probability theory], c) A self-organizing system that trades certainty against speed: Goes beyond Bayesian models! 2. Analog and Digital: Analog Coherence combines the stability of digital with the sensitivity of analog. 3. Preattentive and Attentive Learning: Reconciles the differences of (eg) Helmholtz and Kanizsa, "A preattentive grouping is its own attentional prime."
  • image p359fig10.05 Activation of V1 is initiated, in part, by direct excitatory signals from the LGN to layer 4 of V1.
    || How are layer 2/3 bipole cells activated? Direct bottom-up activation of layer 4. LGN -> V1 layer 4. Strong bottom-up LGN input to layer 4 (Stratford etal 1996; Chung, Ferster 1998). Many details omitted.
  • image p359fig10.06 Another, albeit indirect, pathway from LGN exists that can also excite layer 4 of V1. Why are not these two pathways redundant? The answer, ultimately, has to do with how the cortex learns, as well as with how it pays attention. See the text for details.
    || Another bottom-up input to layer 4: Why?? Layer 6-to-4 on-center off-surround (Grieve, Sillito 1991, 1995; Ahmed etal 1994, 1997). LGN projects to layers 6 and 4. Layer 6 excites spiny stellates in column above it. Medium range connections onto inhibitory neurons. 6-to-4 path acts as on-center off-surround.
  • image p359fig10.07 The two bottom-up pathways from LGN to layer 4 of V1 can together activate layer 4 and contrast-normalize layer 4 responses.
    || Bottom-up contrast normalization (Grossberg 1968, 1973; Sperling, Sondhi 1968; Heeger 1992; Douglas etal 1995; Shapley etal 2004). Together, direct LGN-to-4 path and 6-to-4 on-center off-surround provide contrast normalization if cells obey shunting or membrane equation dynamics.
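The contrast normalization that shunting (membrane-equation) dynamics provide can be shown with the steady state of Grossberg's shunting equation, dx/dt = -A*x + (B - x)*E - x*I, whose equilibrium is x = B*E / (A + E + I). A minimal sketch (parameter values A, B and the inputs are illustrative):

```python
# Equilibrium of a shunting on-center off-surround cell:
#   dx/dt = -A*x + (B - x)*E - x*I   =>   x_eq = B*E / (A + E + I)
# Activity stays bounded by B and is divided by total input, so the
# cell preserves relative contrast while normalizing overall intensity.
def shunting_equilibrium(E, I, A=1.0, B=1.0):
    return B * E / (A + E + I)

low  = shunting_equilibrium(E=1.0, I=1.0)     # dim input
high = shunting_equilibrium(E=10.0, I=10.0)   # same contrast, 10x intensity
# 'high' approaches but never exceeds B/2: response saturates gracefully
# instead of blowing up, which is the contrast-normalization property.
```

This is why the direct LGN-to-4 path plus the 6-to-4 on-center off-surround can together normalize layer 4 without losing sensitivity to the input pattern.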
  • image p360fig10.08 The bottom-up on-center off-surround from LGN-to-6-to-4 has a modulatory on-center because of its role in realizing the ART Matching Rule and, with it, the ability of the cortex to dynamically stabilize its learned memories.
    || Modulation of priming by 6-to-4 on-center (Stratford etal 1996; Callaway 1998). On-center 6-to-4 excitation is inhibited down to being modulatory (priming, subthreshold). On-center 6-to-4 excitation cannot activate layer 4 on its own. Clarifies need for direct path. Prediction: plays key role in stable grouping, development and learning. ART Matching Rule!
  • image p360fig10.09 Perceptual grouping is carried out in layer 2/3 by long-range horizontal excitatory recurrent connections, supplemented by short-range disynaptic inhibitory connections that together realize the bipole grouping properties that are diagrammed in Figure 10.10.
    || Grouping starts in layer 2/3. LGN-> 6-> 4-> 2/3: 1. Long-range horizontal excitation links collinear, coaxial receptive fields (Gilbert, Wiesel 1989; Bosking etal 1997; Schmidt etal 1997) 2. Short-range disynaptic inhibition of target pyramidal via pool of interneurons (Hirsch, Gilbert 1991) 3. Unambiguous groupings can form and generate feedforward outputs quickly (Thorpe etal 1996).
  • image p361fig10.10 Bipole grouping is achieved by long-range horizontal recurrent connections that also give rise to short-range inhibitory interneurons which inhibit nearby bipole cells as well as each other.
    || Bipole property controls perceptual grouping. Collinear input on both sides. Excitatory inputs summate. Inhibitory inputs normalize, Shunting inhibition! Two-against-one. Cell is excited.
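The "two-against-one" arithmetic of the bipole property can be sketched in a toy form. This is only an illustrative caricature of the circuit logic, not the model's actual shunting equations: each flank's long-range excitation arrives with matched disynaptic inhibition, and because the inhibitory interneurons inhibit one another (shunting), total inhibition is normalized rather than summed:

```python
# Toy bipole cell: fires only with collinear support on BOTH flanks.
# One flank alone: 1 (excitation) vs 1 (inhibition) -> cancelled.
# Both flanks: 2 (excitation summates) vs ~1 (inhibition normalized
# by mutual shunting among interneurons) -> two-against-one, cell fires.
def bipole(left, right, bottom_up=0.0, threshold=0.5):
    excitation = left + right + bottom_up
    inhibition = max(left, right)          # normalized, not summed
    return max(excitation - inhibition, 0.0) > threshold

one_side  = bipole(left=1.0, right=1.0 * 0)   # no grouping from one flank
two_sides = bipole(left=1.0, right=1.0)       # grouping forms inwardly
```

A bottom-up input by itself can also drive the cell, which is why real contours are seen directly while illusory contours form only inwardly between inducers.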
  • image p362fig10.11 Feedback between layer 2/3 and the layer 6-to-4-to-2/3 feedback loop chooses the strongest grouping in cases where there is more than one. If only one grouping exists, then the circuit can function very quickly in a feedforward manner. When multiple groupings exist, the cortex "runs as fast as it can" to select the one with the most evidence to support it using the self-normalizing inhibition in the layer 6-to-4 off-surround.
    || How is the final grouping selected? Folded feedback LGN-> 6-> 4-> 2/3. 1. Layer 2/3 groupings feed back into 6-to-4 on-center off-surround: a) direct layer 2/3 -to-6 path; b) can also go via layer 5 (Blasdel etal 1985; Kisvarday etal 1989). 2. Strongest grouping enhanced by its on-center. 3. Inputs to weaker groupings suppressed by off-surround. 4. Interlaminar feedback creates functional columns. Activities of conflicting groupings are reduced by self-normalizing inhibition, slowing processing; intracortical feedback selects and contrast-enhances the winning grouping, speeding processing.
  • image p363fig10.12 The same laminar circuit design repeats in V1 and V2, albeit with specializations that include longer horizontal grouping axons and figure-ground separation interactions.
    || V2 repeats V1 circuitry at larger spatial scale, LGN-> V1[6,4,2/3]-> V2[6,4,2/3]. V2 layer 2/3 horizontal axons longer-range than in V1 (Amir etal 1993). Therefore, longer-range groupings can form in V2 (Von der Heydt etal 1984)
  • image p364fig10.13 The bottom-up adaptive filter, intracortical grouping circuit, and intercortical top-down attentional circuit all use the same competitive decision circuit between layers 6 and 4, called the attention-preattention interface, with which to select the featural patterns that will be processed.
    || Bottom-up filters and intracortical grouping feedback use the same 6-to-4 decision circuit, LGN-> Vx[6,4,2/3]. competitive decision circuit, modulatory on-center off-surround network. Top-down intercortical attention also uses the same 6-to-4 decision circuit!
  • image p364fig10.14 This figure emphasizes how preattentive intracortical groupings and top-down intercortical attention share the same modulatory on-center, off-surround layer 6-to-4 decision circuit.
    || Explanation: grouping and attention share the same modulatory decision circuit. Layer 6-6-4-2/3 pathway shown; also a layer 6-1-2/3 path. intercortical attention, both act via a modulatory on-center off-surround decision circuit, intracortical feedback from groupings
  • image p367fig10.15 Data (left column) and simulation (right column) of how attention prevents a masking stimulus from inhibiting the response to the on-center of the cell from which the recording was made.
    || Attention protects target from masking stimulus (Reynolds etal 1999; Grossberg, Raizada 2000).
  • image p367fig10.16 Neurophysiological data (left image) and simulation (right image) of how a low-contrast target can be facilitated if it is surrounded by a pair of collinear flankers, and suppressed by them if it has high contrast.
    || Flankers can enhance or suppress targets (Polat etal 1998; Grossberg, Raizada 2000). target alone, target + flankers, flankers alone.
  • image p368fig10.17 Neurophysiological data (left image) and simulation (right image) showing that attention has a greater effect on low contrast than high contrast targets.
    || Attention has greater effect on low contrast targets (DeWeerd etal 1999; Raizada, Grossberg 2001). Threshold increase (deg) vs Grating contrast (%), [no, with] attention
  • image p368fig10.18 Neurophysiological data (left image) and simulation (right image) of relative on-cell activities when the input to that cell may also be surrounded by iso-orientation or perpendicular textures.
    || Texture reduces response to a bar: iso-orientation suppression (Knierim, van Essen 1992), perpendicular suppression (Raizada, Grossberg 2001)
  • image p369fig10.19 Data from (Watanabe etal 2001) showing perceptual learning of the coherent motion direction, despite the lack of extra-foveal attention and awareness of the moving stimuli.
    || Unconscious perceptual learning of motion direction, % correct for two tests, compared to chance level results.
  • image p371fig11.01 FACADE theory explains how the 3D boundaries and surfaces are formed with which we see the world in depth.
    || 3D Vision and figure-ground perception (Grossberg 1987, 1994, 1997). How are 3D boundaries and 3D surfaces formed? How the world looks without assuming naive realism. Form And Color And DEpth theory (FACADE). Prediction: Visible figure-ground-separated Form-And-Color-And-DEpth are represented in cortical area V4.
  • image p372fig11.02 FACADE theory explains how multiple depth-selective boundary representations can capture the surface lightnesses and colors at the correct depths. The fact that both surface qualia and depth are determined by a single process implies that, for example, a change in brightness can cause a change in depth.
    || 3D surface filling-in. From filling-in of surface lightness and color to filling-in of surface depth. Prediction: Depth-selective boundary-gated filling-in defines the 3D surfaces that we see. Prediction: A single process fills-in lightness, color, and depth. Can a change in brightness cause a change in depth? YES! eg proximity-luminance covariance (Egusa 1983, Schwartz, Sperling 1983). Why is depth not more unstable when lighting changes? Prediction: Discounting the illuminant limits variability.
  • image p373fig11.03 Both contrast-specific binocular fusion and contrast-invariant boundary perception are needed to properly see the world in depth.
    || How to unify contrast-specific binocular fusion with contrast-invariant boundary perception? Contrast-specific binocular fusion: [Left, right] eye view [, no] binocular fusion. Contrast-invariant boundary perception: contrast polarity along the gray square edge reverses; opposite polarities are pooled to form object boundary.
  • image p374fig11.04 The three processing stages of monocular simple cells, binocular simple cells, and complex cells accomplish both contrast-specific binocular fusion and contrast-invariant boundary perception.
    || Model unifies contrast-specific binocular fusion and contrast-invariant boundary perception (Ohzawa etal 1990; Grossberg, McLoughlin 1997). [Left, right] eye V1-4 simple cells-> V1-3B simple cells-> V1-2/3A complex cells. Contrast-specific stereoscopic fusion by disparity-selective simple cells. Contrast-invariant boundaries by pooling opposite polarity binocular simple cells at complex cells layer 2/3A.
  • image p374fig11.05 The brain uses a contrast constraint on binocular fusion to help ensure that only contrasts which are derived from the same objects in space are binoculary matched.
    || Contrast constraint on binocular fusion. Left and right input from same object has similar contrast, Percept changes when one contrast is different. Fusion only occurs between bars of similar contrast (McKee etal 1994)
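The contrast constraint lends itself to a few-line caricature. This is a toy sketch only, with an illustrative relative-contrast tolerance standing in for the model's obligate-cell inhibition:

```python
# Toy version of the contrast constraint on binocular fusion
# (tolerance value is an assumption, not from the model).

def can_fuse(contrast_left, contrast_right, tolerance=0.3):
    """Binocular fusion only when the two eyes' contrasts are nearly equal."""
    hi = max(contrast_left, contrast_right)
    lo = min(contrast_left, contrast_right)
    return hi > 0 and (hi - lo) / hi <= tolerance

assert can_fuse(0.8, 0.7)          # similar contrasts: fused
assert not can_fuse(0.8, 0.2)      # mismatched contrasts: no fusion
```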
  • image p375fig11.06 The contrast constraint on binocular fusion is realized by obligate cells in layer 3B of cortical area V1.
    || Model implements contrast constraint on binocular fusion (cf. "obligate" cells Poggio 1991). An ecological constraint on cortical development. [left, right] eye cart V1-[4 monocular simple, 3B binocular simple, complex2/3A] cells. Inhibitory cells (red) ensure that fusion occurs when contrasts in left and right eye are approximately equal.
  • image p375fig11.07 The 3D LAMINART model uses both monocular and binocular simple cells to binocularly fuse like image contrasts. The remainder of the model generates 3D boundary and surface representations of multiple kinds of experiments as well as of natural scenes.
    || 3D LAMINART model (Cao, Grossberg 2005). [left, right] eye cart [LGN, V1 blob [4->3B->2/3A], V2 thin stripe [4->2/3A], V4]. V1 blob [V1-4 monocular, V1 interior binocular] simple cells. [complex, simple, inhibitory] cells, on-center off-surround
  • image p376fig11.08 The contrast constraint on binocular fusion is not sufficient to prevent many of the false binocular matches that satisfy this constraint.
    || How to solve the correspondence problem? How does the brain inhibit false matches? Contrast constraint is not enough. [stimulus, multiple possible binocular matches] - Which squares in the two retinal images must be fused to form the correct percept?
  • image p376fig11.09 The disparity filter in V2 helps to solve the correspondence problem by eliminating spurious contrasts using line-of-sight inhibition.
    || Model V2 disparity filter solves the correspondence problem. An ecological constraint on cortical development. [left, right] eye view: False matches (black) suppressed by line-of-sight inhibition (green lines). "Cells that fire together wire together".
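Line-of-sight inhibition can be sketched as a greedy winner-take-all over candidate matches: a stronger match suppresses weaker matches sharing either line of sight. This is an assumed simplification, not the laminar V2 circuit itself:

```python
# Toy disparity filter: each left and right feature supports at most one
# match; stronger matches suppress weaker ones along shared lines of sight.

def disparity_filter(candidates):
    """candidates: list of (left_index, right_index, strength)."""
    kept, used_left, used_right = [], set(), set()
    for L, R, s in sorted(candidates, key=lambda m: -m[2]):
        # a stronger match along the same line of sight suppresses this one
        if L in used_left or R in used_right:
            continue
        kept.append((L, R))
        used_left.add(L)
        used_right.add(R)
    return sorted(kept)

# Two left and two right features give four candidate matches; only the
# two strongest mutually consistent matches survive.
matches = [(0, 0, 0.9), (0, 1, 0.4), (1, 0, 0.4), (1, 1, 0.8)]
assert disparity_filter(matches) == [(0, 0), (1, 1)]
```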
  • image p376fig11.10 The 3D LAMINART model shows how the disparity filter can be integrated into the circuit that completes 3D boundary representations using bipole grouping cells. It also explains how surface contours can strengthen boundaries that succeed in generating closed filling-in domains.
    || 3D LAMINART model (Cao, Grossberg 2005). [left, right] eye cart [LGN, V1 blob [4->3B->2/3A] surface contour, V2 thin stripe (monocular surface) [4->2/3A], V2 interior [disynaptic inhibitory interneurons, bipole grouping cells, disparity filter, V4 binocular surface]. [complex, simple, inhibitory] cells, on-center off-surround
  • image p377fig11.11 DaVinci stereopsis phenomena occur when only one eye can receive visual inputs from part of a 3D scene due to occlusion by a nearer surface.
    || How does monocular information contribute to depth perception? DaVinci stereopsis (Gillam etal 1999). Only by utilizing monocular information can the visual system create the correct depth percept. [left, right] eye view
  • image p378fig11.12 How monocular and binocular information are combined in V1 and V2 in the 3D LAMINART model.
    || Model utilizes monocular information. [left, right] eye cart V1-[4 monocular simple, 3B binocular simple, complex2/3A [mo,bi]nocular] cells, V2-4 binocular complex cells. black = monocular cells, blue = binocular cells. In V2, monocular inputs add to binocular inputs along the line of sight and contribute to depth perception.
  • image p379fig11.13 How the 3D LAMINART model explains DaVinci stereopsis. All the stages of boundary and surface formation are color coded to clarify their explanation. Although each mechanism is very simple, when all of them act together, the correct depthful surface representation is generated. See the text for details.
    || DaVinci stereopsis (Nakayama, Shimojo 1990). An emergent property of the previous simple mechanisms working together. [V1 binocular boundaries -> V2 initial boundaries -> V2 final boundaries -> V4 surface] pair [Binocular match: boundaries of thick bar -> Add monocular boundaries along lines-of-sight -> Line-of-sight inhibition kills weaker vertical boundaries -> 3D surface percept not just a disparity match!] pair [Binocular match: right edge of thin and thick bars -> Strongest boundaries: binocular and monocular boundaries add -> Vertical boundaries from monocular left edge of thin bar survive -> Filling-in contained by connected boundaries]. cart [very near, near, fixation plane, far, very far]
  • image p380fig11.14 The model explanation of DaVinci stereopsis when the input stimuli have opposite contrast polarities.
    || Polarity-reversed Da Vinci stereopsis (Nakayama, Shimojo 1990). Same explanation! (... as Figure 11.13 ...) [V1 binocular boundaries -> V2 initial boundaries -> V2 final boundaries -> V4 surface] cart [very near, near, fixation plane, far, very far]
  • image p381fig11.15 The same model mechanisms explain the surface percept that is generated by the variant of DaVinci stereopsis that Gillam, Blackburn, and Nakayama studied in 1999.
    || DaVinci stereopsis (Gillam, Blackburn, Nakayama 1999). same model mechanisms. [V1 binocular boundaries -> V2 initial boundaries -> V2 final boundaries -> V4 surface] cart [very near, near, fixation plane, far, very far]
  • image p382fig11.16 The version of DaVinci stereopsis wherein three narrow rectangles are binocularly matched with one thick rectangle can also be explained in a similar way.
    || DaVinci stereopsis of [3 narrow, one thick] rectangles (Gillam, Blackburn, Nakayama 1999). [V1 binocular boundaries -> V2 initial boundaries -> V2 final boundaries -> V4 surface] cart [very near, near, fixation plane, far, very far]
  • image p383fig11.17 The bars in the left and right images that are in the same positions are marked in red to simplify tracking how they are processed at subsequent stages.
    || The Venetian blind effect (Howard, Rogers 1995). Every second bar on L in same position as every third bar on R. These bars are marked in red; see them match in Fixation Plane. [V1 binocular boundaries -> V2 initial boundaries -> V2 final boundaries -> V4 surface] cart [very near, near, fixation plane, far, very far]
  • image p384fig11.18 Surface and surface-to-boundary surface contour signals that are generated by the Venetian blind image.
    || Venetian blind effect (Howard, Rogers 1995). Every second bar on L in same position as every third bar on R. PERCEPT: 3-bar ramps sloping up from L to R with step returns. [V1 binocular boundaries -> V2 initial boundaries -> V2 final boundaries -> V4 surface] cart [very near, near, fixation plane, far, very far]
  • image p385fig11.19 Dichoptic masking occurs when the bars in the left and right images have sufficiently different contrasts.
    || Dichoptic masking (McKee, Bravo, Smallman, Legge 1994). [V1 binocular boundaries -> V2 initial boundaries -> V2 final boundaries -> V4 surface] cart [very near, near, fixation plane, far, very far]
  • image p387fig11.22 Simulation of the boundaries that are generated by the Julesz stereogram in Figure 4.59 (top row) without (second row) and with (third row) surface contour feedback.
    || Boundary cart [V2-2, V2, V1] cart [near, fixation, far]
  • image p388fig11.23 Simulation of the surface percept that is seen in response to a sparse stereogram. The challenge is to assign large regions of ambiguous white to the correct surface in depth.
    || [left, right] retinal input. Surface [near, fixation, far] V4
  • image p388fig11.24 Boundary groupings capture the depth-ambiguous feature contour signals and lift them to the correct surface in depth.
    || [surface, boundary] cart [near, fixation, far] V2.
  • image p389fig11.25 Boundaries are not just edge detectors. If they were, a shaded ellipse would look flat, and uniformly gray.
    || 3D vision and figure-ground separation. Multiple-scale, depth-selective boundary webs. [dark-light, light-dark] boundaries -> complex cells! If boundaries were just edge detectors, there would be just a bounding edge of the ellipse. After filling-in, it would look like this:.
  • image p390fig11.26 Although larger scales sometimes look closer (left image), that is not always true, as the right image of (Brown, Weisstein 1988) illustrates. The latter percept is, moreover, bistable. These images show the importance of interactions between groupings and multiple scales to determine perceived surface depths.
    || Multiple-scale depth-selective groupings determine perceived depth (Brown, Weisstein 1988). As an object approaches, it gets bigger on the retina. Does a big scale (RF) always signal NEAR? NO! The same scale can signal either near or far. Some scales fuse more than one disparity.
  • image p391fig11.27 (left image) Each scale can binocularly fuse a subset of spatial scales, with larger scales fusing more scales and closer ones than small scales. (right image) Cortical hypercolumns enable binocular fusion to occur in a larger scale even as rivalry occurs in a smaller scale.
    || Multiple-scale grouping and size-disparity correlation. Depth-selective cooperation and competition among multiple scales determines perceived depth: a) Larger scales fuse more depths; b) Simultaneous fusion and rivalry. Boundary pruning using surface contours: Surface-to-boundary feedback from the nearest surface that is surrounded by a connected boundary eliminates redundant boundaries at the same position and further depths.
  • image p391fig11.28 (left image) Ocular dominance columns respond selectively to inputs from one eye or the other. (right image) Inputs from the two eyes are mapped into layer 4C of V1, among other layers.
    || Cortex V1[1, 2/3, 4A, 4B, 4C, 5, 6], LGN
  • image p392fig11.29 Boundary webs of the smallest scales are closer to the boundary edge of the ellipse, and progressively larger scale webs penetrate ever deeper into the ellipse image, due to the amount of evidence that they need to fire. Taken together, they generate a multiple-scale boundary web with depth-selective properties that can capture depth-selective surface filling-in.
    || 3D vision and figure-ground separation. Multiple-scale, depth-selective boundary webs. Instead, different size detectors generate dense boundary webs at different positions and depths along the shading gradient. Small-far, Larger-nearer, Largest-nearest. Each boundary web captures the gray shading in small compartments at its position and depths. A shaded percept in depth results.
  • image p392fig11.30 Multiple scales interact with bipole cells that represent multiple depths, and conversely. See the text for details.
    || How multiple scales vote for multiple depths. Scale-to-depth and depth-to-scale maps. Smallest scale projects to, and receives feedback from, boundary groupings that represent the furthest depths. Largest scale connects to boundary groupings that represent all depths. multiple-[depth, scale] dot [grouping, filter] cells. [small <-> large] vs [far <-> near]
  • image p393fig11.31 (Todd, Akerstrom 1987) created a series of 2D images from discrete black patches on a white disk and showed how the perceived depth varies with the factors summarized in the figure. The LIGHTSHAFT model quantitatively simulated their data.
    || Factors determining depth-from-texture percept. Perceived depth varies with texture element width, but only when elements are elongated and sufficiently aligned with one another to form long-range groupings. Data of (Todd, Akerstrom 1987) simulated by the LIGHTSHAFT model of (Grossberg, Kuhlmann 2007). [HP, LP, CCE, CCS, RO]
  • image p393fig11.32 Kulikowski stereograms involve binocular matching of out-of-phase (a) Gaussians or (b) rectangles. The latter can generate a percept of simultaneous fusion and rivalry. See the text for why.
    ||
  • image p394fig11.33 The Kaufman stereogram also creates a percept of simultaneous fusion and rivalry. The square in depth remains fused and the perpendicular lines in the two images are perceived as rivalrous.
    || 3D groupings determine perceived depth, stereogram (Kaufman 1974). Vertical illusory contours are at different disparities than those of bounding squares. Illusory square is seen in depth. Vertical illusory contours are binocularly fused and determine the perceived depth of the square. Thin, oblique lines, being perpendicular, are rivalrous: simultaneous fusion and rivalry.
  • image p395fig11.34 A comparison of the properties of other rivalry models with those of the 3D LAMINART model (surrounded by red border). Significantly, only 3D LAMINART explains both stable vision and rivalry (green border).
    || Comparison of rivalry models
  • image p396fig11.35 Three properties of bipole boundary grouping in V2 can explain how boundaries oscillate in response to rivalry-inducing stimuli. Because all boundaries are invisible, however, these properties are not sufficient to generate a conscious percept of rivalrous surfaces.
    || 3 V2 boundary properties cause binocular rivalry. 1. Bipole grouping, 2. Orientational competition, 3. Activity-dependent habituation
  • image p397fig11.36 Simulation of the temporal dynamics of rivalrous, but coherent, boundary switching.
    || Simulation of 2D rivalry dynamics. [Inputs, Temporal dynamics of V2 layer 2/3 boundary cells] cart [left, right]
  • image p398fig11.37 Simulation of the no swap baseline condition of (Logothetis, Leopold, Sheinberg 1996).
    || [Binocular, [left, right] eye] activity
  • image p399fig11.38 Simulation of the swap condition of (Logothetis, Leopold, Sheinberg 1996).
    || [Binocular, [left, right] eye] activity
  • image p399fig11.39 Simulation of the eye rivalry data of (Lee, Blake 1999).
    || [Binocular, [left, right] eye] activity
  • image p400fig11.40 When planar 2D parallelograms are juxtaposed, the resultant forms generate 3D percepts that are sensitive to the configuration of angles and edges in the figure. See the text for why.
    || 3D representation of 2D images, Monocular cues (eg angles) can interact together to yield 3D interpretation. Monocular cues by themselves are often ambiguous. Same angles and shapes, different surface slants. How do these ambiguous 2D shapes contextually define a 3D object form?
  • image p401fig11.41 The 3D LAMINART model proposes how angle cells and disparity-gradient cells interact through learning to generate 3D representations of slanted objects.
    || 3D LAMINART model. [LGN, V1, V2, V4] Four key additions: 1. Angle cells - tuned to various angles; 2. Disparity-gradient cells - tuned to disparity gradients in the image; 3. Weights from [angle to disparity-gradient] cells - learned while viewing 3D image; 4. Collinear grouping between [angle to disparity-gradient] cells - disambiguates ambiguous groupings.
  • image p401fig11.42 A hypothetical cortical hypercolumn structure proposes how angle cells and disparity-gradient cells, including bipole cells that stay within a given depth, may self-organize during development.
    || Hypercolumn representation of angles [left, right] cart [far-to-near, zero, near-to-far]
  • image p402fig11.43 A pair of disparate images of a scene from the University of Tsukuba Multiview image database.
    || input [left, right]
  • image p402fig11.44 3D scenic reconstruction of the image in Figure 11.43 by the 3D LAMINART model.
    || Disparity [5, 6, 8, 10, 11, 14]: images of objects in common depth planes
  • image p403fig11.45 The multiple boundary and surface scales that were used to simulate a reconstruction of the SAR image in Figure 3.24.
    || SAR processing by multiple scales. [boundaries before completion, boundaries after completion, surface filling-in] versus scale [small, medium, large]. large scale bipole
  • image p405fig12.01 A What ventral cortical stream and Where/How dorsal cortical stream have been described for audition, no less than for vision.
    || Parietal lobe: where; Temporal lobe: what. V1-> [[what: IT], [where: PPC-> DLPFC]]. A1-> [[what: [ST-> VLPFC], VLPFC], [where: [PPC-> DLPFC], DLPFC]].
  • image p407fig12.03 Neurophysiological data showing how motor cortical cells code different vectors that are sensitive to both the direction of the commanded movement and its length.
    || (a) Single primary motor cortex neuron, onset of movement -> on..., radial architecture... (b) Motor cortex neuronal population, radial architecture...
  • image p409fig12.04 (top half) Neurophysiological data of vector cell responses in motor cortex. (bottom half) VITE model simulations of a simple movement in which the model
  • image p410fig12.05 VITE simulation of velocity profile invariance if the same GO signal gates shorter (a) or longer (b) movements. Note the higher velocities in (b).
    || [[short, long] cart [G, dP/dt]] vs time. G = GO signal, dP/dt = velocity profile.
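The GO-gated trajectory dynamics can be sketched numerically. This assumes the standard one-dimensional VITE reduction, dD/dt = alpha(-D + T - P) and dP/dt = G(t)·[D]+, with a linearly growing GO signal; all parameter values are illustrative:

```python
# Minimal 1-D VITE sketch (parameters illustrative): the difference
# vector D relaxes toward T - P, and the GO signal G(t) gates the
# outflow velocity dP/dt = G(t) * [D]+.

def vite(T, alpha=30.0, g_rate=4.0, dt=0.001, steps=1500):
    D, P = 0.0, 0.0
    velocities = []
    for k in range(steps):
        G = g_rate * k * dt            # growing GO signal
        dD = alpha * (-D + T - P)      # difference-vector dynamics
        v = G * max(D, 0.0)            # GO gates the velocity command
        D += dt * dD
        P += dt * v
        velocities.append(v)
    return P, velocities

P1, v1 = vite(T=1.0)   # shorter movement
P2, v2 = vite(T=2.0)   # longer movement, same GO signal
assert max(v2) > max(v1)                       # longer movement, higher peak speed
assert abs(P1 - 1.0) < 0.1 and abs(P2 - 2.0) < 0.2   # both reach their targets
```

The bell-shaped velocity profile and the higher peak velocity for the longer movement under the same GO signal mirror the invariance property described in the caption.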
  • image p411fig12.07 The left column simulation by VITE shows the velocity profile when the GO signal (G) starts with the movement. The right column shows that the peak velocity is much greater if a second movement begins when the GO signal is already positive.
    || Higher peak velocity due to target switching. VITE simulation of higher peak speed if second target rides on first GO signal. [[first, second] target cart [G, dP/dt]] vs time. Second target GO is much higher. G = GO signal, dP/dt = velocity profile.
  • image p411fig12.08 Agonist-antagonist opponent organization of difference vector (DV) and present position vector (PPV) processing stages and how GO signals gate them.
    ||
  • image p412fig12.09 How a Vector Associative Map, or VAM, model uses mismatch learning during its development to calibrate inputs from a target position vector (T) and a present position vector (P) via mismatch learning of adaptive weights at the difference vector (D). See the text for details.
    || Vector Associative Map model (VAM). During critical period, Endogenous Random Generator (ERG+) turns on, activates P, and causes random movements that sample workspace. When ERG+ shuts off, posture occurs. ERG- then turns on (rebound) and opens Now Print (NP) gate, that dumps P into T. Mismatch learning enables adaptive weights between T and D to change until D (the mismatch) approaches 0. Then T and P are both correctly calibrated to represent the same positions.
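The mismatch-learning step reduces to a scalar caricature: let the difference-vector activity be D = T - P + z, with an adaptive weight z that changes to drive D toward zero. A minimal sketch, not the full VAM circuit, with illustrative rates:

```python
# Scalar caricature of VAM mismatch learning (learning rate and step
# count are assumptions): the weight z shrinks the mismatch D to ~0.

def vam_calibrate(T, P, z=0.0, lr=0.5, steps=60):
    for _ in range(steps):
        D = T - P + z      # mismatch at the difference-vector stage
        z -= lr * D        # mismatch learning reduces D
    return T - P + z       # residual mismatch after learning

residual = vam_calibrate(T=0.7, P=0.4)
assert abs(residual) < 1e-9    # D approaches 0, calibrating T against P
```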
  • image p413fig12.10 Processing stages in cortical areas 4 and 5 whereby the VITE model combines outflow VITE trajectory formation signals with inflow signals from the spinal cord and cerebellum that enable it to carry out movements with variable loads and in the presence of obstacles. See the text for details.
    || area 4 (rostral) <-> area 5 (caudal).
  • image p414fig12.11 Neurophysiological data from cortical areas 4 and 5 (every other column) and simulations thereof (other columns) during a reach.
    || activation vs time. (a) area 4 phasic RT (IFV) (b) area 4 tonic (OPV) (c) area 4 phasic-tonic (OFPV) (d) area 4 phasic MT (DVV) (e) area 5 phasic (DV) (f) area 5 tonic (PPV)
  • image p415fig12.12 The combined VITE, FLETE, cerebellar, and multi-joint opponent muscle model for trajectory formation in the presence of variable forces and obstacles.
    ||
  • image p416fig12.13 The DIRECT model learns, using a circular reaction that is energized by an Endogenous Random Generator, or ERG, to make motor-equivalent volitionally-activated reaches. This circular reaction learns a spatial representation of a target in space. It can hereby make accurate reaches with clamped joints and on its first try using a tool under visual guidance; see Figure 12.16.
    || DIRECT model (Bullock, Grossberg, Guenther 1993). learns by circular reaction. learns spatial representation to mediate between vision and action. motor-equivalent reaching. can reach target with clamped joints. can reach target with a TOOL on the first try under visual guidance. How did tool use arise?!
  • image p416fig12.14 Computer simulations of DIRECT reaches with (b) a tool, (c) a clamped elbow, and (d) with a blindfold, among other constraints.
    || Computer simulations of DIRECT reaches [unconstrained, with TOOL, elbow clamped at 140°, blindfolded]
  • image p417fig12.15 The DIRECT and DIVA models have homologous circuits to learn and control motor-equivalent reaching and speaking, with tool use and coarticulation resulting properties. See the text for why.
    || From Seeing and Reaching to Hearing and Speaking, Circular reactions (Piaget 1945, 1951, 1952). Homologous circuits for development and learning of motor-equivalent REACHING and SPEAKING. DIRECT TOOL use (Bullock, Grossberg, Guenther 1993), DIVA Coarticulation (Guenther 1995)
  • image p418fig12.16 Anatomical interpretations of the DIVA model processing stages.
    || [Feedforward control system (FF), Feedback control subsystem (FB)]. Speech sound map (Left Ventral Premotor Cortex (LVPC)), Cerebellum, Articulatory velocity and position maps (Motor Cortex (MC)), Somatosensory Error Map (Inferior Parietal Cortex (IPC)), Auditory Error Map (Superior Temporal Cortex (STC)), Auditory State Map (Superior Temporal Cortex), Somatosensory State Map (Inferior Parietal Cortex), articulatory musculature via subcortical nuclei, auditory feedback via subcortical nuclei
  • image p419fig12.17 The auditory continuity illusion illustrates the ART Matching Rule at the level of auditory streaming. Its "backwards in time" effect of future context on past conscious perception is a signature of resonance.
    || Auditory continuity illusion. input, percept. Backwards in time - How does a future sound let a past sound continue through noise? Resonance! - It takes a while to kick in. After it starts, a future tone can maintain it much more quickly. Why does this not happen if there is no noise? - ART Matching Rule! TD harmonic filter is modulatory without BU input. It cannot create something out of nothing.
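The "modulatory without bottom-up input" property is easy to caricature: a top-down expectation multiplies bottom-up activity, so it can enhance matched features but produces nothing on its own. A toy sketch with an illustrative gain:

```python
# Caricature of the ART Matching Rule (gain value is an assumption):
# top-down expectation modulates bottom-up activity multiplicatively,
# so no bottom-up input means no output.

def art_match(bottom_up, top_down, gain=0.5):
    return [bu * (1.0 + gain * td) for bu, td in zip(bottom_up, top_down)]

noise = [0.4, 0.4, 0.4]               # broadband noise across 3 channels
expect_tone = [1.0, 0.0, 0.0]         # TD harmonic filter expects channel 0
matched = art_match(noise, expect_tone)
assert matched[0] > matched[1]        # expected frequency is enhanced/selected
assert art_match([0.0, 0.0, 0.0], expect_tone) == [0.0, 0.0, 0.0]  # nothing from nothing
```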
  • image p420fig12.18 The ARTSTREAM model explains and simulates the auditory continuity illusion as an example of a spectral-pitch resonance. Interactions of ART Matching Rule and asymmetric competition mechanisms in cortical strip maps explain how the tone selects the consistent frequency from the noise in its own stream while separating the rest of the noise into another stream.
    || ARTSTREAM model (Grossberg 1999; Grossberg, Govindarajan, Wyse, Cohen 2004). SPINET. Frequency and pitch strips. Bottom Up (BU) harmonic sieve. Top Down (TD) harmonic ART matching. Exclusive allocation. Learn pitch categories based on early harmonic processing. A stream is a Spectral-Pitch Resonance!
  • image p422fig12.19 The ARTSTREAM model includes mechanisms for deriving streams both from pitch and from source direction. See the text for details.
    || [left, right] cart Peripheral processing = [input signal-> outer & middle ear preemphasis-> basilar membrane gammatone filterbank-> energy measure]. Spectral stream layer-> spectral summation layer-> delays-> [f-, tau] plane-> pitch stream layer-> pitch summation layer.
  • image p423fig12.20 The Spatial Pitch Network, or SPINET, model shows how a log polar spatial representation of the sound frequency spectrum can be derived from auditory signals occurring in time. The spatial representation allows the ARTSTREAM model to compute spatially distinct auditory streams.
    || SPINET model (Spatial Pitch Network) (Cohen, Grossberg, Wyse 1995). 1. input sound 2. Gamma-tone filter bank 3. Short-term average energy spectrum 4. MAP transfer function 5. On-center off-surround and rectification 6. Harmonic weighting 7. Harmonic summation and competition -> PITCH
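Stages 6-7 (harmonic weighting, then summation and competition) can be sketched as a toy harmonic sieve over a discrete spectrum. The weighting scheme and frequency tolerance here are assumptions, not SPINET's fitted parameters:

```python
# Toy harmonic sieve in the spirit of SPINET stages 6-7 (weights and
# tolerance are illustrative assumptions).

def harmonic_summation(components, candidates, n_harmonics=8):
    """components: list of (frequency_hz, energy). Returns winning pitch."""
    best_pitch, best_score = None, -1.0
    for f0 in candidates:
        score = 0.0
        for h in range(1, n_harmonics + 1):
            weight = 0.9 ** h          # lower harmonics count more
            for f, e in components:
                if abs(f - h * f0) < 0.02 * h * f0:   # component near h-th harmonic
                    score += weight * e
        if score > best_score:          # competition: max score wins
            best_pitch, best_score = f0, score
    return best_pitch

# Missing-fundamental sound: components at 600, 800, 1000 Hz
sound = [(600.0, 1.0), (800.0, 1.0), (1000.0, 1.0)]
pitch = harmonic_summation(sound, candidates=[100.0, 200.0, 600.0])
assert pitch == 200.0   # residue pitch at the (absent) common fundamental
```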
  • image p424fig12.21 One of the many types of data about pitch processing that are simulated by the SPINET model. See the text for details.
    || Pitch shifts with component shifts (Patterson, Wightman 1976; Schouten 1962). Pitch vs lowest harmonic number.
  • image p424fig12.22 Decomposition of a sound (bottom row) in terms of three of its harmonics (top three rows).
    ||
  • image p425fig12.23 ARTSTREAM simulations of the auditory continuity illusion and other streaming properties. (left column, top row) When two tones are separated by silence (Input), a percept of silence also separates them in a spectral-pitch resonance. (left column, bottom row) When two tones are separated by broadband noise, the percept of the tone continues through the noise in one stream (stream 1) while the remainder of the noise occurs in a different stream (stream 2). (right column) Some of the other streaming properties that have been simulated by the ARTSTREAM model.
    || Auditory continuity does not occur without noise. Auditory continuity in noise. Other simulated streaming data.
  • image p426fig12.24 Spectrograms of /ba/ and /pa/ show the transient and sustained parts of their spectrograms.
    ||
  • image p428fig12.25 (left architecture) Auditory-articulatory feedback loop whereby babbled sounds activate learning in an imitative map that is later used to learn to reproduce the sounds of other speakers. An articulatory-to-auditory expectation renders learning possible by making the auditory and motor data dimensionally consistent, as in the motor theory of speech. (right architecture) Parallel streams in the ARTSPEECH model for learning speaker-independent speech and language meaning, including a mechanism for speaker normalization (right cortical stream) and for learning speaker-dependent vocalic qualities (left cortical stream).
    || left: Speaker-dependent vocalic qualities; right: Speaker-independent speech and language meaning
  • image p430fig12.26 The NormNet model shows how speaker normalization can be achieved using specializations of the same mechanisms that create auditory streams. See the text for how.
    || [Anchor vs Stream] log frequency map. -> diagonals-> Speaker-independent acoustic item information-> [BU adaptive filter, TD learned expectation]-> learned item recognition categories
  • image p431fig12.27 The strip maps that occur in ARTSTREAM and NormNet are variants of a cortical design that also creates ocular dominance columns in the visual cortex.
    || Adult organization of V1 (Grinvald etal http://www.weizmann.ac.il/brain/images/cubes.html). (1) Ocular dominance columns (ODCs): Alternating strips of cortex respond preferentially to visual inputs of each eye (R/L corresponds to Right and Left eye inputs in the figure); Orientation columns: A smooth pattern of changing orientation preference within each ODC. Organized in a pinwheel like fashion.
  • image p432fig12.28 (left image) The SpaN model simulates how spatial representations of numerical quantities are generated in the parietal cortex. (right image) Behavioral numerosity data and SpaN model simulations of it.
    || (Left) preprocessor-> spatial number map-> Comparison wave. (Right) data axis: number of lever presses; model axis: node position in the spatial number axis
  • image p433fig12.29 Learning of place-value number maps language categories in the What cortical stream into numerical strip maps in the Where cortical stream. See the text for details.
    || (1) spoken word "seven"-> (2) What processing stream- learned number category <-> (3) What-Where learned associations <- (4) Where processing stream- spatial number map <- (5) visual cues of seven objects
  • image p436fig12.30 The conscious ARTWORD, or cARTWORD, laminar cortical speech model simulates how future context can disambiguate noisy past speech sounds in such a way that the completed percept is consciously heard to proceed from past to future as a feature-item-list resonant wave propagates through time.
    || cARTWORD: Laminar cortical model macrocircuit (Grossberg, Kazerounian 2011) Simulates PHONEMIC RESTORATION: Cognitive Working Memory (processed item sequences) - [Excitatory-> inhibitory-> habituative-> adaptive filter-> adaptive filter-> adaptive filter with depletable synapse-> Acoustic [item, feature]
  • image p436fig12.31 Working memories do not store longer sequences of events in the correct temporal order. Instead, items at the beginning and end of the list are often recalled first, and with the highest probability.
    || Working memory. How to design a working memory to code "Temporal Order Information" in STM before it is stored in LTM. Speech, language, sensory-motor control, cognitive planning. eg repeat a telephone number unless you are distracted first. Temporal order STM is often imperfect, eg Free Recall. [probability, order] of recall vs list position. WHY?
  • image p437fig12.32 Data from a free recall experiment illustrate the bowed serial position curve.
    || Serial position function for free recall Data: (Murdock 1962 JEP 64, 482-488). % correct vs position of word on a 40-word list. Primacy gradient can be a mixture of STM and LTM read-out.
  • image p437fig12.33 Item and Order working memory models explain free recall data, as well as many other psychological and neurobiological data, by simulating how temporal series of events are stored as evolving spatial patterns of activity at content-addressable item categories. The categories with the largest activities are rehearsed first, and self-inhibit their activity as they do so in order to prevent them from being rehearsed perseveratively. The laws whereby the items are stored in working memory obey basic design principles ensuring that list categories, or chunks, of sequences of stored items can be stably remembered.
    || Working memory models: item and order, or competitive queuing (Grossberg 1978; Houghton 1990; Page, Norris 1998). Event sequence in time stored as an evolving spatial pattern of activity. Primacy gradient of working memory activation stores correct temporal order at content-addressable cells. Maximally activated cell populations is performed next when a rehearsal wave is turned on. Output signal from chosen cell population inhibits its own activity to prevent perseveration: inhibition of return. Iterate until entire sequence is performed.
  • image p438fig12.34 The LTM Invariance Principle insists that words being stored in working memory for the first time (eg MYSELF) do not cause catastrophic forgetting of the categories that have already been learned for their subwords (eg MY, SELF, and ELF) or other subset linguistic groups.
    || LTM invariance principle. unfamiliar STM -> LTM familiar. How does STM storage of SELF influence STM storage of MY? It should not recode LTM of either MY or SELF!
  • image p439fig12.35 The Normalization Rule insists that the total activity of stored items in working memory has an upper bound that is approximately independent of the number of items that are stored.
    || Normalization Rule (Grossberg 1978). Total STM activity has a finite bound independent of the number of items (limited capacity of STM). Activity vs Items for [slow, quick] asymptotic energy growth.
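The Normalization Rule can be illustrated with a minimal sketch of shunting competition. A standard steady state of a shunting on-center off-surround network is x(i) = B·I(i)/(A + ΣI), so total stored activity B·I_total/(A + I_total) stays below B however many items are stored; the decay A and bound B below are assumed toy parameters, not equations taken from the book.

```python
# Minimal sketch of the Normalization Rule. In a shunting on-center
# off-surround network, a standard steady state is
#   x_i = B * I_i / (A + sum_k I_k),
# so total stored activity B * I_tot / (A + I_tot) is bounded by B no
# matter how many items are stored. A and B are assumed toy parameters.

def stored_activities(inputs, A=1.0, B=1.0):
    total = sum(inputs)
    return [B * I / (A + total) for I in inputs]

short_list = stored_activities([1.0] * 6)    # 6 stored items
long_list = stored_activities([1.0] * 20)    # 20 stored items

# Totals approach but never exceed B; each item's activity shrinks
# as more items share the normalized total (limited capacity of STM).
print(sum(short_list), sum(long_list))   # 0.857..., 0.952...
print(short_list[0], long_list[0])       # 0.142..., 0.047...
```

This also previews Figure 12.37: the same normalization makes every stored activity smaller when the list gets longer.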
  • image p439fig12.36 (1) Inputs to Item and Order working memories are stored by content-addressable item categories. (2) The relative activities of the item categories code the temporal order of performance. (3) In addition to excitatory recurrent signals from each working memory cell (population) to itself, there are also inhibitory recurrent signals to other working memory cells, in order to solve the noise-saturation dilemma. (4) A nonspecific rehearsal wave allows the most active cell to be rehearsed first. (5) As an item is being rehearsed, it inhibits its own activity using a feedback inhibitory interneuron. Perseverative performance is hereby prevented.
    || Item and order working memories. (1) Content-addressable item codes (2) Temporal order stored as relative sizes of item activities (3) Competition between working memory cells: Competition balances the positive feedback that enables the cells to remain active. Without it, cell activities may all saturate at their maximal values-> Noise saturation dilemma again! (4) Read-out by nonspecific rehearsal wave- Largest activity is the first out (5) STM reset self-inhibition prevents perseveration: [input/self-excitatory, rehearsal wave]-> [output, self-inhibition]
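The five properties above can be sketched in a few lines: a primacy gradient stores temporal order as relative activity, a rehearsal wave reads out the most active item, and self-inhibition (inhibition of return) resets it so the next-largest item is performed. This is only a schematic of competitive-queuing read-out, not the full shunting dynamics.

```python
# Schematic of Item and Order (competitive queuing) read-out. A primacy
# gradient stores temporal order as relative activity; a rehearsal wave
# repeatedly selects the most active item, and self-inhibition (inhibition
# of return) zeroes the chosen item so it is not perseveratively repeated.

def rehearse(working_memory):
    wm = dict(working_memory)            # item category -> stored activity
    performed = []
    while any(a > 0 for a in wm.values()):
        item = max(wm, key=wm.get)       # rehearsal wave: largest activity wins
        performed.append(item)
        wm[item] = 0.0                   # STM reset: self-inhibition
    return performed

# A primacy gradient over A, B, C, D recovers the correct temporal order.
print(rehearse({"A": 0.9, "B": 0.7, "C": 0.5, "D": 0.3}))  # ['A', 'B', 'C', 'D']
```

A recency gradient read out by the same loop would perform the list backwards, which is why the gradient's shape matters for serial recall.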
  • image p440fig12.37 Simulation of a primacy gradient for a short list (left image) being transformed into a bowed gradient for a longer list (right image). Activities of cells that store the longer list are smaller due to the Normalization Rule, which follows from the shunting inhibition in the working memory network.
    || Primacy bow as more items stored. [activities, final y] (Left) Primacy gradient 6 items (Right) Bowed gradient 20 items
  • image p441fig12.38 The LTM Invariance Principle is realized if the relative sizes of the inputs to the list chunk level stay the same as more items are stored in working memory. This property, in turn, follows from shunting previously stored working memory activities when a new item occurs.
    || LTM Invariance principle. Choose STM activities so that newly stored STM activities may alter the size of old STM activities without recoding their LTM patterns. In particular: New events do not change the relative activities of past event sequences, but may reduce their absolute activites. Why? Bottom-up adaptive filtering uses dot products: T(j) = sum[i=1 to n: x(i)*z(i,j)] = total input to v(j). The relative sizes of inputs to coding nodes v(j) are preserved. x(i) -> w*x(i), 0 < w <= 1, leaves all past ratios T(j)/T(k) unchanged.
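The dot-product argument can be checked numerically. Inputs to list chunks are T(j) = Σ x(i)·z(i,j); uniformly shunting all stored STM activities by w rescales every T(j) by the same factor, leaving all ratios T(j)/T(k), and hence the learned LTM code, unchanged. The activities and weights below are arbitrary illustrative numbers.

```python
# The dot-product argument in miniature. Inputs to list chunks are
# T_j = sum_i x_i * z_ij; uniformly shunting stored STM activities by w
# (0 < w <= 1) rescales every T_j by the same factor, so all ratios
# T_j / T_k -- and hence the learned LTM code -- are left unchanged.

def chunk_input(x, z_col):
    return sum(xi * zi for xi, zi in zip(x, z_col))

x = [0.9, 0.6, 0.3]                            # stored STM activities
z_j = [1.0, 0.2, 0.5]                          # adaptive weights to chunk v_j
z_k = [0.3, 0.8, 0.1]                          # adaptive weights to chunk v_k

ratio_before = chunk_input(x, z_j) / chunk_input(x, z_k)
w = 0.5                                        # shunt caused by a new item
x_shunted = [w * xi for xi in x]
ratio_after = chunk_input(x_shunted, z_j) / chunk_input(x_shunted, z_k)

print(ratio_before, ratio_after)               # identical ratios
```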
  • image p442fig12.39 (left column, top row) How a shunt plus normalization can lead to a bow in the stored working memory spatial pattern. Time increases in each row as every item is stored with activity 1 before it is shunted by w due to each successive item.
  • image p442fig12.40 Given the hypothesis in Figure 12.39 (right column, bottom row) and a generalized concept of steady, albeit possibly decreasing, attention to each item as it is stored in working memory, only a primacy, or bowed gradient of activity across the working memory items can be stored.
    || LTM Invariance + Normalization. (... given conditions ...) Then the x(i) can ONLY form: [primacy gradient, recency gradient, unimodal bow]
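The trichotomy can be illustrated with a toy parameterization: suppose item i is stored with attentional strength mu_i and is then shunted by w once for each later item, so x_i = mu_i·w^(n-1-i). Different attentional schedules then yield exactly the three gradient shapes; the mu schedules and w below are illustrative assumptions, not the model's actual equations.

```python
# Toy parameterization of the three possible stored gradients. Item i is
# stored with attentional strength mu_i and then shunted by w once for
# each later item: x_i = mu_i * w**(n - 1 - i). The mu schedules below
# are illustrative assumptions, not the model's actual equations.

def gradient(mu, w=0.8):
    n = len(mu)
    return [mu[i] * w ** (n - 1 - i) for i in range(n)]

n = 8
recency = gradient([1.0] * n)                       # steady attention, pure shunt
primacy = gradient([0.5 ** i for i in range(n)])    # attention falls faster than the shunt
bow = gradient([1.0 / (2 + i) for i in range(n)])   # intermediate decline: bowed

# primacy: strictly decreasing; recency: strictly increasing;
# bow: both ends exceed an interior minimum (bowed serial position).
```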
  • image p443fig12.41 Neurophysiological data from the Averbeck etal sequential copying experiments show the predicted primacy gradient in working memory and the self-inhibition of activity as an item is stored. When only the last item remains stored, it has the highest activity because it has been freed from inhibition by earlier items.
    || Neurophysiology of sequential copying
  • image p444fig12.42 The LIST PARSE laminar cortical model of working memory and list chunking that I published with Lance Pearson in 2008 simulated the Averbeck etal data in Figure 12.41, as in the left column of the figure. It also simulated cognitive data about working memory storage by human subjects. See the text for details.
    || LIST PARSE: Laminar cortical model of working memory and list chunking (Grossberg, Pearson 2008). Simulates data about: [immediate, delayed, continuous] distractor free recall; immediate serial recall; and variable-speed sequential performance of motor acts. [velocity, acceleration] vs time (ms) from recall cue.
  • image p445fig12.43 The LIST PARSE laminar cortical Cognitive Working Memory circuit, that is proposed to occur in ventrolateral prefrontal cortex, is homologous to the LAMINART circuit that models aspects of how visual cortex sees. The Motor Working Memory, VITE Trajectory Generator, and Variable-Rate Volitional Control circuits model how other brain regions, including dorsolateral prefrontal cortex, motor cortex, cerebellum, and basal ganglia, interact with the Cognitive Working Memory to control working memory storage and variable-rate performance of item sequences.
    || List parse circuit diagram. Connectivity convention. sequence chunks [<- BU filter, TD expectation ->] working memory. Working memory and sequence chunking circuit is homologous to visual LAMINART circuit!
  • image p446fig12.44 (left column, top row) LIST PARSE can model linguistic data from human subjects. In this figure, model parameters are fixed to enable a close fit to data about error-type distributions in immediate free recall experiments, notably transposition errors. (right column, top row) Simulation and data showing bowing of the serial position curve, including an extended primacy gradient. (left column, bottom row) The simulation curve overlays data about list length effects, notably the increasing recall difficulty of longer lists during immediate serial recall (ISR). (right column, bottom row) Simulation (bottom image) and data (top image) of the limited temporal extent for recall.
    || (1. TL) Error-type distributions in immediate serial recall (Hanson etal 1996). % occurrence vs serial position. Graph convention: Data- dashed lines; Simulations- solid lines. Six letter visual ISR. Order errors- transpositions of neighboring items are the most common. Model explanation: Noisy activation levels change relative order in primacy gradient. Similar activation of neighboring items most susceptible to noise. Model parameters fitted on these data. (2. TR) Bowing of serial position curve (Cowan etal 1999). % correct vs serial position. Auditory ISR with various list lengths (graphs shifted rightward): For [, sub-]span lists- extended primacy, with one (or two) item recency; Auditory presentation- enhanced performance for last items. LIST PARSE: End effects- first and last items half as many members; Echoic memory- last presented item retained in separate store. (3. BL) List length effects, circles (Crannell, Parrish 1968), squares (Baddeley, Hitch 1975), solid line- simulation. % list correct vs list length. Variable list length ISR: longer lists are more difficult to recall. LIST PARSE: More items- closer activation levels and lower absolute activity level with enough inputs; Noise is more likely to produce order errors, Activity levels more likely to drop below threshold;. (4. BR) Limited temporal extent for recall (Murdock 1961). % recalled vs retention interval (s). ISR task with distractor-filled retention intervals (to prevent rehearsal): Increasing retention interval - decreases probability of recalling list correctly; Load dependence- longer list more affected by delays; Performance plateau- subjects reach apparent asymptote. LIST PARSE: Increase convergence of activities with time; loss of order information;.
  • image p447fig12.45 (left column) LIST PARSE simulations of the proportion of order errors as a function of serial position for 6 item lists with (a) an extended pause of 7 time units between the third and fourth items, and (b) pauses of 5 time units (solid curve) and 10 time units (dashed curve) between all items. (right column) Simulations (solid curves) and data (dashed curves) illustrating close model fits in various immediate free recall tasks.
    || (Left) Temporal grouping and presentation variability. Temporal grouping: Inserting an extended pause leads to inter-group bowing; Significantly different times of integration and activity levels across pause, fewer interchanges. (Right) Immediate free recall, and [delayed, continuous] distractor-free recall. Overt rehearsal IFR task with super-span (ie 20 item) lists: Extended recency- even more extended with shorter ISIs; Increased probability of recall with diminished time from last rehearsal; Early items in list rehearsed most;. LIST PARSE (unique) for long lists: Incoming items form recency gradient; Rehearsal (re-presentation) based upon level of activity.
  • image p448fig12.46 A Masking Field working memory is a multiple-scale self-similar recurrent shunting on-center off-surround network. It can learn list chunks that respond selectively to lists of item chunks of variable length that are stored in an item working memory at the previous processing stage. Chunks that code for longer lists (eg MY vs MYSELF) are larger, and give rise to stronger recurrent inhibitory neurons (red arrows).
    || How to code variable length lists? MASKING FIELDS code list chunks of variable length (Cohen, Grossberg 1986, 1987; Grossberg, Kazerounian 2011, 2016; Grossberg, Myers 2000; Grossberg, Pearson 2008). Multiple-scale self-similar WM: Masking field, adaptive filter. Variable length coding- Masking fields select list chunks that are sensitive to WM sequences of variable length; Selectivity- Larger cells selectively code longer lists; Asymmetric competition- Larger cells can inhibit smaller cells more than conversely: Magic Number 7! Temporal order- different list chunks respond to the same items in different orders eg LEFT vs FELT;.
  • image p449fig12.47 This figure illustrates the self-similarity in a Masking Field of both its recurrent inhibitory connections (red arrows) and its top-down excitatory priming signals (green arrows) to the item chunk working memory.
    || Both recurrent inhibition and top-down excitatory priming are self-similar in a masking field. MYSELF <-> [MY, MYSELF]
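Masking field selection can be caricatured in a few lines: chunks respond only when all of their items are stored, and the self-similar growth of larger chunks lets the longest fully supported chunk mask its subsets (MYSELF over MY, SELF, ELF). The chunk inventory and the winner-take-all shortcut are assumptions; order sensitivity (LEFT vs FELT) and the real shunting dynamics are omitted.

```python
# Toy masking field selection. Chunks respond only when all their items
# are stored in the item working memory; self-similar growth lets larger
# chunks inhibit smaller ones more than conversely, so the longest fully
# supported chunk masks its subsets. Order sensitivity and the shunting
# dynamics are omitted; the chunk inventory is an illustrative assumption.

CHUNKS = ["MY", "ELF", "SELF", "MYSELF"]

def select_chunk(stored_items):
    items = set(stored_items)
    matched = [c for c in CHUNKS if set(c) <= items]   # fully supported chunks
    # Asymmetric competition: the largest matched chunk suppresses the rest.
    return max(matched, key=len) if matched else None

print(select_chunk("MY"))      # 'MY'
print(select_chunk("SELF"))    # 'SELF' (masks ELF)
print(select_chunk("MYSELF"))  # 'MYSELF' (masks MY, ELF, SELF)
```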
  • image p452fig12.48 (left column) In experiments of (Repp etal 1978), the silence duration between the words GRAY and SHIP was varied, as was the duration of the fricative noise in S, with surprising results. (right column) The red arrow directs our attention to surprising perceptual changes as silence and noise durations increase. See the text for details.
    || Perceptual integration of acoustic cues, data (Repp etal 1978). GRAY-> silence duration-> SHIP (noise duration from start of word). Noise duration vs silence duration: GRAY SHIP <-> [GREAT SHIP <-> GRAY CHIP] <-> GREAT CHIP.
  • image p453fig12.49 The ARTWORD model that I published in 2000 with my PhD student Christopher Myers simulates data such as the (Repp etal 1978) data in Figure 12.48. See the text for details.
    || ARTWORD model (Grossberg, Myers 2000). Input phonetic features-> Phonemic item working memory-> Masking Field unitized lists-> Automatic gain control-> Phonemic item working memory. [habituative gate, adaptive filter]s.
  • image p453fig12.50 The ARTWORD perception cycle shows how sequences of items activate possible list chunks, which compete among each other and begin to send their top-down expectations back to the item working memory. An item-list resonance develops through time as a result.
    || ARTWORD perception cycle. (a) bottom-up activation (b) list chunk competition (c) item-list resonance (d) chunk reset due to habituative collapse.
  • image p454fig12.51 (left column) Even as a resonance with the list chunk GRAY begins to develop, if the delay between "gray" and "chip" is increased, greater habituation of this resonance may allow the GREAT chunk to begin to win, thereby smoothly transferring the item-list resonance from GRAY to GREAT through time. (right column) Simulation of a resonant transfer from GRAY to GREAT, and back again as the silence interval between the words "gray" and "chip" increases. The red region between GRAY and GREAT curves calls attention to when GREAT wins. See the text for details.
    || Resonant transfer, as silence interval increases. (left) Delay GRAY resonance weakens. A delayed additional item can facilitate perception of a longer list. (right) GRAY-> GREAT-> GRAY.
  • image p455fig12.52 Simulation of cARTWORD dynamics in response to the complete list /1/-/2/-/3/. The relevant responses are surrounded by a red box.
    || Presentation of a normal sequence: input /1/-/2/-/3/. |c(i,1)-5| vs time (msec). List chunks select most predictive code. Order stored in WM layers. Resonant activity of /1/-/2/-/3/ in item and feature layers corresponds to conscious speech percept.
  • image p456fig12.53 Simulation of cARTWORD dynamics in response to the partial list /1/-silence-/3/ with /2/ replaced by silence. Only the representations of these items can be seen in the red box.
    || Presentation with silence duration: input /1/-silence-/3/. |c(i,1)-5| vs time (msec). List chunks select most predictive code. Order stored in WM layers. Gap in resonant activity of /1/-silence-/3/ in item and feature layers corresponds to perceived silence.
  • image p456fig12.54 Item /2/ is restored in the correct list position in response to the list /1/-noise-/3/.
    || Presentation with noise: input /1/-noise-/3/. |c(i,1)-5| vs time (msec). List chunks select the most predictive code. Order restored in WM layers. Resonant activity of /1/-/2/-/3/ in item and feature layers corresponds to restoration of item /2/ replaced by noise in input.
  • image p457fig12.55 Item /4/ is restored in the correct list position in response to the list /1/-noise-/5/. This and the previous figure show how future context can disambiguate past noisy sequences that are otherwise identical.
    || Presentation with noise: input /1/-noise-/5/. |c(i,1)-5| vs time (msec). List chunks select the most predictive code. Order restored in WM layers. Resonant activity of /1/-/4/-/5/ in item and feature layers corresponds to restoration of item /4/ replaced by noise in input.
  • image p459fig12.56 (Grossberg, Pearson 2008) proposed that the ability of working memories to store repeated items in a sequence represents rank information about the position of an item in a list using numerical hypercolumns in the prefrontal cortex (circles with numbered sectors: 1,2,3,4). These numerical hypercolumns are conjointly activated by inputs from item categories and from the analog spatial representation of numerosity in the parietal cortex. These parietal representations (overlapping Gaussian activity profiles that obey a Weber Law) had earlier been modeled by (Grossberg, Repin 2003). See the text for details.
    || Item-order-rank working memory, rank information from parietal numerosity circuit (Grossberg, Pearson 2008; Grossberg, Repin 2003). [Sensory working memory-> adaptive filter-> list chunk-> attentive prime-> Motor working memory]-> [large, small] numbers-> transfer functions with variable thresholds and slopes-> uniform input-> integrator amplitude-> number of transient sensory signals.
  • image p460fig12.57 The lisTELOS architecture explains and simulates how sequences of saccadic eye movement commands can be stored in a spatial working memory and recalled. Multiple brain regions are needed to coordinate these processes, notably three different basal ganglia loops to regulate saccade storage, choice, and performance, and the supplementary eye fields (SEF) to choose the next saccadic command from a stored sequence. Because all working memories use a similar network design, this model can be used as a prototype for storing and recalling many other kinds of cognitive, spatial, and motor information. See the text for details.
    || lisTELOS model- Spatial working memory (Silver, Grossberg, Bullock, Histed, Miller 2011). Simulates how [PPC, PFC, SEF, FEF, SC] interact with 3 BG loops to learn and perform sequences of saccadic eye movements.
  • image p461fig12.58 The lisTELOS model built upon key processes that were earlier modeled by the TELOS model. See the text for details.
    || TELOS model (Brown, Bullock, Grossberg 1999, 2004). shows [BG nigro-[thalamic, collicular], FEF, ITa, PFC, PNR-THAL, PPC, SEF, SC, V1, V4/ITp, Visual Cortex input] and [GABA].
  • image p462fig12.59 The TELOS model clarifies how reactive vs. planned eye movements may be properly balanced against one another, notably how a fast reactive movement is prevented from occurring in response to onset of a cue that requires a different, and more contextually appropriate, response, even if the latter response takes longer to be chosen and performed. The circuit explains how "the brain knows it before it knows" what this latter response should be by changing the balance of excitation to inhibition in the basal ganglia (BG) so that the reactive gate stays shut until the correct target position can be chosen by a frontal-parietal resonance.
    || Balancing reactive vs. planned movements (Brown, Bullock, Grossberg 2004). (a) shows [FEF, PPC]-> [BG, SC], and BG-> SC. (b) FTE vs time (msec) for [fixation, saccade, overlap, gap, delayed saccade] tasks.
  • image p463fig12.60 Rank-related activity in prefrontal cortex and supplementary eye fields from two different experiments. See the text for details.
    || Rank-related activity in PFC and SEF. Prefrontal cortex (Averbeck etal 2003) [square, inverted triangle]. Supplementary eye field (Isoda, Tanji 2002).
  • image p464fig12.61 (left column) A microstimulating electrode causes a spatial gradient of habituation. (right column) The spatial gradient of habituation that is caused by microstimulation alters the order of saccadic performance of a stored sequence, but not which saccades are performed, using interactions between the prefrontal cortex (PFC) working memory and the supplementary eye field (SEF) saccadic choice.
    || (left) Microstimulation causes habituation (Grossberg 1968). Stimulation caused habituation. Cells close to the stimulation site habituate most strongly. (right) Stimulation biases selection PFC-> SEF-> SEF. PFC Activity gradient in working memory, SEF Microstimulation causes habituation, During selection habituated nodes are less likely to win this competition.
  • image p464fig12.62 The most habituated positions have their neuronal activities most reduced, other things being equal, as illustrated by the gradient from deep habituation (red) to less habituation (pink). The saccadic performance orders (black arrows) consequently tend to end in the most habituated positions that have been stored.
    || The most habituated position is foveated last. For each pair of cues, the cue closest to the stimulation site is most habituated -- and least likely to be selected. Because stimulation spreads in all directions, saccade trajectories tend to converge.
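The habituation gradient follows from the transmitter gate law cited here (Grossberg 1968): dz/dt = α(1 − z) − βSz has equilibrium z* = α/(α + βS), so stronger stimulation S depletes the gate more and cells nearest the electrode habituate most. The distance falloff of S and the parameter values are illustrative assumptions.

```python
# Habituative transmitter gate in the style of (Grossberg 1968):
#   dz/dt = alpha * (1 - z) - beta * S * z,
# with equilibrium z* = alpha / (alpha + beta * S). Stronger stimulation S
# depletes the gate more, so cells nearest the electrode habituate most.
# The distance falloff of S and all parameters are assumed for illustration.

def equilibrium_gate(S, alpha=0.1, beta=1.0):
    return alpha / (alpha + beta * S)

distances = [0, 1, 2, 3]                      # distance from stimulation site
S = [1.0 / (1 + d) for d in distances]        # stimulation decays with distance
z = [equilibrium_gate(s) for s in S]
print(z)  # gates recover with distance: least habituation farthest away
```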
  • image p465fig12.63 Neurophysiological data (left image) and lisTELOS simulation (right image) showing how microstimulation biases saccadic performance order but not the positions to which the saccades will be directed. See the text for details.
    || Saccade trajectories converge to a single location in space. Microstimulation biased selection so saccade trajectories converged toward a single location in space. [Data, model] contra <-> Ipsi (msec)
  • image p467fig12.64 Some of the auditory cortical regions that respond to sustained or transient sounds. See text for details.
    || Some auditory cortical regions. Core <-> belt <-> parabelt. [Belt, Core, ls, PAi, Parabelt, PGa, TAs, TE, TP, TPO, st s].
  • image p468fig12.65 Linguistic properties of the PHONET model and some of the data that it simulates. The upper left image summarizes the asymmetric transient-to-sustained gain control that helps to create invariant intraword ratios during variable-rate speech. The lower left image summarizes the rate-dependent gain control of the ARTPHONE model that creates rate-invariant working memory representations in response to sequences of variable-rate speech. The right image summarizes the kind of paradoxical VC-CV category boundary data of (Repp 1980) that ARTPHONE simulates. See the text for details.
    || (left upper) [transient, sustained] [working memory, filter, category]. (left lower) phone inputs-> [input rate estimate, features], Features w <- habituative transmitter gates -> categories-> rate invariant phonetic output, input rate estimate-> gain control-> [features, categories] rate-dependent integration of categories and features. (right) % 2-stop vs VC-CV silent interval (msec): [ib-ga, ib-ba, iga, iba].
  • image p469fig12.66 (left column) A schematic of how preserving relative duration, as in the first and third images, of consonant and vowel pairs can preserve a percept, in this case of /ba/, but not doing so, as in the first and second images, can cause a change in percept, as from /ba/ to /wa/, as in the data of (Miller, Liberman 1979) that PHONET simulates. (right column) Changing frequency extent can also cause a /ba/ - /wa/ transition, as shown in data of (Schwab, Sawusch, Nusbaum 1981) that PHONET also simulates.
    || (left image) Maintaining relative duration as speech speeds up preserves percept (Miller, Liberman 1979). frequency vs time- [/ba/, /wa/, /ba/] (right image) Changing frequency extent causes /ba/-/wa/ transition (Schwab, Sawusch, Nusbaum 1981). frequency vs time- [/ba/, /wa/] Dt extent.
  • image p469fig12.67 PHONET contains transient and sustained cells that respond to different kinds of sounds, notably the transients of certain consonants and the sustained sounds of certain vowels. It then uses the transient working memory to gain-control the integration rate of the sustained working memory to which these different detectors input.
    || Phonetic model summary. (left) Acoustic tokens [consonant, vowel]. (middle) Acoustic detectors [transient (sensitive to rate), Sustained (sensitive to duration)]. (right) Working memory, Spatially stored transient pattern (extent) + gain control-> spatially stored sustained pattern.
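The rate-invariance idea can be caricatured in two lines: the transient channel's rate estimate (from consonant duration) sets the integration gain of the sustained (vowel) channel, so the stored value tracks the vowel-to-consonant duration ratio and is unchanged when the whole utterance is uniformly sped up. All durations and the constant k below are assumed illustrative numbers, not PHONET's actual equations.

```python
# Caricature of PHONET's transient-to-sustained gain control. The transient
# channel's rate estimate (from consonant duration) sets the integration
# rate of the sustained (vowel) working memory, so the stored value tracks
# the vowel-to-consonant duration ratio and is invariant under uniform
# changes of speech rate. All numbers are illustrative assumptions.

def stored_vowel_activity(consonant_ms, vowel_ms, k=1.0):
    gain = k / consonant_ms       # transient channel: faster speech -> higher gain
    return gain * vowel_ms        # sustained channel integrates at that gain

slow = stored_vowel_activity(consonant_ms=80.0, vowel_ms=240.0)
fast = stored_vowel_activity(consonant_ms=40.0, vowel_ms=120.0)  # 2x speech rate
print(slow, fast)  # equal stored values: rate-invariant representation
```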
  • image p471fig12.68 A mismatch reset of /b/ in response to the /g/ in [ib]-[ga] can rapidly shut off the [ib] percept, leading to the percept of [ga] after an interval of silence. In contrast, resonant fusion of the two occurrences of /b/ in [ib]-[ba] can cause a continuous percept of sound [iba] to occur during times at which silence is heard in response to [ib]-[ga].
    || Mismatch vs resonant fusion
  • image p473fig12.69 Error rate and mean reaction time (RT) data from the lexical decision experiments of (Schvaneveldt, McDonald 1981). ART Matching Rule properties explain these data in (Grossberg, Stone 1986).
    || (left) Error rate vs type of prime [R, N, U], [non,] word. (right) Mean RT (msec) vs type of prime [R, N, U], [non,] word.
  • image p474fig12.70 The kind of model macrocircuit that was used in (Grossberg, Stone 1986) to explain lexical decision task data.
    || inputs-> A1 <-> A2 iconic sensory features <-> A3 item and order in sensory STM <-> A4 list parsing in STM (masking field) <-> A5 semantic network (self-feedback). [A4, A5] <-> V* visual object recognition system. M1-> [outputs, A1]. M1 <-> M2 iconic motor features <-> M3 item and order in motor STM. A2-> M2. A3-> M3.
  • image p476fig12.71 Word frequency data of (Underwood, Freund 1970) that were explained in (Grossberg, Stone 1986).
    || percent errors vs frequency of old words [L-H to H-H, L-L to H-L].
  • image p481fig13.01 Macrocircuit of the functional stages and anatomical interpretations of the Cognitive-Emotional-Motor, or CogEM, model.
    || Drive-> hypothalamus value categories <-> amygdala incentive motivational learning-> Orbitofrontal cortex- object-value categories <-> sensory cortex- invariant object categories- conditioned reinforcer learning-> amygdala-> hypothalamus.
  • image p483fig13.02 The object-value categories in the orbitofrontal cortex require converging specific inputs from the sensory cortex and nonspecific incentive motivational inputs from the amygdala in order to fire. When the orbitofrontal cortex fires, it can deliver top-down ART Matching Rule priming signals to the sensory cortical area by which it was activated, thereby helping to choose the active recognition categories there that have the most emotional support, while suppressing others, leading to attentional blocking of irrelevant cues.
    || Cognitive-Emotional-Motor (CogEM) model. Drive-> amygdala incentive motivational learning-> orbitofrontal cortex- need converging cue and incentive inputs to fire <-> sensory cortex- conditioned reinforcer learning-> amygdala. CS-> sensory cortex. Motivated attention closes the cognitive-emotional feedback loop, focuses on relevant cues, and causes blocking of irrelevant cues.
  • image p483fig13.03 The predicted processing stages of CogEM have been supported by anatomical studies of connections between sensory cortices, amygdala, and orbitofrontal cortex.
    || Adapted from (Barbas 1995). sensory cortices = [visual, somatosensory, auditory, gustatory, olfactory]. sensory cortices-> amygdala-> orbital prefrontal cortex. sensory cortices-> orbital prefrontal cortex. [visual cortex, amygdala]-> lateral prefrontal cortex.
  • image p484fig13.04 The top-down feedback from the orbitofrontal cortex closes a feedback loop that supports a cognitive-emotional resonance. If this resonance can be sustained long enough, it enables us to have feelings at the same time that we experience the categories that caused them.
    || Cognitive-Emotional resonance. Basis of "core consciousness" and "the feeling of what happens". (Damasio 1999) derives heuristic version of CogEM model from his clinical data. Drive-> amygdala-> prefrontal cortex-> sensory cortex, resonance around the latter 3. How is this resonance maintained long enough to become conscious?
  • image p484fig13.05 Classical conditioning is perhaps the simplest kind of associative learning.
    || Classical conditioning (nonstationary prediction). Bell (CS)-> (CR), Shock (US)-> Fear (UR), associative learning.
  • image p485fig13.06 (left column) An inverted-U occurs in conditioned reinforcer strength as a function of the ISI between the CS and the US. Why is learning attenuated at 0 ISI? (right column) Some classical conditioning data that illustrate the inverted-U in conditioning as a function of the ISI.
    || InterStimulus Interval (ISI) effect. Data from (Smith etal 1969; Schneiderman, Gormezano 1964).
  • image p485fig13.07 The paradigm of secondary conditioning. See the text for details.
    || Secondary conditioning (Advertising!). [CS1, CS2] become conditioned reinforcers.
  • image p486fig13.08 The blocking paradigm illustrates how cues that do not predict different consequences may fail to be attended.
    || Blocking- minimal adaptive prediction. Phase [I, II] - CS2 is irrelevant.
  • image p486fig13.09 Equally salient cues can be conditioned in parallel to an emotional consequence.
    || Parallel processing of equally salient cues vs overshadowing (Pavlov).
  • image p486fig13.10 Blocking follows if both secondary conditioning and attenuation of conditioning at a zero ISI occur.
    || Blocking = ISI + secondary conditioning.
  • image p487fig13.11 The three main properties of CogEM that help to explain how attentional blocking occurs.
    || CogEM explanation of attentional blocking. Internal drive input <-> Conditioned reinforcer learning (self-recurrent) <-> Competition for STM <- Motor learning. 1. Sensory representations compete for limited capacity STM. 2. Previously reinforced cues amplify their STM via positive feedback. 3. Other cues lose STM via competition.
  • image p488fig13.12 (left column) How incentive motivational feedback amplifies activity of a sensory cortical cell population. (right column) A sensory cortical cell population whose activity is amplified by incentive motivational feedback can suppress the activities of less activated populations via self-normalizing recurrent competitive interactions.
    || Motivational feedback and blocking. (left) sensory input CS, STM activity without motivational feedback, STM activity with motivational feedback. (right) STM suppressed by competition, STM amplified by (+) feedback.
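The two panels can be sketched as a two-population shunting network: both populations receive equal sensory input, but only one receives incentive motivational feedback M, which amplifies its STM activity; the self-normalizing off-surround then suppresses the other population (blocking). Parameters, the squaring signal function, and the Euler integration are illustrative assumptions, not the book's exact equations.

```python
# Sketch of motivational amplification plus self-normalizing competition.
# Two equally salient sensory populations obey shunting dynamics with
# recurrent on-center off-surround feedback; incentive motivational
# feedback M to population 1 lets it win and suppress population 2.
# All parameters and the signal function are illustrative assumptions.

def simulate(M1=0.3, M2=0.0, I=0.5, A=1.0, B=1.0, dt=0.01, steps=3000):
    x = [0.0, 0.0]
    M = [M1, M2]
    f = lambda v: v * v                     # faster-than-linear recurrent signal
    for _ in range(steps):
        dx = []
        for i in (0, 1):
            excite = I + f(x[i]) + M[i]     # input + self-excitation + motivation
            inhibit = f(x[1 - i])           # recurrent off-surround
            dx.append(-A * x[i] + (B - x[i]) * excite - x[i] * inhibit)
        x = [x[i] + dt * dx[i] for i in (0, 1)]
    return x

x1, x2 = simulate()
print(x1 > x2)  # the motivationally supported population dominates
```

The shunting form (B − x) on excitation keeps every activity bounded, which is the self-normalization property the caption invokes.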
  • image p489fig13.13 (top row) If a positive ISI separates onset of a CS and US, then the CS can sample the consequences of the US during the time interval before it is inhibited by it. (bottom row) A CogEM simulation of the inverted-U in conditioning as a function of the ISI between CS and US.
    || Positive ISI and conditioning.
  • image p490fig13.14 In order for conditioning to work properly, the sensory representation needs to have at least two successive processing stages. See the text for why.
    || Model of Cognitive-Emotional circuit. Drive-> Drive representation-> ??? <-> Sensory STM <-CS
  • image p490fig13.15 The CogEM circuit is an ancient design that is found even in mollusks like Aplysia. See the text for details.
    || Aplysia (Buonomano, Baxter, Byrne, Neural Networks 1990; Grossberg, Behavioral and Brain Sciences 1983). Facilitator neuron ~ drive representation.
  • image p492fig13.16 (left column) In order to satisfy all four postulates, there needs to be UCS-activated arousal of a polyvalent CS-activated sampling neuron. (right column) The arousal needs to be nonspecific in order to activate any of the CSs that could be paired with the UCS.
    || Polyvalent CS sampling and US-activated nonspecific arousal.
  • image p493fig13.17 (top row) Overcoming the ostensible contradiction that seems to occur when attempting to simultaneously realize hypotheses (3) and (4). (bottom row) The problem is overcome by assuming the existence of US-activated drive representation to which CSs can be associated, and that activate nonspecific incentive motivational feedback to sensory representations.
    || Learning nonspecific arousal and CR read-out. (top) Learning to control nonspecific arousal, Learning to read-out the CR (bottom) Drive representation, Incentive motivation.
  • image p494fig13.18 Realizing the above constraints favors one particular circuit. Circuits (a) and (b) are impossible. Circuit (d) allows previously occurring sensory cues to be stored in STM. Circuit (e) in addition enables a CS to be stored in STM without initiating conditioning in the absence of a US.
    || Learning to control nonspecific arousal and read-out of the CR: two stages of CS. (d) & (e) polyvalent cells.
  • image p494fig13.19 (left column, top row) Secondary conditioning of both arousal and a specific response are now possible. (bottom row) The CogEM circuit may be naturally extended to include multiple drive representations and inputs. (right column, top row) The incentive motivational pathway is also conditionable in order to enable motivational sets to be learned.
    || Secondary conditioning. Homology: conditionable incentive motivation. Multiple drive representations and inputs.
  • image p496fig13.20 (top image) A single avalanche sampling cell can learn an arbitrary space-time pattern by sampling it as a temporally ordered series of spatial patterns using a series of outstars. Once an avalanche
  • image p497fig13.21 (left column) An early embodiment of nonspecific arousal was a command cell in such primitive animals as crayfish. (right column) The songbird pattern generator is also an avalanche. This kind of circuit raises the question of how the connections self-organize through developmental learning.
    || Nonspecific arousal as a command cell. Crayfish swimmerets (Stein 1971). Songbird pattern generator (Fee etal 2002)+. Motor-> RA-> HVC(RA).
  • image p498fig13.22 (left column, top row) Adaptive filtering and conditioned arousal are both needed to regulate what cues can learn to activate particular space-time patterns. These developments lead inexorably to basic cognitive abilities, as embodied in the 3D LAMINART models for 3D vision and figure-ground perception (Chapter 11) and the 3D ARTSCAN SEARCH model for invariant object learning, recognition, and 3D search (Chapter 6). (right column, top row) Conditioned arousal enables only emotionally important cues to activate a motivationally relevant space-time pattern. (bottom row) Conditioned arousal and drive representations arise naturally from the unlumping of avalanche circuits to make them selective to motivationally important cues. The MOTIVATOR model is a natural outcome of this unlumping process (this chapter).
    || (top) Adaptive filtering and Conditioned arousal. Towards Cognition: need to filter inputs to the command cell. Towards Emotion: important signals turn arousal ON and OFF. (bottom) Conditioned arousal and Drive representations. Competition between conditioned arousal sources at drive representations, eg amygdala.
  • image p499fig13.23 (left column) Self-organization in avalanches includes adaptive filtering by instars, serial learning of temporal order, and learned read-out of spatial patterns by outstars. (right column) Serial learning of temporal order occurs in recurrent associative networks.
    || (left) Self-organizing avalanches [instars, serial learning, outstars]. (right) Serial list learning.
  • image p500fig13.24 Both primary excitatory and inhibitory conditioning can occur using opponent processes and their antagonistic rebounds.
    || Opponent processing. Cognitive drive associations. Primary associations: excitatory [CS, US, Fear], inhibitory [CS, US, Fear, Relief rebound].
  • image p501fig13.25 When an unbiased transducer is embodied by a finite rate physical process, mass action by a chemical transmitter is the result.
    || Unbiased transducer (Grossberg 1968). S=input, T=output, T = S*B, where B is the gain. Suppose T is due to release of chemical transmitter y at a synapse: release rate T = S*y (mass action); Accumulation y ~= B.
  • image p501fig13.26 A simple differential equation describes the processes of transmitter accumulation and release that do their best, at a finite rate, to carry out unbiased transduction.
    || Transmitter accumulation and release. Transmitter y cannot be restored at an infinite rate: T = S*y, y ~= B. Differential equation: d[dt: y] = A*(B - y) - S*y = accumulate - release. Transmitter y tries to recover to ensure unbiased transduction. What if it falls behind? Evolution has exploited the good properties that happen then.
  • image p502fig13.27 Despite the fact that less transmitter y is available after persistent activation by a larger input signal S, the gated output signal S*y is larger due to the mass action gating of S by y.
    || Minor mathematical miracle. At equilibrium: 0 = d[dt: y] = A*(B - y) - S*y. Transmitter y decreases when input S increases: y = A*B/(A + S). However, output S*y increases with S!: S*y = S*A*B/(A + S) (gate, mass action).
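The "minor miracle" is easy to check numerically. A minimal sketch (A and B are illustrative values, not from the book): at equilibrium the transmitter y = A*B/(A+S) falls as S grows, yet the gated output S*y = S*A*B/(A+S) rises with S.

```python
# Equilibrium of d[dt: y] = A*(B - y) - S*y gives y = A*B/(A+S).
# Check: y decreases with S, but the mass-action output S*y increases.
A, B = 1.0, 1.0   # illustrative constants

def y_eq(S):      # equilibrium transmitter level
    return A * B / (A + S)

def output(S):    # gated output S*y at equilibrium
    return S * y_eq(S)

levels = [0.5, 1.0, 2.0, 4.0]
ys = [y_eq(S) for S in levels]
outs = [output(S) for S in levels]
assert all(a > b for a, b in zip(ys, ys[1:]))      # y decreases with S
assert all(a < b for a, b in zip(outs, outs[1:]))  # S*y increases with S
```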
  • image p502fig13.28 Fast increments and decrements in an input S lead to slow habituation of the habituative gate, or medium-term memory, transmitter y. The output T is a product of these fast and slow variables, and consequently exhibits overshoots, habituation, and undershoots in its response.
    || Habituative transmitter gate: Input; Habituative gate d[dt: y] = A*(B - y) - S*y; Output [overshoot, habituation, undershoot]s Weber Law.
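The overshoot-habituation-undershoot shape of the gated output T = S*y can be reproduced with a forward-Euler simulation of d[dt: y] = A*(B - y) - S*y driven by a step input; all parameter values and the step timing here are illustrative assumptions, not the book's.

```python
# Euler simulation of the habituative gate: a step input S produces an
# output overshoot at onset (y still high), then habituation to a lower
# plateau as y depletes; after offset the gate slowly recovers.
A, B, dt = 0.1, 1.0, 0.01
y, trace = B, []
for step in range(60000):
    t = step * dt
    S = 2.0 if 100.0 <= t < 400.0 else 0.0   # step input (illustrative)
    trace.append((t, S * y))                 # gated output T = S*y
    y += dt * (A * (B - y) - S * y)

T = [v for _, v in trace]
peak = max(T)                    # overshoot at input onset
plateau = T[int(399.0 / dt)]     # habituated level just before offset
assert peak > plateau > 0        # overshoot, then habituation
```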
  • image p503fig13.29 The ON response to a phasic ON input has Weber Law properties due to the divisive terms in its equilibrium response, which are due to the habituative transmitter.
    || ON-response to phasic ON-input. S1 = f(I+J): y1 = A*B/(A+S1), T1 = S1*y1 = A*B*S1/(A+S1); S2 = f(I): y2 = A*B/(A+S2), T2 = S2*y2 = A*B*S2/(A+S2);. ON = T1 - T2 = A^2*B*(f(I+J)-f(I)) / (A+f(I)) / (A+f(I+J)). Note Weber Law. When f has a threshold, small I requires larger J to fire due to numerator, but makes suprathreshold ON bigger due to denominator. When I is large, quadratic in denominator and upper bound of f make ON small.
  • image p504fig13.30 OFF rebound occurs when the ON-input shuts off due to the imbalance that is caused by the ON input in the habituation of the transmitters in the ON and OFF channels. The relative sizes of ON responses and OFF rebounds is determined by the arousal level I.
    || OFF-rebound due to phasic input offset. Shut off J (Not I!). Then: S1 = f(I), S2 = f(I); y1 ~= A*B/(A+f(I+J)) < y2 ~= A*B/(A+f(I)) y1 and y2 are SLOW; T1 = S1*y1, T2 = S2*y2, T1 < T2;. OFF = T2 - T1 = A*B*f(I)*(f(I+J) - f(I)) / (A+f(I)) / (A + f(I+J)), Note Weber Law due to remembered previous input. Arousal sets sensitivity of rebound: OFF/ON = f(I)/A. Why is the rebound transient? Note equal f(I) inputs.
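The stated relation OFF/ON = f(I)/A follows directly from the two equilibrium formulas above; a minimal check with a linear signal function f(x) = x (an illustrative choice) and illustrative A, B:

```python
# Dipole equilibrium outputs from the two preceding figures:
#   ON  = A^2*B*(f(I+J)-f(I)) / ((A+f(I))*(A+f(I+J)))
#   OFF = A*B*f(I)*(f(I+J)-f(I)) / ((A+f(I))*(A+f(I+J)))
# so the arousal level I sets the rebound sensitivity OFF/ON = f(I)/A.
A, B = 1.0, 1.0
f = lambda x: x   # linear signal function (illustrative)

def on_off(I, J):
    denom = (A + f(I)) * (A + f(I + J))
    on = A**2 * B * (f(I + J) - f(I)) / denom
    off = A * B * f(I) * (f(I + J) - f(I)) / denom
    return on, off

on, off = on_off(I=0.5, J=1.0)
assert abs(off / on - f(0.5) / A) < 1e-12   # OFF/ON = f(I)/A
```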
  • image p504fig13.31 Behavioral contrast can occur during reinforcement learning due to decreases in either positive or negative reinforcers. See Figure 13.32 for illustrative operant conditioning data.
    || Behavioral contrast: rebounds! Shock level vs trials. 1. A sudden decrease in frequency or amount of food can act as a negative reinforcer: Frustration. 2. A sudden decrease in frequency or amount of shock can act as a positive reinforcer: Relief.
  • image p505fig13.32 Response suppression and the subsequent antagonist rebounds are both calibrated by the inducing shock levels.
    || Behavioral contrast (Reynolds 1968). Responses per minute (VI schedule) vs Trial shock level.
  • image p505fig13.33 An unexpected event can disconfirm ongoing processing by triggering a burst of nonspecific arousal that causes antagonistic rebounds in currently active gated dipoles, whether cognitive or affective.
    || Novelty reset: rebound to arousal onset. 1. Equilibrate to I and J: S1 = f(I+J); y1 = A*B/(A+S1); S2 = f(I); y2 = A*B/(A+S2);. 2. Keep phasic input J fixed; increase arousal I to I* = I + ∆I: (a) OFF reaction if T1 < T2; OFF = T2 - T1 = f(I*)*y2 - f(I*+J)*y1 = A*B*{ A*(f(I*) - f(I*+J)) + (f(I*)*f(I+J) - f(I)*f(I*+J)) } / (A+f(I)) / (A + f(I+J)). 3. How to interpret this complicated equation?
  • image p506fig13.34 With a linear signal function, one can prove that the rebound increases with both the previous phasic input intensity J and the unexpectedness of the disconfirming event that caused the burst of nonspecific arousal.
    || Novelty reset: rebound to arousal onset.
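With a linear signal f(x) = x the novelty rebound collapses to a closed form, A*B*J*(∆I - A) / ((A+I)*(A+I+J)), which indeed grows with both the phasic input J and the arousal increment ∆I. A sketch under those assumptions (A, B, and the test values are illustrative):

```python
# Novelty reset: gates y1, y2 equilibrate to I+J and I; then arousal
# jumps to I* = I + dI while J stays fixed. The OFF rebound
# f(I*)*y2 - f(I*+J)*y1 reduces, for linear f, to
# A*B*J*(dI - A) / ((A+I)*(A+I+J)).
A, B = 1.0, 1.0
f = lambda x: x   # linear signal function (illustrative)

def rebound(I, J, dI):
    Istar = I + dI
    y1 = A * B / (A + f(I + J))   # ON-channel gate, habituated to I+J
    y2 = A * B / (A + f(I))       # OFF-channel gate, habituated to I
    return f(Istar) * y2 - f(Istar + J) * y1

closed = lambda I, J, dI: A * B * J * (dI - A) / ((A + I) * (A + I + J))
assert abs(rebound(0.5, 1.0, 2.0) - closed(0.5, 1.0, 2.0)) < 1e-12
assert rebound(0.5, 2.0, 2.0) > rebound(0.5, 1.0, 2.0)   # grows with J
assert rebound(0.5, 1.0, 3.0) > rebound(0.5, 1.0, 2.0)   # grows with dI
```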
  • image p506fig13.35 A shock, or other reinforcing event, can have multiple cognitive and emotional effects on different brain processes.
    || Multiple functional roles of shock. 1. Reinforcement sign reversal: An isolated shock is a negative reinforcer; In certain contexts, a shock can be a positive reinforcer. 2. STM-LTM interaction: Prior shock levels need to be remembered (LTM) and used to calibrate the effect of the present shock (STM). 3. Discriminative and situational cues: The present shock level is unexpected (novel) with respect to the shock levels that have previously been contingent upon experimental cues: shock as a [1.reinforcer, 2. sensory cue, 3. expectancy].
  • image p509fig13.36 How can life-long learning occur without passive forgetting or associative saturation?
    || Associative learning. 1. Forgetting (eg remember childhood experiences): forgetting [is NOT passive, is Selective]; 2. Selective: larger memory capacity; 3. Problem: why doesn
  • image p510fig13.37 A disconfirmed expectation can cause an antagonistic rebound that inhibits prior incentive motivational feedback, but by itself is insufficient to prevent associative saturation.
    || Learn on-response. 1. CS-> ON, disconfirmed expectation-> antagonistic rebound, OFF-channel is conditioned 2. CS-> [ON, OFF]-> net, zero net output. What about associative saturation?
  • image p510fig13.38 Dissociation of the read-out of previously learned adaptive weights, or LTM traces, and of the read-in of new weight values enables back-propagating dendritic action potentials to teach the new adaptive weight values.
    || Dissociation of LTM read-out and read-in. Backpropagating dendritic action potentials as teaching signals. 1. LTM Dendritic spines (Rall 1960
  • image p510fig13.39 Shunting competition and informational noise suppression in affective gated dipoles, plus back-propagating action potentials for teaching signals, enable the net normalized adaptive weights to be learned. They never saturate!
    || Learn net dipole output pattern. Opponent "decision" controls learning. Cf. competitive learning. Learning signal, opponent extinction.
  • image p512fig13.40 A conditioning paradigm that illustrates what it means for conditioned excitators to extinguish.
    || Conditioned excitor extinguishes. 1. Learning phase: CS1 bell-> US, CS1-> Fear(-). 2. Forgetting phase: CS1 bell-> Forgetting. 3. The expectation of shock is disconfirmed.
  • image p513fig13.41 A conditioning paradigm that illustrates what it means for conditioned inhibitors not to extinguish.
    || Conditioned inhibitor does not extinguish. 1. Learning phase: CS1 light-> shock, CS1-> Fear(-); Forgetting phase: n/a;. 2. Learning phase: CS1 + CS2 bell-> no shock; CS2-> relief;. Forgetting phase: CS2 bell-> no forgetting. SAME CS could be used! SAME "teacher" in forgetting phase! Something else must be going on, or else causality would be violated!
  • image p513fig13.42 A conditioned excitor extinguishes because the expectation that was learned of a shock during the learning phase is disconfirmed during the forgetting phase.
    || Conditioned excitor extinguishes. Learning phase: CS1 bell-> US; CS1-> Fear(-); CS1-> shock; CS1 is conditioned to an expectation of shock. Forgetting phase: CS1 bell-> forgetting;. The expectation of shock is disconfirmed.
  • image p513fig13.43 A conditioned inhibitor does not extinguish because the expectation that was learned of no shock during the learning phase is not disconfirmed during the forgetting phase.
    || Conditioned inhibitor does not extinguish. 1. Learning phase: CS1 light-> Shock; CS1-> Fear(-);. Forgetting phase: n/a;. 2. Learning phase: CS1 bell + CS2-> NO shock; CS2-> relief(+); CS2-> no shock;. Forgetting phase: CS2 bell!-> no forgetting;. The expectation that "no shock" follows CS2 is NOT disconfirmed!
  • image p514fig13.44 Analog of the CogEM model in Figure 6.1 of (Damasio 1999).
    || (a) map of object X-> map of proto-self at inaugural instant-> [, map of proto-self modified]-> assembly of second-order map. (b) map of object X enhanced-> second-order map imaged.
  • image p519fig14.01 Coronal sections of prefrontal cortex. Note particularly the areas 11, 13, 14, and 12o.
    ||
  • image p520fig14.02 Macrocircuit of the main brain regions, and connections between them, that are modelled in the unified predictive Adaptive Resonance Theory (pART) of cognitive-emotional and working memory dynamics. Abbreviations in red denote brain regions used in cognitive-emotional dynamics. Those in green denote brain regions used in working memory dynamics. Black abbreviations denote brain regions that carry out visual perception, learning and recognition of visual object categories, and motion perception, spatial representation and target tracking. Arrows denote non-excitatory synapses. Hemidiscs denote adaptive excitatory synapses. Many adaptive synapses are bidirectional, thereby supporting synchronous resonant dynamics among multiple cortical regions. The output signals from the basal ganglia that regulate reinforcement learning and gating of multiple cortical areas are not shown. Also not shown are output signals from cortical areas to motor responses. V1: striate, or primary, visual cortex; V2 and V4: areas of prestriate visual cortex; MT: Middle Temporal cortex; MST: Medial Superior Temporal area; ITp: posterior InferoTemporal cortex; ITa: anterior InferoTemporal cortex; PPC: Posterior Parietal Cortex; LIP: Lateral InterParietal area; VPA: Ventral PreArcuate gyrus; FEF: Frontal Eye Fields; PHC: ParaHippocampal Cortex; DLPFC: DorsoLateral PreFrontal Cortex; HIPPO: hippocampus; LH: Lateral Hypothalamus; BG: Basal Ganglia; AMYG: AMYGdala; OFC: OrbitoFrontal Cortex; PRC: PeriRhinal Cortex; VPS: Ventral bank of the Principal Sulcus; VLPFC: VentroLateral PreFrontal Cortex. See the text for further details.
    ||
  • image p523fig14.03 (a) The MOTIVATOR neural model generalizes CogEM by also including the basal ganglia. It can hereby explain and simulate complementary functions of the amygdala and basal ganglia (SNc) during conditioning and learned performance. The basal ganglia generate Now Print signals in response to unexpected rewards. These signals modulate learning of new associations in many brain regions. The amygdala supports motivated attention to trigger actions that are expected to occur in response to conditioned or unconditioned stimuli. Object Categories represent visual or gustatory inputs in anterior inferotemporal (ITA) and rhinal (RHIN) cortices, respectively. Value Categories represent the value of anticipated outcomes on the basis of hunger and satiety inputs, in amygdala (AMYG) and lateral hypothalamus (LH). Object-Value Categories resolve the value of competing perceptual stimuli in medial (MORB) and lateral (ORB) orbitofrontal cortex. The Reward Expectation Filter detects the omission or delivery of rewards using a circuit that spans ventral striatum (VS), ventral pallidum (VP), striosomal delay (SD) cells in the ventral striatum, the pedunculopontine nucleus (PPTN) and midbrain dopaminergic neurons of the substantia nigra pars compacta/ventral tegmental area (SNc/VTA). The circuit that processes CS-related visual information (ITA, AMYG, ORB) operates in parallel with a circuit that processes US-related visual and gustatory information (RHIN, AMYG, MORB). (b) Reciprocal adaptive connections between hypothalamus and amygdala enable amygdala cells to become learned value categories. The bottom region represents hypothalamic cells, which receive converging taste and metabolite inputs whereby they become taste-drive cells. Bottom-up signals from activity patterns across these cells activate competing value category, or US Value Representations, in the amygdala.
A winning value category learns to respond selectively to specific combinations of taste-drive activity patterns and sends adaptive top-down priming signals back to the taste-drive cells that activated it. CS-activated conditioned reinforcer signals are also associatively linked to value categories. Adaptive connections end in (approximately) hemidiscs. See the text for details.
    ||
  • image p524fig14.04 (a) Model basal ganglia circuit for the control of dopaminergic Now Print signals from the substantia nigra pars compacta, or SNc, in response to unexpected rewards. Cortical inputs (Ii), activated by conditioned stimuli, learn to excite the SNc via a multi-stage pathway from the ventral striatum (S) to the ventral pallidum and then on to the PPTN (P) and the SNc (D). The inputs Ii excite the ventral striatum via adaptive weights W_IS, and the ventral striatum excites the SNc with strength W_PD. The striosomes, which contain an adaptive spectral timing mechanism [xij, Gij, Yij, Zij], learn to generate adaptively timed signals that inhibit reward-related activation of the SNc. Primary reward signals (I_R) from the lateral hypothalamus both excite the PPTN directly (with strength W_RP) and act as training signals to the ventral striatum S (with strength W_RS) that trains the weights W_IS. Arrowheads denote excitatory pathways, circles denote inhibitory pathways, and hemidiscs denote synapses at which learning occurs. Thick pathways denote dopaminergic signals.
    ||
  • image p530fig14.05 Displays used by (Buschman, Miller 2007) in their visual search experiments. See the text for details.
    || Fixation 500 ms-> Sample 1000 ms-> Delay 500 ms-> Visual [pop-out, search]- reaction time.
  • image p531fig14.06 Classification of scenic properties as texture categories by the ARTSCENE model. See the text for details.
    || Image-> Feature extraction (texture principal component rankings)-> Learning feature-to-scene mapping (texture category principal component rankings)<- scene class. Large-to-small attentional shrouds as principal component higher.
  • image p531fig14.07 Voting in the ARTSCENE model achieves even better prediction of scene type. See the text for details.
    || Image-> Feature extraction (texture principal component rankings)-> Learning feature-to-scene mapping (texture category principal component rankings)-> evidence accumulation (sum)-> scene class winner-take-all inference. Large-to-small attentional shrouds as principal component higher.
  • image p532fig14.08 Macrocircuit of the ARTSCENE Search neural model for learning to search for desired objects by using the sequences of already experienced objects and their locations to predict what and where the desired object is. V1 = First visual area or primary visual cortex; V2 = Second visual area; V4 = Fourth visual area; PPC = Posterior Parietal Cortex; ITp = posterior InferoTemporal cortex; ITa = anterior InferoTemporal cortex; MTL = Medial Temporal Lobe; PHC = ParaHippoCampal cortex; PRC = PeriRhinal Cortex; PFC = PreFrontal Cortex; DLPFC = DorsoLateral PreFrontal Cortex; VPFC = Ventral PFC; SC = Superior Colliculus.
    ||
  • image p533fig14.09 Search data and ARTSCENE Search simulations of them in each pair of images from (A) to (F). See the text for details.
    || 6*[data vs simulation], [Response time (ms) versus epoch].
  • image p540fig15.01 The timing of CS and US inputs in the delay and trace conditioning paradigms.
    || Delay and trace conditioning paradigms. [CS, US] vs [Delay, Trace]. To perform an adaptively timed CR, trace conditioning requires a CS memory trace over the Inter-Stimulus Interval (ISI).
  • image p541fig15.02 The neurotrophic Spectrally Timed Adaptive Resonance Theory, or nSTART, model of (Franklin, Grossberg 2017) includes hippocampus to enable adaptively timed learning that can bridge a trace conditioning gap, or other temporal gap between CS and US.
    || Hippocampus can sustain a Cognitive-Emotional resonance: that can support "the feeling of what happens" and knowing what event caused that feeling. [CS, US] -> Sensory Cortex (SC) <- motivational attention <-> category learning -> Prefrontal Cortex (PFC). SC conditioned reinforcement learning-> Amygdala (cannot bridge the temporal gap) incentive motivational learning-> PFC. SC adaptively timed learning and BDNF-> Hippocampus (can bridge the temporal gap) BDNF-> PFC. PFC adaptively timed motor learning-> cerebellum.
  • image p541fig15.03 Stages in the processing of adaptively timed conditioning, leading to timed responses in (d) that exhibit both individual Weber laws and an inverted U in conditioning as a function of ISI. See the text for details.
    || Curves of [Response vs ISI].
  • image p542fig15.04 Conditioning data from (Smith 1968; Millenson etal 1977). The former shows the kind of Weber Law and inverted U that were simulated in Figure 15.3. The latter shows that, if there are two ISIs during an experiment, then the animals learn to adaptively time their responses with two properly scaled Weber laws.
    || (left) One ISI (Smith 1968) [mean membrane extension (mm) versus time after CS onset (msec)]. (right) Two ISIs (Millenson etal 1977) [200, 100] msec CS test trials, [mean momentary CS amplitude (mm) vs time after CS onset (msec)]. (bottom) Conditioned eye blinks, made with nictitating membrane and/or eyelid, are adaptively timed: peak closure occurs at expected time(s) of arrival of the US following the CS and obeys a Weber Law.
  • image p543fig15.05 Simulation of conditioning with two ISIs that generate their own Weber Laws, as in the data shown in Figure 15.4.
    || Learning with two ISIs: simulation: R = sum[i: f(xi)*yi*zi] vs msec. Each peak obeys Weber Law! Strong evidence for spectral learning.
  • image p543fig15.06 The circuit between dentate granule cells and CA1 hippocampal pyramid cells seems to compute spectrally timed responses. See the text for details.
    || Hippocampal interpretation. 1. Dentate granule cells (Berger, Berry, Thompson 1986): "increasing firing...in the CS period...the latency...was constant". 2. Pyramidal cells: "Temporal model" Dentate granule cells-> CA3 pyramids. 3. Convergence (Squire etal 1989): 1e6 granule cells, 1.6e5 CA3 pyramids. 80-to-1 (ri).
  • image p544fig15.07 In response to a step CS and sustained storage by I_CS of that input, a spectrum of responses xi at different rates ri develops through time.
    || Spectral timing: activation. CS-> I_CS-> All xi. STM sensory representation. Spectral activation d[dt: xi] = ri*[-A*xi + (1 - B*xi)*I_CS].
  • image p544fig15.08 The spectral activities xi generate sigmoid signals f(xi) before the signals are, in turn, gated by habituative transmitters yi.
    || Habituative transmitter gate.
  • image p544fig15.09 As always, the habituative transmitter gate yi increases in response to accumulation and decreases due to gated inactivation, leading to the kinds of transmitter and output responses in the right hand column.
    || Habituative transmitter gate (Grossberg 1968). 1. d[dt: yi] = C*(1-yi) - D*f(xi)*yi, C-term: accumulation, D-term: gated inactivation. 2. Sigmoid signal f(xi) = xi^n / (B^n + xi^n). 3. Gated output signal f(xi)*yi.
  • image p545fig15.10 When the activity spectrum xi generates a spectrum of sigmoidal signals f(xi), the corresponding transmitters habituate at different rates. The output signals f(xi)*yi therefore generate a series of unimodal activity profiles that peak at different times, as in Figure 15.3a.
    || A timed spectrum of sampling intervals. [f(xi) activation, yi habituation, f(xi)*yi gated sampling] spectra. gated = sampling intervals.
  • image p545fig15.11 The adaptive weight, or LTM trace, zi learns from the US input I_US at times when the sampling signal f(xi)*yi is on. It then gates the habituative sampling signal f(xi)*yi to generate a doubly gated response f(xi)*yi*zi.
    || Associative learning, gated steepest descent learning (Grossberg 1969). d[dt: zi] = E*f(xi)*yi*[-zi + I_US], E-term read-out of CS gated signal, []-term read-out of US. Output from each population: f(xi)*yi*zi doubly gated signal.
  • image p546fig15.12 The adaptive weights zi in the spectrum that learn fastest are those whose sampling signals are large when the US occurs, as illustrated by the green region in this simulation of (Grossberg, Schmajuk 1989).
    || Computer simulation of spectral learning. (left) fast (right) slow. Constant ISI: 6 cells fast to slow, 4 learning trials, 1 test trial.
  • image p546fig15.13 The total learned response is a sum R of all the doubly gated signals in the spectrum.
    || Adaptive timing is a population property. Total output signal: R = sum[i: f(xi)*yi*zi]. Adaptive timing is a collective property of the circuit. "Random" spectrum of rates achieves good collective timing.
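The spectral mechanism of Figures 15.7-15.13 can be sketched in a few lines: each cell integrates the stored CS at its own rate ri, its sigmoid signal f(xi) habituates a gate yi, and the gated sampling signal f(xi)*yi peaks at a rate-dependent time, so a spectrum of rates yields a series of sampling intervals. All parameter values below are illustrative assumptions, not the published ones.

```python
# Spectral timing sketch: dx/dt = r*(-A*x + (1 - Bx*x)*I_CS) with
# I_CS = 1, sigmoid f(x) = x^n/(K^n + x^n), habituative gate
# dy/dt = C*(1 - y) - D*f(x)*y. Slower cells (smaller r) peak later,
# which is what lets the population time long intervals collectively.
A, Bx = 1.0, 1.0          # activation decay / saturation
C, D = 0.01, 1.0          # gate accumulation / inactivation
K, n = 0.3, 8             # sigmoid half-point and steepness
dt, T = 0.01, 100.0
rates = [1.0, 0.3, 0.1]   # fast -> slow spectrum

def f(x):                 # sigmoid signal function
    return x**n / (K**n + x**n)

peak_times = []
for r in rates:
    x, y = 0.0, 1.0
    best_t, best_g = 0.0, -1.0
    for step in range(int(T / dt)):
        g = f(x) * y                                   # gated sampling signal
        if g > best_g:
            best_t, best_g = step * dt, g
        x += dt * r * (-A * x + (1.0 - Bx * x) * 1.0)  # I_CS = 1
        y += dt * (C * (1.0 - y) - D * f(x) * y)
    peak_times.append(best_t)

assert peak_times[0] < peak_times[1] < peak_times[2]   # slower cells peak later
```

Multiplying each gated signal by a learned weight zi and summing, as in R = sum[i: f(xi)*yi*zi], then selects the cells whose peaks coincide with the US, giving the adaptively timed population response.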
  • image p547fig15.14 An individual
  • image p547fig15.15 Expected non-occurrences do not prevent the processing of sensory events and their expectations. Rather, they prevent mismatches of those expectations from triggering orienting reactions.
    || Expected non-occurrence of goal. Some rewards are reliable but delayed in time. Does not lead to orienting reactions: How? Both expected and unexpected nonoccurrences are due to mismatch of a sensory event with learned expectations. Expected non-occurrences do not inhibit sensory matching: eg a pigeon can see an earlier-than-usual food pellet. Hypothesis: Expected non-occurrences inhibit the process whereby sensory mismatch activates orienting reactions. Mismatch not-> orient.
  • image p548fig15.16 Homologous recognition learning and reinforcement learning macrocircuits enable adaptively timed conditioning in the reinforcement learning circuit to increase inhibition of the orienting system at times when a mismatch in the recognition system would have reduced inhibition of it.
    || Homolog between ART and CogEM model, complementary systems. [Recognition, Reinforcement] learning vs [Attentional, Orienting] system. Reinforcement: timing, drive representation.
  • image p548fig15.17 The timing paradox asks how inhibition of an orienting response (-) can be spread throughout the ISI, yet accurately timed responding can be excited (+) at the end of the ISI.
    || Timing paradox. [CS light, US shock] vs t. ISI = InterStimulus Interval = expected delay of reinforcer. Want timing to be accurate. Want to inhibit exploratory behaviour throughout the ISI.
  • image p549fig15.18 The Weber Law solves the timing paradox by creating an adaptively timed response throughout the ISI that peaks at the ISI. Within the reinforcement learning circuit, this response can maintain inhibition of the orienting system A at the same time as it generates adaptively timed incentive motivation to the orbitofrontal cortex.
    || Weber Law: reconciling accurate and distributed timing. Resolution: Output can inhibit orienting, peak response probability. What about different ISIs? Standard deviation = peak time. Weber law rule.
  • image p549fig15.19 How the adaptively timed hippocampal spectrum T inhibits (red arrow) the orienting system A as motivated attention in orbitofrontal cortex Si(2) peaks at the ISI.
    || Conditioning, Attention, and Timing circuit. Hippocampus spectrum-> Amygdala orienting system-> neocortex motivational attention. Adaptive timing inhibits orienting system and maintains adaptively timed Motivated Attention on the CS.
  • image p550fig15.20 Adaptively timed conditioning of Long Term Depression, or LTD, occurs in the cerebellum at synapses between parallel fibres and Purkinje cells, thereby reducing inhibition of subcortical nucleus cells and enabling them to express their learned movement gains within the learned time interval. Also see Figure 15.21.
    || [CS-Activated input pathways parallel fibres, US-Activated climbing fibres]-> [Subcortical nucleus (gain control), Cerebellar cortex- Purkinje cells (timing)].
  • image p551fig15.21 The most important cell types and circuitry of the cerebellum: Purkinje cells (PC) receive excitatory inputs from the climbing fibres (CF) that originate in the inferior olive (IO) and from parallel fibres (PF), which are the axons of granule cells (GC). GCs, in turn, receive inputs from the mossy fibres (MF) coming from the precerebellar nuclei (PCN). The PF also inhibit PC via basket cells (BC), thereby helping to select the most highly activated PC. The PC generate inhibitory outputs from the cerebellum cortex to the deep cerebellar nuclei (DCN), as in Figure 15.20. Excitatory signals are denoted by (+) and inhibitory signals by (-). Other notations: GL- granular layer; GoC- Golgi cells; ML- molecular layer; PCL- Purkinje cell layer; SC- stellate cell; WM- white matter.
    ||
  • image p551fig15.22 Responses of a retinal cone in the turtle retina to brief flashes of light of increasing intensity.
    || response vs msec.
  • image p552fig15.23 Cerebellar biochemistry that supports the hypothesis of how mGluR supports adaptively timed conditioning at cerebellar Purkinje cells. AMPA, amino-3-hydroxy-5-methyl-4-isoxazole propionic acid-sensitive glutamate receptor; cGMP, cyclic guanosine monophosphate; DAG, diacylglycerol; glu, glutamate; GC, guanylyl cyclase; gK, Ca+-dependent K+ channel protein; GTP, guanosine triphosphate; IP 3
  • image p556fig15.24 (a) Data showing normally timed responding (solid curve) and short latency responses after lesioning cerebellar cortex (dashed curve). (b) computer simulation of short latency response after ablation of model cerebellar cortex.
    ||
  • image p557fig15.25 Computer simulations of (a) adaptively timed long term depression at Purkinje cells, and (b) adaptively timed activation of cerebellar nuclear cells.
    || response vs time (msec)
  • image p557fig15.26 Brain regions and processes that contribute to autistic behavioral symptoms when they become imbalanced in prescribed ways.
    || Basal Ganglia prolonged gate opening <-> { Amygdala emotionally depressed-> [hippocampus- hyperspecific learning; Cerebellum- adaptive timing fails; hypofrontal blocking fails, no Theory of Mind]-> Neocortex; Neocortex- rewards not received-> Amygdala}.
  • image p559fig15.27 Brain regions and processes that contribute to the release of dopaminergic Now Print signals by the substantia nigra pars compacta, or SNc, in response to unexpected reinforcing events. See the text for details.
    || Model of spectrally timed SNc learning (Brown, Bullock, Grossberg 1999). Delayed inhibitory expectations of reward. Dopamine cells signal an error in reward prediction timing or magnitude. Immediate excitatory predictions of reward. Lateral hypothalamus (Primary Reward Input)-> [(+)ventral striatum <-> ventral pallidum (+)-> PPTN(+)-> SNc]. SNc-> [dopamine signal -> ventral striatum, Striosomal cells]. Conditioned Stimuli (CS)(+)-> [ventral striatum, striosomal cells]. Striosomal cells(-)-> SNc.
  • image p559fig15.28 Neurophysiological data (left column) and model simulations (right column) of SNc responses. See the text for details.
    || membrane potential vs time
  • image p560fig15.29 Excitatory pathways that support activation of the SNc by a US and the conditioning of a CS to the US.
    || Excitatory pathway. Primary reward (apple juice) briefly excites lateral hypothalamus. Hypothalamic-PPTN excitation causes SNc dopamine burst. Hypothalamic activity excites ventral striatum for training. Active CS working memory signals learn to excite ventral striatum. Lateral hypothalamus (Primary Reward Input)-> [(+)ventral striatum <-> ventral pallidum(+)-> PPTN(+)-> SNc]. SNc-> [dopamine signal -> ventral striatum. Conditioned Stimuli working memory trace (CS)(+)-> ventral striatum.
  • image p560fig15.30 The inhibitory pathway from striosomal cells to the SNc is able to inhibit the SNc when a reward occurs with expected timing and magnitude.
    || Inhibitory pathway. Learning: CS-striosomal LTP occurs due to a three-way coincidence [An active CS working memory input, a Ca2+ spike, a dopamine burst]; Signaling: The delayed Ca2+ spike facilitates striosomal-SNc inhibition;. Striosomal cells learn to predict both timing and magnitude of reward signal to cancel it: reward expectation;. Conditioned stimuli (CS) LTP-> Striosomal cells <- dopamine | (-)-> SNc->.
  • image p561fig15.31 The CS activates a population of striosomal cells that respond with different delays in order to enable adaptively timed inhibition of the SNc.
    || Expectation timing (Fiala, Grossberg, Bullock 1996; Grossberg, Merrill 1992, 1996; Grossberg, Schmajuk 1989). How do cells bridge hundreds of milliseconds? Timing spectrum (msec). 1. CS activates a population of cells with delayed transient signals: mGluR. 2. Each has a different delay, so that the range of delays covers the entire interval. 3. Delayed transients gate both learning and read-out of expectations.
  • image p561fig15.32 The SNc can generate both dopamine bursts and dips in response to rewards whose amplitude is unexpectedly large or small.
    || Inhibitory pathway: expectation magnitude. 1. If reward is greater than expected, a dopamine burst causes striosomal expectation to increase. 2. If reward is less than expected, a dopamine dip causes striosomal expectation to decrease. 3. This is a negative feedback control system for learning. Conditioned stimuli (CS)-> Striosomal cells <- dopamine | (-)-> SNc->.
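The burst/dip logic in this caption is a negative feedback loop on reward magnitude. A minimal sketch (my own toy code, not the published Brown, Bullock, and Grossberg model): the striosomal expectation w is raised by dopamine bursts (reward greater than expected) and lowered by dips (reward less than expected) until it cancels the reward signal.

```python
# Toy negative-feedback learning of a reward-magnitude expectation.
# Names (update_expectation, lr) are illustrative assumptions.
def update_expectation(w, reward, lr=0.2):
    dopamine = reward - w      # burst if positive, dip if negative
    return w + lr * dopamine   # burst raises, dip lowers the expectation

w = 0.0
for _ in range(100):
    w = update_expectation(w, reward=1.0)
# w converges toward the reward magnitude, so the dopamine error shrinks to ~0
```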
  • image p563fig15.33 The basal ganglia gate neural processing in many parts of the brain. The feedback loop through the lateral orbitofrontal cortex (blue arrow, lateral orbitofrontal) is the one that MOTIVATOR models.
    || MOTIVATOR models one of several thalamocortical loops through basal ganglia (Adapted from Fundamental Neuroscience. 2002 Copyright Elsevier). [cortex-> striatum-> pallidum S. nigra-> thalamus] vs [motor, oculomotor, dorsolateral prefrontal, lateral orbitofrontal, anterior cingulate]. thalamus-> [striatum, cortex].
  • image p563fig15.34 The colored regions are distinct parts of the basal ganglia in the loops depicted in Figure 15.33.
    || Distinct basal ganglia zones for each loop (Adapted from Fundamental Neuroscience. 2002 Copyright Elsevier).
  • image p564fig15.35 (a) A pair of recurrent shunting on-center off-surround networks for control of the fore limbs and hind limbs. (b) Varying the GO signal to these networks can trigger changes in movement gaits. See the text for details.
    ||
  • image p565fig15.36 (a) The FOVEATE model circuit for the control of saccadic eye movements within the peri-pontine reticular formation. (b) A simulated saccade staircase. See the text for details.
    || [left, right] eye FOVEATE model. [vertical vs horizontal] position (deg).
  • image p566fig15.37 Steps in the FOVEATE model
  • image p567fig15.38 (a) The Gated Pacemaker model for the control of circadian rhythms is a recurrent shunting on-center off-surround network whose excitatory feedback signals are gated by habituative transmitters. Tonic arousal signals energize the pacemaker. Diurnal (left) and nocturnal (right) pacemakers are determined by whether phasic light signals turn the pacemaker on or off. An activity-dependent fatigue signal prevents the pacemaker from becoming overly active for too long. (b) Two simulations of circadian activity cycles during different schedules of light (L) and dark (D). See the text for details.
    || sourceOn-> on-cells (recurrent) <-(-) (-)> off-cells (recurrent) <-sourceOff. on-cells-> activity-> off-cells. off-cells-> fatigue. Diurnal: sourceOn=[light, arousal]; sourceOff=arousal;. Nocturnal: sourceOn=arousal; sourceOff=[arousal, light];.
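A minimal sketch of one ingredient of the Gated Pacemaker: the habituative transmitter gate z that multiplies the feedback signal S. The law dz/dt = A(1 - z) - B·S·z means z recovers toward 1 and is depleted by use, so a gated signal S·z is strong at onset and then habituates. Parameters A, B and the Euler step are illustrative assumptions, not the book's values.

```python
# Habituative transmitter gate: dz/dt = A*(1 - z) - B*S*z
A, B, dt = 0.01, 0.1, 1.0
z, S = 1.0, 1.0          # fully accumulated transmitter; signal switches on
gated = []
for _ in range(2000):
    z += dt * (A * (1.0 - z) - B * S * z)
    gated.append(S * z)  # gated output: strong at onset, then habituates
# gated[] decays from ~0.9 toward the equilibrium A/(A+B)
```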
  • image p568fig15.39 Circuits of the MOTIVATOR model that show hypothalamic gated dipoles.
    || Inputs-> [object, value] categories-> object-value categories-> [reward expectation filter, [FEF, EAT] outputs]. Reward expectation filter [DA dip, arousal burst]-> alpha1 non-specific arousal-> value categories. Msi drive inputs-> value categories.
  • image p569fig15.40 The direct and indirect basal ganglia circuits that control GO and STOP movement signals. See the text for details.
    || [Direct path GO(+), Indirect path STOP(+), dopamine from SNc(+-)]-> striatum. GO-> GPi/SNr-> Thalamus (VA/Vlo) <-> frontal cortex. STOP-> GPe <-> STN-> GPi/SNr. NAc-> GPi/SNr.
  • image p573fig16.01 The experimental chamber (A) and neurophysiological recordings from a rat hippocampus (B) that led to the discovery of place cells. See the text for details.
    ||
  • image p574fig16.02 Neurophysiological recordings of 18 different place cell receptive fields. See the text for details.
    ||
  • image p575fig16.03 As a rat navigates in its experimental chamber (black curves), neurophysiological recordings disclose the firing patterns (in red) of (a) a hippocampal place cell and (b) an entorhinal grid cell.
    ||
  • image p578fig16.04 Cross-sections of the hippocampal regions and the inputs to them. See the text for details.
    || EC-> CA1-> CA3-> DG. Layers [V/VI, II, II].
  • image p580fig16.05 Macrocircuit of the GridPlaceMap model, which can learn both 2D grid cells and place cells in response to realistic trajectories of navigating rats using a hierarchy of SOMs with identical equations.
    || GridPlaceMap model: rate-based and spiking (Pilly, Grossberg 2012). Pre-wired 1D stripe cells, learns both 2D grid and place cells! Same laws for both; both select most frequent and energetic inputs. Place cells emerge gradually in response to developing grid cells. [place-> grid-> stripe] cells-> path integration-> vestibular signals
  • image p581fig16.06 The learning of hexagonal grid cell receptive fields as an animal navigates an open field is a natural consequence of simple trigonometric properties of the positions at which the firing of stripe cells that are tuned to different directions will co-occur.
    || The Trigonometry of spatial navigation. Coactivation of stripe cells.
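The trigonometric point can be checked numerically. For stripe directions 0, 60, and 120 degrees, the projections satisfy p0 - p60 + p120 = 0 identically, so whenever two stripe cells of the same spacing are coactive the third nearly is too; no such identity holds at 45-degree separations. A sketch with illustrative parameters (spacing, tolerance, and sample counts are my assumptions, not the book's):

```python
import numpy as np

rng = np.random.default_rng(0)
pts = rng.uniform(-10.0, 10.0, size=(200_000, 2))   # random 2D positions
lam, tol = 1.0, 0.05                                # stripe spacing, field half-width

def stripe_active(points, theta_deg):
    # A stripe cell fires where the projected position along its preferred
    # direction is within tol of a multiple of the spacing lam.
    u = np.array([np.cos(np.radians(theta_deg)), np.sin(np.radians(theta_deg))])
    proj = points @ u
    return np.abs(proj - lam * np.round(proj / lam)) < tol

def triple_coactivations(points, thetas):
    mask = stripe_active(points, thetas[0])
    for th in thetas[1:]:
        mask &= stripe_active(points, th)
    return int(mask.sum())

hex_count = triple_coactivations(pts, (0.0, 60.0, 120.0))   # hexagonal case
rect_count = triple_coactivations(pts, (0.0, 45.0, 90.0))   # rectangular case
# hex_count greatly exceeds rect_count: 60-degree separations make three-way
# coincidences frequent and energetic, favoring hexagonal grid learning
```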
  • image p582fig16.07 Stripe cells were predicted in (Mhatre, Gorchetchnikov, Grossberg 2012) to convert linear velocity signals into the distances travelled in particular directions. They are modeled by directionally-sensitive ring attractors, which help to explain their periodic activation as an animal continues to move in a given direction. See the text for details.
    || Stripe cells. Stripe cells are predicted to exist in (or no later than) EC layer (III, V/VI). Linear path integrators: represent distance traveled using linear velocity modulated with head direction signal. Ring attractor circuit: the activity bump represents distance traveled, stripe cells with same spatial period and directional preference fire with different spatial phases at different ring positions. Distance is computed directly, it does not require decoding by oscillatory interference. Periodic stripe cell activation due to ring anatomy: periodic boundary conditions. Stripe firing fields with multiple orientations, phases and scales.
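A minimal sketch (my notation and parameters, not the published model) of a stripe cell as a directionally-sensitive ring attractor: the activity bump's phase integrates the velocity component along the preferred direction, modulo the stripe spacing, which yields the periodic activation described above.

```python
import math

class StripeRing:
    """Toy ring-attractor stripe cell: bump phase = distance mod spacing."""
    def __init__(self, n_cells=20, spacing=0.35, pref_dir_deg=0.0):
        self.n, self.spacing = n_cells, spacing
        self.pref = math.radians(pref_dir_deg)
        self.phase = 0.0                     # bump position on the ring, in [0, 1)
    def step(self, speed, heading_deg, dt):
        # Linear velocity modulated by heading relative to preferred direction
        v_along = speed * math.cos(math.radians(heading_deg) - self.pref)
        self.phase = (self.phase + v_along * dt / self.spacing) % 1.0
    def active_cell(self):
        return int(self.phase * self.n) % self.n

ring = StripeRing()
for _ in range(100):                         # travel 1 s at 0.35 m/s along 0 deg
    ring.step(speed=0.35, heading_deg=0.0, dt=0.01)
# exactly one spacing travelled: the bump returns to its start (periodic field)
```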
  • image p582fig16.08 Some experimental evidence for stripe-like cell receptive fields has been reported. The band cells posited by Neil Burgess also exhibit the one-dimensional firing symmetry of stripe cells, but are modeled by oscillatory interference. See the text for details.
    || Evidence for stripe-like cells. Entorhinal cortex data (Sargolini, Fyhn, Hafting, McNaughton, Witter, Moser, Moser 2006; Krupic, Burgess, O'Keefe).
  • image p583fig16.09 The GRIDSmap model used algorithmically defined stripe cells to process realistic rat trajectories. The stripe cell outputs then formed inputs to the adaptive filter of a self-organizing map which learned hexagonal grid cell receptive fields.
    || GRIDSmap. Self-organizing map receives inputs from stripe cells and learns to respond to most frequent co-activation patterns. Stripe cells combine speed and head direction to create a periodic 1D position code. Virtual rat navigated using live rat trajectories from Moser Lab. Speed and head direction drives stripe cells.
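The SOM stage can be illustrated with a generic winner-take-all network trained by an instar-style law; this is a toy stand-in for GRIDSmap's SOM, not the published implementation, and the "stripe-cell coactivation" input vectors here are invented for the example. Each map cell comes to respond to the input pattern that most often and most energetically drives it.

```python
import numpy as np

rng = np.random.default_rng(1)
pattern_a = np.array([1.0, 0.0, 0.0, 1.0])   # hypothetical coactivation vectors
pattern_b = np.array([0.0, 1.0, 1.0, 0.0])
# initialize map-cell weights near the patterns, plus noise
weights = np.array([pattern_a, pattern_b]) + 0.1 * rng.standard_normal((2, 4))

def train_step(weights, x, lr=0.1):
    winner = int(np.argmin(((weights - x) ** 2).sum(axis=1)))  # competition
    weights[winner] += lr * (x - weights[winner])              # instar: track the input
    return winner

for _ in range(200):                         # present the two patterns alternately
    train_step(weights, pattern_a)
    train_step(weights, pattern_b)
# each map cell's weights converge onto its most frequent coactivation pattern
```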
  • image p583fig16.10 The GRIDSmap model is embedded into a more complete representation of the processing stages from receipt of angular head velocity and linear velocity signals to this learning of place cells.
    || GRIDSmap. Pre-wired 2D stripe cells, learns 2D grid cells. vestibular cells [angular head velocity-> head direction cells, linear velocity]-> stripe cells- small scale 1D periodic spatial code (ECIII)-> SOM grid cells entorhinal cortex- small scale 2D periodic spatial scale-> SOM place cells hippocampal cortex- large scale 2D spatial code (dentate/CA3). Unified hierarchy of SOMs.
  • image p584fig16.11 GRIDSmap simulation of the learning of hexagonal grid fields. See the text for details.
    || Simulation results. Multiple phases per scale. response vs length scale (0.5m+).
  • image p584fig16.12 Temporal development of grid cell receptive fields on successive learning trials (1,3,5,7,25,50,75,100).
    || Temporal development of grid fields. Cells begin to exhibit grid structure by 3rd trial. Orientations of the emergent grid rotate to align with each other over trials.
  • image p585fig16.13 Hexagonal grid cell receptive fields develop if their stripe cell directional preferences are separated by 7, 10, 15, 20, or random numbers of degrees. The number and directional selectivities of stripe cells can thus be chosen within broad limits without undermining grid cell development.
    ||
  • image p585fig16.14 Superimposing firing of stripe cells whose directional preferences differ by 60 degrees supports learning hexagonal grid cell receptive fields in GRIDSmap.
    || GRIDSmap: from stripe cells to grid cells. Grid-cell Regularity from Integrated Distance through Self-organizing map. Superimposing firing of stripe cells oriented at intervals of 60 degrees. Hexagonal grid!
  • image p586fig16.15 Superimposing stripe cells oriented by 45 degrees does not lead to learning of rectangular grids in GRIDSmap, but it does in an oscillatory interference model.
    || Why is a hexagonal grid favored? Superimposing firing of stripe cells oriented at intervals of 45 degrees. Rectangular grid. This and many other possibilities do not happen in vivo. They do happen in the oscillatory interference model. How are they prevented in GRIDSmap?
  • image p586fig16.16 In the place cell learning model of (Gorchetchnikov, Grossberg 2007), three populations of five cells each of entorhinal grid cells (only two are shown) with different spatial periods input to the model
  • image p587fig16.17 A finer analysis of the 2D trigonometry of spatial navigation showed that both the frequency and amplitude of coactivations by stripe cells determine the learning of hexagonal grid fields.
    || A refined analysis: SOM amplifies most frequent and energetic coactivations (Pilly, Grossberg 2012). [linear track, 2D environment]. (left) Stripe fields separated by 90°. 25 coactivations by 2 inputs. (right) Stripe fields separated by 60°. 23 coactivations by 3 inputs.
  • image p588fig16.18 Simulations of coordinated learning of grid cell receptive fields (second row) and unimodal place cell receptive fields (third row) by the hierarchy of SOMs in the GridPlaceMap model. Note the exquisite regularity of the hexagonal grid cell firing fields.
    || [stripe, grid, place] cells vs [spikes on trajectory, unsmoothed rate map, smoothed rate map].
  • image p589fig16.19 Neurophysiological data showing the smaller dorsal grid cell scales and the larger ventral grid cell scales.
    || Spatial scale of grid cells increase along the MEC dorsoventral axis (Hafting etal 2005; Sargolini etal 2006; Brun etal 2008). [dorsal (left), ventral (right)] cart [rate map, autocorrelogram]. How does the spatial scale increase along the MEC dorsoventral axis?
  • image p590fig16.20 Integration rate of grid cells decreases along the dorsoventral gradient of the Medial Entorhinal Cortex, or MEC.
    || Dorsoventral gradient in the rate of synaptic integration of MEC layer II stellate cells (Garden etal 2008). Cross-section of [Hp, CC, LEC, MEC. (A left column) [dorsal, ventral] mV? vs msec. (B center column) [half width (ms), rise time (ms), amplitude (mV)] vs location (μm). (C right upper) responses (D right lower) width (ms) vs location (μm).
  • image p590fig16.21 Frequency of membrane potential oscillations in grid cells decreases along the dorsoventral gradient of the MEC.
    || Dorsoventral gradient in the frequency of membrane potential oscillations of MEC layer II stellate cells (Giocomo etal 2007). (C left column) Oscillation (Hz) vs distance from dorsal surface (mm). (D right upper) [dorsal, ventral] oscillations 5mV-500ms. (E right lower) [dorsal, ventral] oscillations 100ms. Both membrane potential oscillation frequency and resonance frequency decrease from the dorsal to ventral end of MEC.
  • image p591fig16.22 Time constants and duration of afterhyperpolarization currents of grid cells increase along the dorsoventral gradient of the MEC.
    || Dorsoventral gradient in afterhyperpolarization (AHP) kinetics of MEC layer II stellate cells (Navratilova etal 2012). [mAHP time constant (ms), Half-width (mm)] vs distance from the dorsal surface (mm), at [-55, -50, -45] mV. Time constants and duration of AHP increase from the dorsal to the ventral end of MEC layer II. Effectively, the relative refractory period is longer for ventral stellate cells in MEC layer II.
  • image p591fig16.23 The Spectral Spacing Model uses a rate gradient to learn a spatial gradient of grid cell receptive field sizes along the dorsoventral gradient of the MEC.
    || Spectral spacing model. Map cells responding to stripe cell inputs of multiple scales. Grid cells: MEC layer II (small scale 2D spatial code). Stripe cells: PaS / MEC deep layer (small scale 1D spatial code). Path Integration. Vestibular signals- linear velocity and angular head velocity. SOM. How do entorhinal cells solve the scale selection problem?
  • image p592fig16.24 Parameter settings in the Spectral Spacing Model that were used in simulations.
    || Simulation settings. Activity vs distance (cm). Learning trials: 40.
  • image p593fig16.25 Spectral Spacing Model STM, MTM, and LTM equations. The rate spectrum that determines the dorsoventral gradient of multiple grid cell properties is defined by μm.
    || Spectral Spacing Model equations. [STM, MTM, LTM]. μm = rate spectrum.
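The figure gives the model's STM, MTM, and LTM equations; as an illustrative sketch of the rate-spectrum idea only (my notation and parameters, not the book's equations), cells that share one shunting law but differ in their integration rate μ respond to the same input with different speeds and widths, which is the raw material for the dorsoventral gradient of scales.

```python
# Toy leaky integrator dx/dt = mu * (-x + I): one law, a spectrum of rates mu.
def response(mu, dt=0.001, t_on=0.5, t_end=6.0):
    x, xs, t = 0.0, [], 0.0
    while t < t_end:
        I = 1.0 if t < t_on else 0.0        # brief input pulse
        x += dt * mu * (-x + I)             # integration at cell-specific rate mu
        xs.append(x)
        t += dt
    return xs

def half_width(xs, dt=0.001):
    # duration for which the response exceeds half its peak
    peak = max(xs)
    return dt * sum(1 for x in xs if x > 0.5 * peak)

fast, slow = response(mu=2.0), response(mu=0.5)
# the slower (more "ventral") cell responds more weakly but far more broadly
```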
  • image p593fig16.26 Data (left column) and simulations (right column) of the gradient of increasing grid cell spacing along the dorsoventral axis of MEC.
    || Gradient of grid spacing along dorsoventral axis of MEC (Brun etal 2008). data-[Distance (m?), Median grid spacing (m?)] simulations-[Grid spacing (cm), Grid spacing (cm)] vs response rate.
  • image p594fig16.27 Data (left column) and simulations (right column) of the gradient of increasing grid cell field width along the dorsoventral axis of MEC.
    || Gradient of field width along dorsoventral axis of MEC (Brun etal 2008). data-[Distance (m?), Width autocorr peak (m?)] simulations-[Grid field width (cm), Width autocorr peak (cm)] vs response rate.
  • image p595fig16.28 Data (left column) and simulations (right column) about peak and mean grid cell response rates along the dorsoventral axis of MEC.
    || Peak and mean rates at different locations along DV axis of MEC (Brun etal 2008). Peak rate (Hz) vs [data- DV quarter, simulations- Response rate].
  • image p596fig16.29 Data (top row) and simulations (bottom row) showing decreasing frequency of subthreshold membrane potential oscillations along the DV axis of MEC.
    || Subthreshold membrane potential oscillations at different locations along DV axis of MEC (Giocomo etal 2020; Yoshida etal 2011). Data [oscillations (Hz) vs distance from dorsal surface (mm) @[-50, -45] mV, Frequency (Hz) vs [-58, -54, -50] mV]. Simulations MPO frequency (Hz) vs [response, habituation] rate.
  • image p596fig16.30 Data (top row) and simulations (bottom row) of spatial phases of learned grid and place cells.
    || Spatial phases of learned grid and place cells (Hafting etal 2005). Data: Cross-correlogram of rate maps of two grid cells; Distribution of phase difference: distance from origin to nearest peak in cross-correlogram. Simulations: Grid cell histogram of spatial correlation coefficients; Place cell histogram of spatial correlation coefficients.
  • image p597fig16.31 Data (a) and simulations (b-d) about multimodal place cell receptive fields in large spaces. The simulations are the result of learned place fields.
    || Multimodal place cell firing in large spaces (Fenton etal 2008; Henriksen etal 2010; Park etal 2011). Number of cells (%) vs Number of place fields. [2, 3] place fields, 100*100 cm space.
  • image p597fig16.32 Data (top row) and simulations (bottom row) about grid cell development in juvenile rats. Grid score increases (a-b and d), whereas grid spacing remains fairly flat (c and e).
    || Model fits data about grid cell development (Wills etal 2010; Langston etal 2010). Data: [Gridness, grid score, inter-field distance (cm)]. Simulations: [Gridness score, Grid spacing (cm)] vs trial.
  • image p598fig16.33 Data (top row) and simulations (bottom row) of changes in place cell properties in juvenile rats, notably about spatial information (a,c) and inter-trial stability (b,d).
    || Model fits data about grid cell development (Wills etal 2010). [Data, Simulation] vs [spatial information, inter-trial stability]. x-axis [age (postnatal day), trial].
  • image p598fig16.34 The spiking GridPlaceMap model generates theta-modulated place and grid cell firing, unlike the rate-based model.
    || Theta-modulated cells in spiking model. [place, grid] cell vs [membrane potential (mV vs time), frequency vs inter-spike intervals (s), power spectra (normalized power vs frequency (Hz))].
  • image p599fig16.35 Data (a) and simulations (b,c) about anatomically overlapping grid cell modules. (a) shows the anatomical distribution of grid cells belonging to different modules in one animal. DV location (mm) vs postrhinal border. (b) shows the simulated distribution of learned grid cell spacings from two stripe cell scales. frequency (%) vs grid spacing (cm). mu = [1, 0.6]. (c) shows what happens when half the cells respond with one rate and half another rate. (d) shows the same with three rates. (e-g) show spatial maps and autocorrelograms of grid cells that arise from the different rates in (d). [rate map, autocorrelogram] vs [score [1.07, 0.5, 0.67], spacing (cm) [23.58, 41, 63.64]].
    ||
  • image p600fig16.36 The entorhinal-hippocampal system has properties of an ART spatial category learning system, with hippocampal place cells as the spatial categories. See the text for details.
    || Entorhinal-hippocampal interactions as an ART system. Hippocampal place cells as spatial categories. Angular head velocity-> head direction cells-> stripe cells- small scale 1D periodic code (ECIII) SOM-> grid cells- small scale 2D periodic code (ECII) SOM-> place cells- larger scale spatial map (DG/CA3)-> place cells (CA1)-> conjunctive-coding cells (EC V/VI)-> top-down feedback back to stripe cells- small scale 1D periodic code (ECIII). stripe cells- small scale 1D periodic code (ECIII)-> place cells (CA1).
  • image p602fig16.37 Data showing the effect of hippocampal inactivation by muscimol on grid cell firing before, during, and six hours after the muscimol, reading from left to right.
    || Hippocampal inactivation disrupts grid cells (Bonnevie etal 2013). muscimol inactivation. spikes on trajectory: [before, after min [6-20, 20-40, 40-60, 6h]]. rate map (Hz) [18.6, 11.4, 9.5, 6.7, 10.8]. spatial autocorrelogram g=[1.12, 0.05, -0.34, 0.09, 1.27].
  • image p603fig16.38 Role of hippocampal feedback in maintaining grid fields. (a) Data showing the effect of hippocampal inactivation before and during muscimol inhibition of hippocampal cells, as in Figure 16.37. (b) Model simulation with normal grid fields. (c) Model simulation that emulates the effect of hippocampal inhibition on grid fields.
    || (a) Data: hippocampal inactivation [before, after] cart [spikes on trajectory (p: [18.6, 6.7] Hz), spatial autocorrelogram (g= [1.12, 0.09])]. (b) Model: noise-free path integration, [spikes on trajectory (p: 14.56 Hz), rate map, spatial autocorrelogram (g= 1.41), dynamic autocorrelogram (g=0.6)]. (c) Model: noisy path integration + non-specific tonic inhibition, [spikes on trajectory (p: 11.33 Hz), rate map, spatial autocorrelogram (g= 0.05), dynamic autocorrelogram (g=0.047)].
  • image p605fig16.39 Data showing effects of medial septum (MS) inactivation on grid cells and network theta oscillations in medial entorhinal cortex (MEC). (A) Examples of disruption in the spatial expression of the hexagonal grid structure for two grid cells (Brandon etal 2011). (B) Temporary reduction in the power and frequency of network theta oscillations (Koenig etal 2011). (C) Temporary reduction in the gridness score, mean firing rate, and spatial stability of grid cells (Koenig etal 2011).
    || Disruptive effects of Medial Septum inactivation in Medial Entorhinal Cortex (Brandon etal 2011; Koenig etal 2011). (A) Rate map [rate map, spatial autocorrelations, trajectory] vs [baseline, sub-sampled, medial septum inactivation, 3-6 hour recovery, 24 hour recovery], [rate map (Hz- m, p), spatial autocorrelations (gridness)][ 1.2, 7.2, 1.1; 0.25, 1.7, 0.6; 0.25, 2.5, -0.53; 0.7, 5.1, 0.55; 1.0, 5.3, 1.3; 2.1, 15, 0.19; 1.7, 12, 0.71; 1.7, 3.2, -0.22; 1.8, 9.1, 0.68; 2.5, 13, 0.46]. (B) [normalized power at 7-9 Hz, frequency (Hz)] vs 5-minute periods. (C) [mean gridness score (+-SEM), mean firing rate (% of baseline), mean correlation coeff (+-SEM)] vs 10-minute periods.
  • image p607fig16.40 Effects of medial septum (MS) inactivation on grid cells. (a) Each row shows data and different data-derived measures of grid cell responsiveness, starting from the left with the baseline response to the middle column with maximal inhibition. (b) Data showing the temporary reduction in the gridness scores during MS inactivation, followed by recovery. (c) Simulation of the collapse in gridness, achieved by reduction in cell response rates to mimic reduced cholinergic transmission. (d,e) Simulations of the reduction in gridness scores in (d) by reduction of cell response rates, in (e) by changing the leak conductance. See the text for details.
    ||
  • image p611fig16.41 How back-propagating action potentials, supplemented by recurrent inhibitory interneurons, control both learning within the synapses on the apical dendrites of winning pyramidal cells, and regulate a rhythm by which associative read-out is dissociated from read-in. See the text for details.
    ||
  • image p612fig16.42 Macrocircuit of the main SOVEREIGN subsystems.
    || [reward input, drive input, drive representation (DR), visual working memory and planning system (VWMPS), visual form and motion system (VFMS), motor approach and orienting system (MAOS), visual input (VisIn), motor working memory and planning system (MWMPS), motor approach and orienting system (MAOS), motor plant (MotP), Proprioceptive Input (PropIn), Vestibular Input (VesIn), Environmental feedback (EnvFB). DR [incentive motivational learning-> [VWMPS, MWMPS], -> VFMS, -> MAOS], VWMPS [conditioned reinforcer learning-> DR, MAOS], VFMS [visual object categories-> VWMPS, reactive movement commands-> MAOS], MWMPS [conditioned reinforcer learning-> DR, planned movement commands-> MAOS], MAOS [motor map positions-> MWMPS, motor outflow-> MotP], VisIn-> VFMS, VesIn-> MAOS, EnvFB-> [VisIn, MotP, VesIn].
  • image p613fig16.43 The main visual form and motion processing stream mechanisms of SOVEREIGN, many of them described at length in previous chapters.
    || Render 3-D scene (R3DS), figure-ground separation (FGS), log-polar transform (LPT), Gaussian coarse-coding (GCC), Invariant visual target map (IVTM), What Fuzzy ART (WhatFuzz), body spatial coordinates (BSC), where reactive visual TPV storage (WRVTS), Directional transient cell network (DTCN), Motion direction hemifield map (MDHM), Hemifield left/right scoring (HLRS), reactive visual control signal (RVCS), Parvo/Magno/Erg competition (PMEC), Approach and Orient GOp (AOGp), GOm (GOm). R3DS [parvo-> FGS, magno-> DTCN], FGS-> [LPT, WRVTS], LPT-> GCC-> IVTM-> WhatFuzz, BSC-> [RVTS, PMEC], PMEC-> [gateRVTS-> RVTS, gateRVCS-> RVCS], DTCN-> MDHM-> HLRS, HLRS-> [PMEC, RVCS], AOGp-> gateRVTS, GOm-> gateRVCS.
  • image p613fig16.44 The main target position vector (TPV), difference vector (DV), and volitional GO computations in SOVEREIGN that bring together reactive and planned signals to control decision-making and action. See the text for details.
    || Reactive visual TPV (RVT), NETs (NETs), S-MV mismatch (SMVM), NETmv (NETmv), reactive visual TPV storage (RVTS), reactive DV1 (RD1), NET (NET), motivated what and where decisions (MWWD), Planned DV1 (PD1), tonic (Tonic), top-down readout mismatch (TDRM), Parvo gate (tonic) (PG), Orienting GOp offset (OGpO). RVT-> [NETs, RVTS], NETs-> [SMVM, NET], SMVM-> NET, NETmv-> SMVM, RVTS-> [NETs, RD1], NET-> [RD1, PD1, TDRM], MWWD-> PD1, PD1-> Tonic-> TDRMPG-> NETs, OGpO-> [NETmv, PD1].
  • image p614fig16.45 The main distance (d) and angle (a) computations that bring together and learn dimensionally-consistent visual and motor information whereby to make the currently best decisions and actions. See the text for details.
    || Reactive Visual TPV [m storage], NETm S-MV mismatch, MV mismatch, NETmv, PPVv, PPVm, Vestibular feedback, motor copy.
  • image p615fig16.46 SOVEREIGN uses homologous processing stages to model the (a) What cortical stream and the (b) Where cortical stream, including their cognitive working memories and chunking networks, and their modulation by motivational mechanisms. See the text for details.
    ||
  • image p615fig16.47 SOVEREIGN models how multiple READ circuits, operating in parallel in response to multiple internal drive sources, can be coordinated to realize a sensory-drive heterarchy that can maximally amplify the motivationally most currently favored option.
    ||
  • image p616fig16.48 SOVEREIGN was tested using a virtual reality 3D rendering of a cross maze (a) with different visual cues at the end of each corridor.
    ||
  • image p616fig16.49 The animat learned to convert (a) inefficient exploration of the maze into (b) an efficient direct learned path to the goal.
    ||
  • image p617fig16.50 The perirhinal and parahippocampal cortices enable adaptively timed reinforcement learning and spatial navigational processes that are modeled by Spectral Spacing models in the What and Where cortical streams, respectively, to be fused in the hippocampus.
    || What and Where inputs to the hippocampus (Diana, Yonelinas, Ranganath 2007). Adaptively timed conditioning and spatial navigation. Hippocampus <-> Entorhinal Cortex <-> [Perirhinal Cortex <-> what, Parahippocampal Cortex <-> where].
  • image p627tbl17.01 Homologs between reaction-diffusion and recurrent shunting cellular network models of development.
    || Homologs, by rows (reaction-diffusion : recurrent shunting net):
    activator : excitatory activity
    inhibitor : inhibitory activity
    morphogenic source density : inputs
    firing of morphogen gradient : contrast enhancement
    maintenance of morphogen gradient : short-term memory
    power or sigmoidal signal functions : power or sigmoidal signal functions
    on-center off-surround interactions via diffusion : on-center off-surround interactions via signals
    self-stabilizing distributions of morphogens if inhibitors equilibrate rapidly : short-term memory pattern if inhibitors equilibrate rapidly
    periodic pulses if inhibitors equilibrate slowly : periodic pulses if inhibitors equilibrate slowly
    regulation : adaptation
  • image p628fig17.01 A hydra
    ||
  • image p628fig17.02 Schematics of how different cuts and grafts of the normal Hydra in (a) may (*) or may not lead to the growth of a new head. See the text for details.
    ||
  • image p629fig17.03 How an initial morphogenetic gradient may be contrast enhanced to exceed the threshold for head formation in its most active region.
    || head formation threshold, final gradient, initial gradient.
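Contrast enhancement of a gradient is exactly what a recurrent shunting on-center off-surround network with a faster-than-linear signal function does (the reaction-diffusion/shunting-net homolog of the table above). A numerical sketch with illustrative parameters (A, B, f(x) = x^2, and the initial gradient are my assumptions): the most active region comes to dominate the normalized pattern, so only it can exceed a head-formation threshold.

```python
import numpy as np

# Recurrent competitive field: dx_i/dt = -A*x_i + (B - x_i)*f(x_i) - x_i * sum_{k!=i} f(x_k)
A, B, dt = 0.05, 1.0, 0.01
x = np.array([0.2, 0.4, 0.6, 0.8])           # initial morphogenetic gradient
initial_share = x.max() / x.sum()
for _ in range(3000):
    f = x ** 2                               # faster-than-linear feedback signal
    x += dt * (-A * x + (B - x) * f - x * (f.sum() - f))
final_share = x.max() / x.sum()
# the initially most active site keeps the maximum and its share of total
# activity grows: the gradient is contrast-enhanced, as in the figure
```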
  • image p630fig17.04 Morphogenesis: more ratios (Wolpert 1969). Shape preserved as size increases. French flag problem. Use cellular models! (Grossberg 1976, 1978) vs chemical or fluid reaction-diffusion models (Turing 1952; Gierer, Meinhardt 1972).
    ||
  • image p631fig17.05 How a blastula develops into a gastrula. See the text for details.
    || 1. The vegetal pole of the blastula flattens, [Animal, vegetal] hemisphere, blastocoel. 2. Some cells change shape and move inward to form the archenteron, Blastopore. 3. Other cells break free, becoming mesenchyme. 4. Then extensions of mesenchyme cells attach to the overlying ectoderm, Archenteron. 5. The archenteron elongates, assisted by the contraction of mesenchyme cells. 6. The mouth will form, where the archenteron meets ectoderm. 7. The blastopore will form the anus of the mature animal. [Mesenchyme, Ectoderm, Endoderm, Blastocoel, Archenteron, Mesenchyme]. Concept 38.3, www.macmillanhighered.com
  • image p634fig17.06 Summing over a population of cells with binary output signals whose firing thresholds are Gaussianly distributed (left image) generates a total output signal that grows in a sigmoidal fashion with increasing input size (dashed vertical line).
    || How binary cells with a Gaussian distribution of output thresholds generate a sigmoidal population signal. [# of binary cells with threshold T, Total output signal] vs Cell firing thresholds T. Cell population with firing thresholds Gaussianly distributed around a mean value. As input increases (dashed line), more cells in population fire with binary signals. Total population output obeys a sigmoid signal function f.
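This caption can be checked directly: the summed binary outputs of cells with Gaussian-distributed thresholds approximate the Gaussian CDF, a sigmoid, as the input grows. A sketch with illustrative parameters (threshold mean 0, standard deviation 1, and the sample size are my assumptions):

```python
import math, random

random.seed(1)
# a population of binary cells, each with its own Gaussian-distributed threshold
thresholds = [random.gauss(0.0, 1.0) for _ in range(100_000)]

def population_output(I):
    """Fraction of binary cells whose firing threshold the input I exceeds."""
    return sum(1 for T in thresholds if I > T) / len(thresholds)

def gaussian_cdf(x):
    # the sigmoid the population signal should approach in the large-N limit
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

inputs = [-3.0, -1.0, 0.0, 1.0, 3.0]
outputs = [population_output(I) for I in inputs]
# outputs rise sigmoidally from ~0 through ~0.5 toward ~1 as the input grows
```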
  • Introduction webPage: the questions driving this "webSite" (a collection of webPages, defined by the menu above) are:
  • How difficult would it be to augment "Transformer Neural Networks" (TrNNs) with Grossberg's concepts? This section is repeated in the Introduction webPage.
  • Grossberg 2021 p229c2h0.60 SMART computer simulations demonstrate that a good enough match of a top-down expectation with a bottom-up feature pattern generates an attentive resonance during which the spikes of active cells synchronize in the gamma frequency range of 20-70 Hz (Figure 5.40). Many labs have reported a link between attention and gamma oscillations in the brain, including two articles published in 2001, one from the laboratory of Robert Desimone when he was at the National Institute of Mental Health in Bethesda (Fries, Reynolds, Rorie, Desimone 2001), and the other from the laboratory of Wolf Singer in Frankfurt (Engel, Fries, Singer 2001).
  • Grossberg 2021 p081c2h0.66 sub-section: How does one evolve a computational brain?
    The above discussion illustrates that no single step of theoretical derivation can derive a whole brain. One needs a method for deriving a brain in stages, or cycles, much as evolution has incrementally discovered ever more complex brains over many millions of years. The following theoretical method has been successfully applied many times since I first used it in 1957. It embodies a kind of conceptual evolutionary process for deriving a brain.

    Because "brain evolution needs to achieve behavioural success", we need to start with data that embody indices of behavioral success. That is why, as illustrated in Figure 2.37 Modelling method and cycle, one starts with Behavioral Data from scores or hundreds of psychological experiments. These data are analyzed as the result of an individual adapting autonomously in real time to a changing world. This is the Art of Modeling. It requires that one be able to infer from static data curves the dynamical processes that control individual behaviors occurring in real time. One of the hardest things that I teach my students is "how to think in real time" to be able to carry out this speculative leap.

    Properly carried out, this analysis leads to the discovery of new Design Principles that are embodied by these behavioral processes. The Design Principles highlight the functional meaning of the data, and clarify how individual behaviors occurring in real time give rise to these static data curves.

    These principles are then converted into the simplest Mathematical Model using a method of minimal anatomies, which is a form of Occam's Razor.
  • see incorporate reader questions into theme webPage
    see Navigation: [menu, link, directory]s
  • p153 Howell: grepStr
  • p190 Howell: [neural microcircuits, modal architectures] used in ART :
    bottom-up filters | top-down expectations | purpose
    instar learning | outstar learning | p200fig05.13 Expectations focus attention: [in, out]star learning is often used to learn the adaptive weights. Top-down expectations select, amplify, and synchronize expected patterns of critical features, while suppressing unexpected features.
    LGN Lateral Geniculate Nucleus | V1 cortical area | p192fig05.06 focus attention on expected critical features
    EC-III entorhinal stripe cells | CA1 hippocampal place cells | p600fig16.36 the entorhinal-hippocampal system has properties of an ART spatial category learning system, with hippocampal place cells as the spatial categories
    auditory orienting arousal | auditory category | p215fig05.28 How a mismatch between bottom-up and top-down patterns can trigger activation of the orienting system A and, with it, a burst of nonspecific arousal to the category level. ART Matching Rule: a top-down mismatch can suppress part of the F1 STM pattern; F2 is reset if the degree of match < vigilance.
    auditory stream with/without [silence, noise] gaps | perceived sound continues? | p419fig12.17 The auditory continuity illusion: backwards in time - how does a future sound let a past sound continue through noise? Resonance!
    visual perception, learning and recognition of visual object categories | motion perception, spatial representation and target tracking | p520fig14.02 pART predictive Adaptive Resonance Theory. Many adaptive synapses are bidirectional, thereby supporting synchronous resonant dynamics among multiple cortical regions. The output signals from the basal ganglia that regulate reinforcement learning and gating of multiple cortical areas are not shown. (figure colours: red - cognitive-emotional dynamics; green - working memory dynamics; black - see the [bottom-up, top-down] lists)
    EEG with mismatch, arousal, and STM reset events | expected [P120, N200, P300] event-related potentials (ERPs) | p211fig05.21 Sequences of P120, N200, and P300 event-related potentials (ERPs) occur during oddball learning EEG experiments, under conditions that ART predicted should occur during sequences of mismatch, arousal, and STM reset events, respectively.
    Cognitive | Emotional | p541fig15.02 nSTART neurotrophic Spectrally Timed Adaptive Resonance Theory: the hippocampus can sustain a Cognitive-Emotional resonance that can support "the feeling of what happens" and knowing what event caused that feeling. The hippocampus enables adaptively timed learning that can bridge a trace conditioning gap, or other temporal gap, between the Conditioned Stimulus (CS) and Unconditioned Stimulus (US).

    background colours in the table signify :
    Note that a separate webPage lists a very small portion of Stephen Grossberg's quotes.
  • J.E. Kaal, A. Otte, J.A. Sorensen, J.G. Emming 2021 "The nature of the atom" www.Curtis-Press.com, 268pp ISBN 978-1-8381280-2-9 https://StructuredAtom.org/
  • rationalwiki.org "Quantum consciousness" (last update 07Nov2022, viewed 16Jul2023)
    also critiques of the article above
  • Terrence J. Sejnowski 21Aug2023 "Large Language Models and the Reverse Turing Test", Neural Computation (2023) 35 (3): 309–342 https://direct.mit.edu/neco/issue (a local copy is also kept in case the original link fails)
  • Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin 12Jun2017 "Attention Is All You Need" [v5] Wed, 6 Dec 2017 03:30:32 UTC https://arxiv.org/abs/1706.03762
  • Wikipedia Consciousness
  • Menu
  • Grossbergs list of [chapter, section]s.html - Note that the links on this webPage can be used to individually view all captioned images.
  • directory of captioned images - users can easily view all of the captioned images, especially if they are downloaded onto their computer. Many image viewers have [forward, backward] arrows to go through these sequentially, or right-click to open a link in a new window.
  • core bash script for extracting captions from the webPage listing, converting them to images, then vertically appending them to the figures.
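    As a rough illustration of that caption-extraction step (a sketch only: the function name `get_caption` and the one-caption-per-line listing format are assumptions, not the actual script), a caption keyed by figure id can be pulled out with grep/sed, and ImageMagick's `caption:` generator can then render and append it:

    ```shell
    # Sketch only: assumes a listing file with one caption per line,
    # each prefixed by its figure id (e.g. "p200fig05.13 ...").
    # The real script in this project may differ.

    # print the caption text for one figure id
    get_caption() {
      local id="$1" listing="$2"
      grep -m1 "^${id}" "$listing" | sed "s/^${id}[[:space:]]*//"
    }

    # The extracted text can then be rendered and appended below the
    # figure with ImageMagick, along the lines of:
    #   w=$(identify -format '%w' "$fig")
    #   convert -background white -size "${w}x" caption:"$cap" cap.png
    #   convert "$fig" cap.png -append "${fig%.png}_captioned.png"
    ```

    The `caption:` pseudo-format wraps long text to the requested width, which is why the figure's own width is measured first with `identify`.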
  • my bash utility to [position, move] windows. This is normally used to start up 6 workspaces on my computer (Linux Mint Debian Edition), each with 5-10 apps in separate windows.
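    One common way to script window placement on Linux is `wmctrl`; the sketch below (the function name and dry-run behaviour are assumptions, not the actual utility) builds the `wmctrl` command for one window, where `-e` takes gravity,x,y,width,height:

    ```shell
    # Sketch of placing a window from a script (not the actual utility).
    # wmctrl -r matches a window by title; -e sets gravity,x,y,width,height.
    place_window() {
      local title="$1" x="$2" y="$3" w="$4" h="$5"
      # print the command instead of running it, so it can be inspected
      echo wmctrl -r "$title" -e "0,${x},${y},${w},${h}"
    }
    ```

    Dropping the `echo` would actually move the window; a workspace can be selected first with `wmctrl -s N`, which is one way to set up several workspaces each with their own set of apps.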
  • Prepared themes with links to the captioned images - there are a huge number of themes from the book to focus on. I have prepared a few as examples.
  • What is consciousness? - video example not ready as of 30Aug2023. I save videos as "ogv/ogg" files, an open standard format. "VLC media player" is the program that I use to view them. I have found that although some of the standard video viewers complain, most can be coaxed into playing ogv files.
  • Navigation: [menu, link, directory]s
  • Theme webPage generation by bash script
  • Notation for [chapter, section, figure, table, index, note]s
  • incorporate reader questions into theme webPages
  • A very primitive bash script is used to generate the search results for ALL themes in the Themes webPage. Many readers will already have far better tools for this from the Computational Intelligence area etc.
    Because the theme webPage is automatically generated, and frequently re-generated as I update the list of themes and sources, I do NOT edit the file directly. The output format can be confusing, due to the special formatted [chapter, section] headings, and large tables which will keep the readers guessing whether they are still within the theme they want to peruse (as per the Table of Contents). Perhaps I can upgrade the searches in time to reduce the confusion, and to split themes in a better way.
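    The search pass itself can be sketched as follows (the function name `collect_themes` and the plain-text output format are assumptions; the project's actual script emits HTML): for each theme phrase, grep the source files and emit one labelled block.

    ```shell
    # Sketch only: for each theme phrase in the list file ($1, one phrase
    # per line), collect every matching line from the source files given
    # as the remaining arguments, writing one labelled block per theme.
    collect_themes() {
      local theme_list="$1"; shift
      local theme
      while IFS= read -r theme; do
        [ -n "$theme" ] || continue
        printf '== %s ==\n' "$theme"           # theme heading
        grep -i -h -- "$theme" "$@" || true    # every matching source line
        printf '\n'
      done < "$theme_list"
    }

    # usage (hypothetical file names):
    #   collect_themes theme_list.txt notes/*.txt > themes_page.txt
    ```

    Case-insensitive whole-phrase matching like this is exactly why long tables can leave readers unsure which theme a hit belongs to: every line containing the phrase is copied out, with no surrounding [chapter, section] context.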
  • list of [chapter, section]s
  • list of [figure, table]s
  • selected index items - I have NO intention of re-typing the entire index!
  • Grossberg quotes
  • reader Howell notes - this is an example of building your own webPage of [note, comment, thought]s while reading the book, which can then be added to the bash script for searches. These notes are in addition to the [figure, table] captions, are mostly comprised of text within the images, but also include quotes of text from the book. Rarely, they include comments by Howell, preceded by "Howell".
    The latter are distinct from "reader's notes" (see, for example, the Grossberg quotes). The reader may want to create their own file of comments based on this example, or augment this list with their [own, others'] notes. More importantly, as an easy first adaptation of the "Grossbergs [core, fun, strange] concepts.html" thematic listings, you probably want to get rid of Howell's notes.
  • download the entire webDirectories below to some directory on your filesystem, say {yourDir} : TrNNs_ART , bin
  • adapt the bash script "thematic [search, collect]s.sh" to your own system, and run it. This will require re-defining several environment variables for your setup, such as :