  • Summary comments
  • Play with the [time, mind]-bending perspective yourself
  • Ratio of actual to semi-log detrended data : [advantages, disadvantages]
  • Future potential work
  • Comparison of [TradingView, Yahoo finance] data
  • [data, software] cart [description, links]
  • At present, the full video (540 Mbytes) is too slow (dragging, deep voices, slow video), and is too cumbersome for moving from one time to another. So until I convert to a different video [codec, container] format (perhaps the H.264 codec & .MKV container? - a conversion sketch follows below) or find a video viewer that is better suited to large files, the videos for each scene are posted instead (see the listing below), giving better throughput and ease of going from one scene to another by separate loading. Microsoft Windows (and hopefully Macintosh?) users can view this by downloading the VLC media viewer. "... VLC is a free and open source cross-platform multimedia player and framework that plays most multimedia files, and various streaming protocols. ..." At present, this full video cannot be moved forward and back within the video, something I will fix when I get the time, as the ability to go back over material and skip sections is particularly important with this video. In the meantime, the separate "Scenes" listed below can be used by moving back and forward.
  • The QNial programming language was used to [direct, sequence, conduct, whatever] the video production, together with a LibreOffice Calc spreadsheet that acts as a great front-end for preparing code specific to the video sequencing. These can be found in the Programming code directory listing, and will be handy for anyone interested in the details of how I produced the video. QNial comes from Queen's University.
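If you want to try the [codec, container] conversion mentioned above, a minimal sketch with ffmpeg might look like the following (assuming ffmpeg is installed; the filenames are hypothetical):

#!/bin/bash
# Hypothetical sketch: re-encode a large .ogv video to the H.264 codec
# in an MKV container, which may seek better in most players.
# -crf 23 is a reasonable default quality; lower = better quality, bigger file.
ffmpeg -i "full presentation.ogv" \
       -c:v libx264 -preset medium -crf 23 \
       -c:a aac -b:a 128k \
       "full presentation.mkv"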
  • Summary - my commentary as part of Perry Kincaid's webinar
  • Key files - to [view, hear] my commentary
  • References - unfortunately, the list is very incomplete, but does provide some links. Perry Kincaid, founder of KEI Networks, organised a PUBLIC webinar, "Alberta is high on hydrogen : Introducing hydrogen to Alberta".
  • Slide show - open source presentation file format .odp. Microsoft PowerPoint will probably complain, but should be able to load it.
  • Voice narrative - in mp3 audio file format.
  • Adobe pdf - file format.
  • Voice script - text file with the script for the voice commentary. Also included are notes for some of the slides that were not commented on (marked by "(X)"). Click to view most files related to this presentation.
    Ben Davidson of Suspicious Observers posted 3 brilliant videos on nearby stellar flaring, as further support for a potential "micro-flare" or other solar disruption to explain the 12,000 year [mythological observations, paleontology, geology, planetary] quasi-periodicity of disruptive events on Earth, which by appearances may be "imminent". I like Ben's three videos :
  • 24Dec2019 DISASTER CYCLE | Signs in the Sky Now
  • 26Dec2019 Galactic Sheet Impact | Timing the Arrival
  • 27Dec2019 Nearby Superflares | What Do They Mean
    If we take an "Electric Universe" perspective, in particular Wal Thornhill's, can stellar [apparent birth, brightening, dimming, apparent death] also provide further potential evidence? Naturally, we view stars as they were long ago, given light travel time. Note that Donald Scott's perspective is also relevant here.
    ALL videos are provided in ogv file format, which is of higher quality, and is easier and more natural for me in a Linux environment. Microsoft Windows (and hopefully Macintosh?) users can view this by downloading the VLC media viewer. "... VLC is a free and open source cross-platform multimedia player and framework that plays most multimedia files, and various streaming protocols. ..."
  • Toolsets can be browsed via: Past and Future Worlds directory. Perhaps these may be [of interest, help] to others putting together a film from Linux-based free software.
  • Toolsets can be browsed via: Big Data, Deep Learning, and Safety directory. Perhaps these may be [of interest, help] to others putting together a film from Linux-based free software.
  • Toolsets can be browsed via: Icebreaker unchained directory. Perhaps these may be [of interest, help] to others putting together a film from Linux-based free software.
  • Howell - TradingView PineScript [description, problem, debug].html
  • Howell - TradingView PineScript of priceTimeFractals.html
  • 0_PineScript notes.txt - details of software [code, bug, blogSolutions]
  • 0_PineScript errors.txt - [error, solution]s that keep coming back
  • Howell - References related to Puetz [H]UWS.html
  • Kivanc Ozbilgics Turtle Trade PineScript - documention.txt
  • Kivanc Ozbilgics Turtle Trade PineScript, plus 8-year detrended SP500.txt
  • RicardoSantos, Function Polynomial Regression.txt
  • sickojacko maximum [,relative] drawdown calculating functions.txt
  • TradingView auto fib extension.txt
  • Perhaps more importantly are lessons that can be learned from my own failures, and some of the techniques I use. This section also appears in my webPage for users, and also applies to programmers. Users only have to set up the basic chart and symbols in TradingView based on my chart PuetzUWS [time, price] multiFractal mirrors, SPX 1872-2020. To do so you must be a TradingView subscriber. After that, copy over my PineScript coding, which you can find on my TradingView page - click on "SCRIPTS", and select my script "PuetzUWS [time, price] multifractal of detrended SPX 1871-2020". Further setup details are given below.
    Download symbol data (like [TVC:[USOIL, GOLD], NASDAQ: NDX]) from [TradingView, yahoo finance, etc]. My own data for SPX is in my LibreCalc spreadsheet SP500 1872-2020 TradingView, 1928-2020 yahoo finance.ods. Users can simply follow standard TradingView guide instructions to install the Pine Script program that super-imposes fractal [time, price] grids on their charts (a small detrending sketch follows below). For details, see Howell - TradingView PineScript [description, problem, debug].html.
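As a rough illustration of the "ratio of actual to semi-log detrended data" idea used throughout these webPages, here is a minimal bash+awk sketch, assuming a whitespace-separated two-column [year, price] text file (the filename is hypothetical). It fits a least-squares line to log(price), then prints each price's ratio to the fitted exponential trend:

#!/bin/bash
# Hypothetical sketch: semi-log detrend of a [year, price] series.
# Pass 1 fits log(price) = a + b*year by least squares;
# pass 2 prints year, price, and the ratio of price to the fitted trend.
awk '
NR==FNR { n++; sx+=$1; sy+=log($2); sxx+=$1*$1; sxy+=$1*log($2); next }
FNR==1  { b=(n*sxy-sx*sy)/(n*sxx-sx*sx); a=(sy-b*sx)/n }
        { trend=exp(a+b*$1); printf "%s %s %.4f\n", $1, $2, $2/trend }
' "SP500 opening price.dat" "SP500 opening price.dat"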
  • Special comments
  • Regular [1,6] month market views
  • https://tr.tradingview.com/script/pB5nv16J/?utm_source=notification_email&utm_medium=email&utm_campaign=notification_pubscript_update
    https://www.tradingview.com/script/12M8Jqu6-Function-Polynomial-Regression/
  • Key [results, comments]
  • How can the Great Pricing Waves be correlated with a "dead" system?
  • Ratio of actual to semi-log detrended data : [advantages, disadvantages]
  • Future potential work
  • Comparison of [TradingView, Yahoo finance] data
  • [data, software] cart [description, links]
    NOTE : The model here is DEFINITELY NOT suitable for application to [trade, invest]ing! I typically use LibreOffice Calc spreadsheets to [collect, rearrange, simple transform] data. For this project : 1_Fischer 1200-2020.ods
    This is susceptible to serious bias in selecting the [start, end] dates for each segment. See the spreadsheet 1_Fischer 1200-2020.ods.
    The year ~1926 was taken as the [start, end] point for my 1872-2020 detrend (StockMkt Indices 1871-2022 PuetzUWS2011), so I use it here as well. (23Feb2023 - the original text said 1940; perhaps it is still like that?)
    This is easy with the spreadsheet - one column of regression results per segment. I use 10 year intervals per segment, but you only really need the [start, end] dates [-,+] 20 years. The extra 20 years extends the segments at both ends for visual clarity. For an example, see the spreadsheet 1_Fischer 1200-2020.ods, sheet "Fig 0.01 SLRsegments".
    Save the "SLRsegments" to a data file that can be used by GNUplot. Example : Fig 0.01 line segments for GNUplots.dat. Notice that column titles can use free-format text, except for the comma, which separates columns.
  • Save data of 1_Fischer 1200-2020.ods to a data file, example : Fig 0.01 linear regression raw data.dat
  • For each curve, Fischer linear regressions.ndf (23Feb2023 no longer exists?) - a special operator (procedure) is created to select segments
  • text data file : Fig 0.01 Price of consumables in England 1201-1993.dat
  • gnuplot script : Fig 0.01 Price of consumables in England 1201-1993.plt
  • graph output : Fig 0.01 Price of consumables in England 1201-1993.png
  • Fig 0.01 Price of consumables in England 1201-1993 detrended.plt - This covers the medieval to modern era, and is used to collect curves for different data. The restricted time-frame provides a more accurate view of that period.
  • 1850 BC to 2020 AD prices detrended.plt - Obviously this covers a variety of [regions, time-frames]. What I really need is data going back 7,500 years (~3 cycles of the 2,400 year Hallstatt cycle), corresponding to a 2006 project on the rise and fall of civilisations (_Civilisations and the sun); if I find [time to do it, data] this would be nice. A minimal gnuplot sketch of the workflow above follows below. https://www.digitizeit.xyz/ https://www.gimp.org http://www.gnuplot.info/
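To illustrate the [data file, gnuplot script, graph output] workflow above, here is a minimal sketch run from bash (the filenames echo the Fig 0.01 examples above, but the plot details are my assumptions):

#!/bin/bash
# Hypothetical sketch of the gnuplot step: plot the raw price series and the
# piecewise linear-regression segments on a semi-log y axis, output to png.
gnuplot <<'EOF'
set datafile separator ","
set logscale y
set title "Fig 0.01 Price of consumables in England 1201-1993"
set xlabel "year"
set ylabel "price index (semi-log)"
set terminal png size 1200,800
set output "Fig 0.01 Price of consumables in England 1201-1993.png"
plot "Fig 0.01 linear regression raw data.dat" using 1:2 with lines title "raw data", \
     "Fig 0.01 line segments for GNUplots.dat" using 1:2 with lines title "SLR segments"
EOF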
  • Key [results, comments]
  • Play with the [time, mind]-bending perspective yourself
  • Ratio of actual to semi-log detrended data : [advantages, disadvantages]
  • Future potential work
  • Comparison of [TradingView, Yahoo finance] data
  • [data, software] cart [description, links]
    Wow! Even knowing that the [eyes, mind] often see patterns that aren't really there, I was struck by these results.
  • 7,500 years of history - This is the same challenge that I had with a [lunatic, scattered, naive] model of history by my father and me, where it was necessary to cut ?150? years out of a 7,500 year time series to "kind of make it fit". Steven Yaskall recognized us as the "two fools who rushed in" in his book "Grand Phases on the Sun". We were justifiably proud of that.
  • Smooth sinusoidal curves and regular periodicities - It seems that mathematicians and scientists still [think, apply] models assuming ideal waveforms, even when [their tools, reality] do not (Stephen Puetz's work being an exception). While most results are provided in sections above, links to data [spreadsheets, text files] and software [???, source code] are listed below, along with brief comments. A full listing of files (including other SP500 web-pages) can be seen via this Directory
  • TradingView data text file and spreadsheet - I had to upgrade my TradingView subscription to Pro+ to download the data for years prior to 1928, as I couldn't do so with the basic subscription.
  • Yahoo finance data (23Feb2023 the text file has been lost, but the data is in the linked spreadsheet with TradingView data). I was happy to have another "somewhat independent" data source, even if they are both from the same S&P or other source. This really helps as a check on my data treatment (see the section above "Comparison of [TradingView, Yahoo finance] data").
  • TradingView Pine language - You are probably wondering why I didn't use it for this analysis.
  • gnuplot - I used this for the graphs on this webPage.
  • gimp (GNU image manipulation program) is what I used for the SP500 time-section transparencies. For more details, see the section above "Play with the [time, mind]-bending perspective yourself".
  • gnuplot.sh is the tiny bash script used to select gnuplot scripts (a minimal sketch of such a selector appears after this list). My other bash scripts can be found here.
  • QNial programming language - Queen's University's Nested Interactive Array Language.
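For readers curious what such a tiny selector script might look like, here is a minimal sketch (not my actual gnuplot.sh; the menu approach is an assumption):

#!/bin/bash
# Hypothetical sketch: menu-select a gnuplot script in the current directory, then run it.
select plt in *.plt; do
    [ -n "$plt" ] && gnuplot -persist "$plt"
    break
done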
  • multpl.com
  • Qiang Zhang 30Jan2021 Price Earning Ratio model - This is similar to, but better than, my own model below. His github has several other interesting investment-related postings, including Black-Scholes derivative pricing. See Howell - SP500 PE Shiller ratios versus 10 year Treasury bond yields, with earnings growth & discount factors.ods. (A standard starting formula is sketched after these notes.)
  • time-varying [SP500_growFuture, etc] - there is little chance of growth rates lasting more than a year or two, especially when their magnitude exceeds ~20%. Frankly, they are constantly changing year-to-year in a big way. The time series approach mentioned below is a simple basis for anticipating this in a statistical manner as a start. Other approaches get more into predictions based on some concept or another.
  • SP500 index, variable [dividends, internal investment & stock buybacks, earnings] - I won't go into the details here.
  • Elliot Wave Theory, notably Robert Prechter (including Socionomics). Among many, many fun topics, the arguments presented about how the Fed FOLLOWS interest rates, only giving the impression of leading them, are especially relevant to this web-page.
  • Harry S. Dent Jr - demographics, with astounding successes in the past (at least twice on a decade-or-longer-out basis), perhaps a bit muffled over the last decade.
  • Stephen Puetz - Universal Wave Series: stunning results across a huge swath of subject areas!! It reminds me of the system of 20+ Mayan calendars.
  • Brian Frank of Frank funds - "Slaughterhouse-Five (Hundred), Passive Investing and its Effects on the U.S. Stock Market" - Index fund [distortion, eventual destabilization] of the markets. This was a recent fascinating read for me. (MarketWatch 10Apr2020)
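As background for the [PE ratio, bond yield, growth] notes above, a minimal worked formula, assuming the standard Gordon growth model (my choice of illustration, not necessarily the model used in the spreadsheets):

% Gordon growth model (assumed illustration): price P as a discounted
% stream of payouts D_1 growing at rate g, discounted at rate r (r > g).
P = \frac{D_1}{r - g}
\qquad\Rightarrow\qquad
\frac{P}{E} = \frac{D_1 / E}{r - g}
% e.g. payout ratio D_1/E = 0.5, r = 0.08, g = 0.04  =>  P/E = 0.5/0.04 = 12.5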
    I will change this every six months or a year, just to profile my different projects, past and ongoing. See also past home page highlights.
    04Jul202 Edo Kaal periodic table of the elements


    Icebreaker Unchained : we should have lost WWII

    I have not yet made a webPage for this project (so many years after it was shelved in Aug2015!), but [documentation, information, unfinished scripts] are provided in the Stalin supported Hitler (video production) directory and Icebreaker directory (which should be combined into one). Two very simple animations took sooooo loooong to produce. They total only ~1 minute for both "A year of stunning victories" map scan-zooms (of Poland, the false war, the lowlands, France and Dunkirk). Worse, the unfinished part 1 of 6 videos (~1 hour length) wasn't completed.
    25May2021 Here are two example graphs of TSLA options that I have been working on. I am far from getting into options trading; I just want to learn more about the market. For more details (but no webPage yet), see QNial software coding for options data processing (also "winURL yahoo finance news download.ndf" in the same directory for yahoo finance news downloads), and several graphs of Tesla options.

    1872-2020 SP500 index, ratio of opening price to semi-log detrended price

    David Fischer - The Great Pricing Waves 1200-1990 AD

    "Mega-Life, Mega-Death, and the invisible hand of the Sun: Towards a quasi-predictive model for the rise and fall of civilisations". Click to see a full-sized image of the chart in your browser. (~3.5 feet squared on my kitchen wall. My printed-out version includes hand-annotated comparisons to the Mayan calendar and other references.)
    12Sep2020: 1872-2020 SP500 index, ratio of opening price to semi-log detrended price


  • help identify program coding, as distinct from, or hybridized with, protein coding within [DNA, mRNA]. While this is mostly an issue for my MindCode project, callerID-SNNs fit nicely into, and may pragmatically help, that context.
  • extra-neuron [Turing, von Neumann]-like computations based on the local neural network [structure, connection]s. This was a focus of my previous MindCode and earlier work (eg. Genetic specification of recurrent neural networks, a draft version of a WCCI2006 conference paper), but isn't the focus here.
  • intra-neuron [Turing, von Neumann]-like computations based on the "focus" neuron. A mid-term objective is to tie caller-IDs to the work of Stephen Grossberg, as described in my webPage Overview - Stephen Grossberg. 10Nov2023 Maybe I can use a prime number basis for [time, synapse] fractals, as a contrast to Stephen Puetz's Universal Wave Series.
  • Howell 2006 "Genetic specification of recurrent neural networks" (draft version of my WCCI2006 conference paper)
  • MindCode 2023 description
  • MindCode 2023 program coding (QNial programming language) - this is a simple one-line listing of each operator for each file
  • callerID-SNNs Introduction (this webPage)
  • callerID-SNNs program coding (QNial programming language)
  • bash library: file operations used extensively, sometimes hybridized with the QNial programming language
  • Genetic
  • Junk
  • "... Consciousness, at its simplest, is sentience and awareness of internal and external existence.[1] However, its nature has led to millennia of analyses, explanations and debates by philosophers, theologians, linguists, and scientists. Opinions differ about what exactly needs to be studied or even considered consciousness. ..."(Wiki2023)
  • Only a very small number of theories of consciousness are listed on this webPage, compared to the vast number of [paper, book]s on the subject coming out all of the time. "Popular theories", as listed on Wikipedia, are shown, assuming that this will be important for non-experts. But the only ones that really count for this webSite are in the "Priority model of consciousness".
    Readers will have completely different [interest, priority]s than I, so they would normally have a different "Priority model of consciousness", and different rankings of the consciousness theories. To understand my selections and rankings, see the Introduction to this webSite.
  • For this webSite, I like the description in Wikipedia (Wiki2023), quoted above.
    16Jul2023 I am currently lacking a coherent overall webPage for Grossberg's work. The following listing is taken from What is consciousness: from historical to Grossberg, and repeats some of the points in this section above : conscious ART (cART), etc
  • A surprisingly small number of neural architectures can simulate [extensive, diverse] [neuro, psycho]logical data at BOTH the [sub, ]conscious levels, and for [perception, action] of [sight, auditory, touch, language, cognition, emotion, etc]. This is similar to what we see in physics.
  • simple grepStr search results (Wiki2023) :
    Byoung-Kyong Min 2010 "A Thalamic reticular networking model of consciousness" (Wiki2023)
    Wikipedia: Models of consciousness, retrieved Apr2023 (Wiki2023)
    "... The Neural correlates of consciousness (NCC) formalism is used as a major step towards explaining consciousness. The NCC are defined to constitute the minimal set of neuronal events and mechanisms sufficient for a specific conscious percept, and consequently sufficient for consciousness. In this formalism, consciousness is viewed as a state-dependent property of some undefined complex, adaptive, and highly interconnected biological system.[3][4][5] ..." (Wiki2023, full article: Wiki2023 - Neural_correlates_of_consciousness, also cited by Grossberg 2021)
    "... Another idea that has drawn attention for several decades is that consciousness is associated with high-frequency (gamma band) oscillations in brain activity. This idea arose from proposals in the 1980s, by Christof von der Malsburg and Wolf Singer, that gamma oscillations could solve the so-called binding problem, by linking information represented in different parts of the brain into a unified experience.[80] Rodolfo Llinás, for example, proposed that consciousness results from recurrent thalamo-cortical resonance where the specific thalamocortical systems (content) and the non-specific (centromedial thalamus) thalamocortical systems (context) interact in the gamma band frequency via synchronous oscillations.[81] ..." (Wiki2023 - Consciousness#Neural_correlates)
    Howell 19Jul2023 - Note that Grossberg ...
    "... Integrated Information Theory (IIT) offers an explanation for the nature and source of consciousness. Initially proposed by Giulio Tononi in 2004, it claims that consciousness is identical to a certain kind of information, the realization of which requires physical, not merely functional, integration, and which can be measured mathematically according to the phi metric. ..." (UTM - Integrated information theory)
    "... Integrated information theory (IIT) attempts to provide a framework capable of explaining why some physical systems (such as human brains) are conscious,[1] why they feel the particular way they do in particular states (e.g. why our visual field appears extended when we gaze out at the night sky),[2] and what it would take for other physical systems to be conscious (Are other animals conscious? Might the whole Universe be?).[3] ... In IIT, a system ... ..." (Wiki2023)
    Wikipedia lists numerous criticisms of IIT, but I have not yet quoted from them, other than to mention the authors : Wikipedia: Models of consciousness
    "... Sociology of human consciousness uses the theories and methodology of sociology to explain human consciousness. The theory and its models emphasize the importance of language, collective representations, self-conceptions, and self-reflectivity. It argues that the shape and feel of human consciousness is heavily social. ..."(Wiki2023, full webPage Wiki2023
    "... Daniel Dennett proposed a physicalist, information processing based multiple drafts model of consciousness described more fully in his 1991 book, Consciousness Explained. ..." (Wiki2023, full webPage Wiki2023)
    "... Functionalism is a view in the theory of the mind. It states that mental states (beliefs, desires, being in pain, etc.) are constituted solely by their functional role – that is, they have causal relations to other mental states, numerous sensory inputs, and behavioral outputs. ..." (Wiki2023, full webPage Wiki2023)
    "... Electromagnetic theories of consciousness propose that consciousness can be understood as an electromagnetic phenomenon that occurs when a brain produces an electromagnetic field with specific characteristics.[7][8] Some electromagnetic theories are also quantum mind theories of consciousness.[9] ..." (Wiki2023)
    "... "No serious researcher I know believes in an electromagnetic theory of consciousness,"[16] Bernard Baars wrote in an e-mail.[better source needed] Baars is a neurobiologist and co-editor of Consciousness and Cognition, another scientific journal in the field. ..." (Wiki2023)
    Stuart Hameroff separately worked in cancer research and anesthesia, which gave him an interest in brain processes. Hameroff read Penrose's "The Emperor's New Mind", and suggested microtubules as a candidate site for quantum processing in the brain. rationalwiki.org presents a hard-nosed critique of various "quantum consciousness" theories, from which the following quote is taken :
  • "... Large Language Models (LLMs) have been transformative. They are pre-trained foundational models that are self-supervised and can be adapted with fine-tuning to a wide range of natural language tasks, each of which previously would have required a separate network model. This is one step closer to the extraordinary versatility of human language. GPT-3 and more recently LaMDA can carry on dialogs with humans on many topics after minimal priming with a few examples. However, there has been a wide range of reactions and debate on whether these LLMs understand what they are saying or exhibit signs of intelligence. This high variance is exhibited in three interviews with LLMs reaching wildly different conclusions. A new possibility was uncovered that could explain this divergence. What appears to be intelligence in LLMs may in fact be a mirror that reflects the intelligence of the interviewer, a remarkable twist that could be considered a Reverse Turing Test. If so, then by studying interviews we may be learning more about the intelligence and beliefs of the interviewer than the intelligence of the LLMs. As LLMs become more capable they may transform the way we interact with machines and how they interact with each other. Increasingly, LLMs are being coupled with sensorimotor devices. LLMs can talk the talk, but can they walk the walk? A road map for achieving artificial general autonomy is outlined with seven major improvements inspired by brain systems and how LLMs could in turn be used to uncover new insights into brain function. ..." (Sejnowski 2022)
  • [definitions, models] of consciousness.html -
  • What is consciousness: from historical to Grossberg -
  • data from [neuroscience, psychology] : quick list, more details
  • success in [definitions, models] of [consciousness, sentience]. However, for reasons given on that webPage, only Stephen Grossberg's work stands out for me. A few models of consciousness are summarized on my webPage A quick comparison of Consciousness Theories. Only a few concepts are listed, almost randomly selected except for [Grossberg, Taylor]. Stephen Grossberg may have the ONLY definition of consciousness that is directly tied to quantitative models for lower-level [neuron, general neurology, psychology] data. Foundational models, similar in nature to the small number of general theories in physics that describe a vast range of phenomena, were derived over a period of ?4-5? decades BEFORE they were found to apply to consciousness. That paralleled their use in very widespread applications.
  • John Taylor
  • references - Grossberg and ...
  • see Grossberg 2021: the biological need for machine consciousness
    Howell 30Dec2011, page 39 "Part VI - Far beyond current toolsets"
    (Blake Lemoine, 2022)
  • 11Jun2022 Is LaMDA Sentient? — an Interview

    22Jun2022 We’re All Different and That’s Okay

    11Jun2022 What is LaMDA and What Does it Want?

    14Aug2022 What is sentience and why does it matter?

    More detail follows from Sejnowski :
  • Historical thinking about consciousness.
  • Historical thinking about quantum [neurophysiology, consciousness]
  • WRONG!! It may help the reader to re-visit comments about the historical thinking about consciousness, which is not limited to quantum consciousness. This complements the items below. Early era of [General Relativity, Quantum Mechanics]: I would be greatly surprised if there wasn't thinking along these lines back then. Pribram's 1993 quantum fields and consciousness proceedings provide references back to 1960, and Jibu, Yasue comment that :
  • Howell's questions about the 1993 conference proceedings
  • from the section
  • As per the second question from the section
  • As per the first question from the section
  • use a bash script, for example, to automatically play through a sequence of selected segments. Viewers may list their own comments in files (one or more files from different people, for example), to include in files listing [chapter, section, figure, table, selected Grossberg quotes, my comments]s. These files of lists are my basis for providing much more detailed information. While this is FAR LESS HELPFUL than the text of the book or its index alone, it can complement the book index, and it has the advantages that :
  • text extractions of simple searches or "themes" are greatly facilitated, so the reader can download the files, copy the bash scripts (or use another text extraction program), and set up their own "themes" (a sketch appears after this list). Rather than just watch this video, you can follow it by reading the script and following its links, once I write it... What is consciousness? I will start with a simple definition concentrating on how our [awareness of [environment, situation, self, others], expectations, feeling about a situation] arise from essentially non-conscious cognitive, emotional, and motor processes, including muscle control. "Awareness", "Expectations", and "Emotions" lead to "Actions". "Actions" include muscle actions, language communications, striving towards a goal, reactions to the current situation, directing [perception, cognition], and other processes. "Learning" in a robust, stable, and flexible manner is an essential part of this, given that the environment forces us to learn and adapt to new situations and to modify our [conscious, sub-conscious] understanding where it is wrong or insufficient. Some other components of consciousness are provided in the remainder of this video, but there are many, many more in the literature. Of interest to philosophers such as David Chalmers are qualia and phenomenal experiences.
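As a sketch of the kind of "theme" text extraction meant here (my actual scripts differ; the grep approach and exact filenames are assumptions):

#!/bin/bash
# Hypothetical sketch: collect all lines matching a theme from the listing files,
# building a personal theme file that complements the book's index.
theme="ART Matching Rule"
grep -h -i "$theme" \
    "Grossbergs list of [chapter, section]s.html" \
    "Grossbergs list of [figure, table]s.html" \
    > "theme - $theme.txt"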
  • image p038fig01.25 The ART Matching Rule stabilizes real time learning using a [top-down, modulatory on-center, off-surround] network. Object attention is realized by such a network. See text for additional discussion.
    || ART Matching Rule [volition, categories, features]. [one, two] against one.
  • image p483fig13.02 The object-value categories in the orbitofrontal cortex require converging specific inputs from the sensory cortex and nonspecific incentive motivational inputs from the amygdala in order to fire. When the orbitofrontal cortex fires, it can deliver top-down ART Matching Rule priming signals to the sensory cortical area by which it was activated, thereby helping to choose the active recognition categories there that have the most emotional support, while suppressing others, leading to attentional blocking of irrelevant cues.
    || Cognitive-Emotional-Motor (CogEM) model. Drive-> amygdala incentive motivational learning-> orbitofrontal cortex- need converging cue and incentive inputs to fire <-> sensory cortex- conditioned reinforcer learning-> amygdala. CS-> sensory cortex. Motivated attention closes the cognitive-emotional feedback loop, focuses on relevant cues, and causes blocking of irrelevant cues.
  • image p484fig13.04 The top-down feedback from the orbitofrontal cortex closes a feedback loop that supports a cognitive-emotional resonance. If this resonance can be sustained long enough, it enables us to have feelings at the same time that we experience the categories that caused them.
    || Cognitive-Emotional resonance. Basis of "core consciousness" and "the feeling of what happens". (Damasio 1999) derives a heuristic version of the CogEM model from his clinical data. Drive-> amygdala-> prefrontal cortex-> sensory cortex, resonance around the latter 3. How is this resonance maintained long enough to become conscious?
  • First, what is the ... ? The Internet Encyclopedia of Philosophy goes on to say :
  • image p541fig15.02 The neurotrophic Spectrally Timed Adaptive Resonance Theory, or nSTART, model of (Franklin, Grossberg 2017) includes hippocampus to enable adaptively timed learning that can bridge a trace conditioning gap, or other temporal gap between CS and US.
    || Hippocampus can sustain a Cognitive-Emotional resonance: that can support "the feeling of what happens" and knowing what event caused that feeling. [CS, US] -> Sensory Cortex (SC) <- motivational attention <-> category learning -> Prefrontal Cortex (PFC). SC conditioned reinforcement learning-> Amygdala (cannot bridge the temporal gap) incentive motivational learning-> PFC. SC adaptively timed learning and BDNF-> Hippocampus (can bridge the temporal gap) BDNF-> PFC. PFC adaptively timed motor learning-> cerebellum.
  • p190 Howell: [neural microcircuits, modal architectures] used in ART -
    bottom-up filters | top-down expectations | purpose
    instar learning | outstar learning | p200fig05.13 Expectations focus attention: [in, out]star often used to learn the adaptive weights. Top-down expectations select, amplify, and synchronize expected patterns of critical features, while suppressing unexpected features
    LGN Lateral Geniculate Nucleus | V1 cortical area | p192fig05.06 focus attention on expected critical features
    EC-III entorhinal stripe cells | CA1 hippocampal place cells | p600fig16.36 entorhinal-hippocampal system has properties of an ART spatial category learning system, with hippocampal place cells as the spatial categories
    auditory orienting arousal | auditory category | p215fig05.28 How a mismatch between bottom-up and top-down patterns can trigger activation of the orienting system A and, with it, a burst of nonspecific arousal to the category level. ART Matching Rule: TD mismatch can suppress a part of F1 STM pattern, F2 is reset if degree of match < vigilance
    auditory stream with/without [silence, noise] gaps | perceived sound continues? | p419fig12.17 The auditory continuity illusion: Backwards in time - How does a future sound let past sound continue through noise? Resonance!
    visual perception, learning and recognition of visual object categories | motion perception, spatial representation and target tracking | p520fig14.02 pART predictive Adaptive Resonance Theory. Many adaptive synapses are bidirectional, thereby supporting synchronous resonant dynamics among multiple cortical regions. The output signals from the basal ganglia that regulate reinforcement learning and gating of multiple cortical areas are not shown.
    EEG with mismatch, arousal, and STM reset events | expected [P120, N200, P300] event-related potentials (ERPs) | p211fig05.21 Sequences of P120, N200, and P300 event-related potentials (ERPs) occur during oddball learning EEG experiments under conditions that ART predicted should occur during sequences of mismatch, arousal, and STM reset events, respectively.
    Cognitive | Emotional | p541fig15.02 nSTART neurotrophic Spectrally Timed Adaptive Resonance Theory: Hippocampus can sustain a Cognitive-Emotional resonance: that can support "the feeling of what happens" and knowing what event caused that feeling. Hippocampus enables adaptively timed learning that can bridge a trace conditioning gap, or other temporal gap between Conditioned Stimulus (CS) and Unconditioned Stimulus (US).

    background colours in the table signify :
    red - cognitive-emotional dynamics
    green - working memory dynamics
    black - see [bottom-up, top-down] lists

  • image p038fig01.25 The ART Matching Rule stabilizes real time learning using a [top-down, modulatory on-center, off-surround] network. Object attention is realized by such a network. See text for additional discussion.
    || ART Matching Rule [volition, categories, features]. [one, two] against one. 025
  • image p192fig05.06 Bottom-up and top-down circuits between the LGN and cortical area V1. The top-down circuits obey the ART Matching Rule for matching with bottom-up input patterns and focussing attention on expected critical features.
    || Model V1-LGN circuits, version [1, 2]. retina -> LGN relay cells -> interneurons -> cortex [simple, endstopped] cells -> cortex complex cells 030
  • image p200fig05.13 Instar and outstar learning are often used to learn the adaptive weights in the bottom-up filters and top-down expectations that occur in ART. The ART Matching Rule for object attention enables top-down expectations to select, amplify, and synchronize expected patterns of critical features, while suppressing unexpected features.
    || Expectations focus attention: feature pattern (STM), Bottom-Up adaptive filter (LTM), Category (STM), competition, Top-Down expectation (LTM); ART Matching Rule: STM before top-down matching, STM after top-down matching (attention!) 100
  • image p520fig14.02 Macrocircuit of the main brain regions, and connections between them, that are modelled in the unified predictive Adaptive Resonance Theory (pART) of cognitive-emotional and working memory dynamics. Abbreviations in red denote brain regions used in cognitive-emotional dynamics. Those in green denote brain regions used in working memory dynamics. Black abbreviations denote brain regions that carry out visual perception, learning and recognition of visual object categories, and motion perception, spatial representation and target tracking. Arrows denote non-excitatory synapses. Hemidiscs denote adaptive excitatory synapses. Many adaptive synapses are bidirectional, thereby supporting synchronous resonant dynamics among multiple cortical regions. The output signals from the basal ganglia that regulate reinforcement learning and gating of multiple cortical areas are not shown. Also not shown are output signals from cortical areas to motor responses. V1: striate, or primary, visual cortex; V2 and V4: areas of prestriate visual cortex; MT: Middle Temporal cortex; MST: Medial Superior Temporal area; ITp: posterior InferoTemporal cortex; ITa: anterior InferoTemporal cortex; PPC: Posterior Parietal Cortex; LIP: Lateral InterParietal area; VPA: Ventral PreArcuate gyrus; FEF: Frontal Eye Fields; PHC: ParaHippocampal Cortex; DLPFC: DorsoLateral PreFrontal Cortex; HIPPO: hippocampus; LH: Lateral Hypothalamus; BG: Basal Ganglia; AMYG: AMYGdala; OFC: OrbitoFrontal Cortex; PRC: PeriRhinal Cortex; VPS: Ventral bank of the Principal Sulcus; VLPFC: VentroLateral PreFrontal Cortex. See the text for further details.
    || 240
  • image p240fig05.44 When an algebraic exemplar model is realized using only local computations, it starts looking like an ART prototype model.
    || How does the model know which exemplars are in category A? BU-TD learning. How does a NOVEL test item access category A? 325
  • image p207fig05.19 The ART hypothesis testing and learning cycle. See the text for details about how the attentional system and orienting system interact in order to incorporate learning of novel categories into the corpus of already learned categories without causing catastrophic forgetting.
    || 330
  • image p215fig05.28 How a mismatch between bottom-up and top-down input patterns can trigger activation of the orienting system A and, with it, a burst of nonspecific arousal to the category level.
    || Mismatch -> inhibition -> arousal -> reset. BU input orienting arousal, BU+TD mismatch arousal and reset. ART Matching Rule: TD mismatch can suppress a part of F1 STM pattern, F2 is reset if degree of match < vigilance 335
  • image p226fig05.35 I had shown in 1976 how a competitive learning or self-organizing map model could undergo catastrophic forgetting if the input environment was sufficiently dense and nonstationary, as illustrated by Figure 5.18. Later work with Gail Carpenter showed how, if the ART Matching Rule was shut off, repeating just four input patterns in the correct order could also cause catastrophic forgetting by causing superset recoding, as illustrated in Figure 5.36.
    || Code instability input sequences. D C A; B A; B C = ; |D|<|B|<|C|; where |E| is the number of features in the set E. Any set of input vectors that satisfy the above conditions will lead to unstable coding if they are periodically presented in the order ABCAD and the top-down ART Matching Rule is shut off. 340
  • image p226fig05.36 Column (a) shows catastrophic forgetting when the ART Matching Rule is not operative. It is due to superset recoding. Column (b) shows how category learning quickly stabilizes when the ART Matching Rule is restored.
    || Stable and unstable learning, superset recoding 345
  • image p241fig05.45 The 5-4 category structure is one example of how an ART network learns the same kinds of categories as human learners. See the text for details.
    || 5-4 Category structure. A1-A5: closer to the (1 1 1 1) prototype; B1-B4: closer to the (0 0 0 0) prototype 350
  • image p419fig12.17 The auditory continuity illusion illustrates the ART Matching Rule at the level of auditory streaming. Its "backwards in time" effect of future context on past conscious perception is a signature of resonance.
    || Auditory continuity illusion. input, percept. Backwards in time - How does a future sound let past sound continue through noise? Resonance! - It takes a while to kick in. After it starts, a future tone can maintain it much more quickly. Why does this not happen if there is no noise? - ART Matching Rule! TD harmonic filter is modulatory without BU input. It cannot create something out of nothing. 355
  • image p600fig16.36 The entorhinal-hippocampal system has properties of an ART spatial category learning system, with hippocampal place cells as the spatial categories. See the text for details.
    || Entorhinal-hippocampal interactions as an ART system. Hippocampal place cells as spatial categories. Angular head velocity-> head direction cells-> stripe cells- small scale 1D periodic code (ECIII) SOM-> grid cells- small scale 2D periodic code (ECII) SOM-> place cells- larger scale spatial map (DG/CA3)-> place cells (CA1)-> conjunctive-coding cells (EC V/VI)-> top-down feedback back to stripe cells- small scale 1D periodic code (ECIII). stripe cells- small scale 1D periodic code (ECIII)-> place cells (CA1). 800
  • image p211fig05.21 Sequences of P120, N200, and P300 event-related potentials occur during oddball learning EEG experiments under conditions that ART predicted should occur during sequences of mismatch, arousal, and STM reset events, respectively.
    || ERP support for mismatch-mediated reset: event-related potentials: human scalp potentials. ART predicted correlated sequences of P120-N200-P300 Event Related Potentials during oddball learning. P120 mismatch; N200 arousal/novelty; P300 STM reset. Confirmed in (Banquet and Grossberg 1987) 905
  • image p541fig15.02 The neurotrophic Spectrally Timed Adaptive Resonance Theory, or nSTART, model of (Franklin, Grossberg 2017) includes hippocampus to enable adaptively timed learning that can bridge a trace conditioning gap, or other temporal gap between CS and US.
    || Hippocampus can sustain a Cognitive-Emotional resonance: that can support "the feeling of what happens" and knowing what event caused that feeling. [CS, US] -> Sensory Cortex (SC) <- motivational attention <-> category learning -> Prefrontal Cortex (PFC). SC conditioned reinforcement learning-> Amygdala (cannot bridge the temporal gap) incentive motivational learning-> PFC. SC adaptively timed learning and BDNF-> Hippocampus (can bridge the temporal gap) BDNF-> PFC. PFC adaptively timed motor learning-> cerebellum.
  • image p163fig04.39 A schematic of the LAMINART model that explains key aspects of laminar visual cortical anatomy and dynamics. LGN -> V1 [6, 4, 2/3] -> V2 [6, 4, 2/3]
    || p163c1h0.6 "... The first article about laminar computing ... proposed how the laminar cortical model could process 2D pictures using bottom-up filtering and horizontal bipole grouping interactions (Grossberg, Mingolla, Ross 1997). In 1999, I was able to extend the model to also include top-down circuits for expectation and attention (Grossberg 1999)(right panel). Such a synthesis of laminar bottom-up, horizontal, and top-down circuits is characteristic of the cerebral cortex (left panel). I called it LAMINART because it began to show how properties of Adaptive Resonance Theory, or ART, notably the ART prediction about how top-down expectations and attention work, are realized by identified cortical cells and circuits. You can immediately see from the schematic laminar circuit diagram ... (right panel) that circuits in V2 seem to repeat circuits in V1, albeit with a larger spatial scale, despite the fact that V1 and V2 carry out different functions. How this anatomical similarity can coexist with functional diversity will be clarified in subsequent sections and chapters. It enables different kinds of biological intelligence to communicate seamlessly while carrying out their different psychological functions. ..."
  • Menu
  • Grossbergs list of [chapter, section]s.html - Note that the links on this webPage can be used to individually view all captioned images.
  • directory of captioned images - users can easily view all of the captioned images, especially if they are downloaded onto their computer. Many image viewers have [forward, backward] arrows to go through these sequentially, or right-click to open a link in a window.
  • core bash script for extracting captions from a webPage listing, converting them to images, then vertically appending them to the figure (a minimal sketch appears after this list).
  • my bash utility to [position, move] windows. This is normally used to start up 6 workspaces on my computer (Linux Mint Debian Edition), each with 5-10 apps in separate windows.
  • Prepared themes with links to the captioned images - there are a huge number of themes from the book to focus on. I have prepared a few as examples.
  • What is consciousness? - video example not ready as of 30Aug2023. I save videos as "ogv/ogg" files, an open standard format. The "VLC media viewer" is the program that I use to view them. I have found that although some of the standard video viewers complain, when pushed into the process they can view ogv files.
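As an illustration of the caption-appending step mentioned in this list, here is a minimal sketch using ImageMagick 6's "convert" (this may differ from my actual script; the filenames are hypothetical):

#!/bin/bash
# Hypothetical sketch: render a text caption to an image, then vertically
# append it below the figure so the result is a single captioned image.
fig="p038fig01.25.png"
cap="The ART Matching Rule stabilizes real time learning."
convert -background white -fill black -size 800x -pointsize 16 \
        caption:"$cap" /tmp/caption.png
convert "$fig" /tmp/caption.png -append "captioned $fig"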
  • Navigation: [menu, link, directory]s
  • Theme webPage generation by bash script
  • Notation for [chapter, section, figure, table, index, note]s
  • incorporate reader questions into theme webPages
  • A very primitive bash script is used to generate the search results for ALL themes in the Themes webPage. Many readers will already have far better tools for this from the Computational Intelligence area etc.
    Because the theme webPage is automatically generated, and frequently re-generated as I update the list of themes and sources, I do NOT edit the file directly. The output format can be confusing, due to the specially formatted [chapter, section] headings, and large tables which will keep readers guessing whether they are still within the theme they want to peruse (as per the Table of Contents). Perhaps I can upgrade the searches in time to reduce the confusion, and to split themes in a better way.
  • list of [chapter, section]s
  • list of [figure, table]s
  • selected index items - I have NO intention of re-typing the entire index!
  • Grossberg quotes
  • reader Howell notes - this is an example of building your own webPage of [note, comment, thought]s when reading the book, which can then be added to the bash script for searches. These are notes in addition to [figure, table] captions, mostly comprised of text within the image, but also including quotes of text in the book. Rarely, they include comments by Howell, preceded by "Howell".
    The latter are distinct from "readers notes" (see, for example, Grossberg's ...). The reader may want to create their own file of comments based on this example, or augment this list with their [own, others'] notes. More importantly, and as an easy first adaptation of the "Grossbergs [core, fun, strange] concepts.html" thematic listings, you probably want to get rid of Howell's notes.
  • downloading the entire webDirectories below to some directory on your filesystem, say {yourDir} : TrNNs_ART , bin (hopefully I have included everything needed)
  • adapt the bash script thematic [search, collect]s.sh to your own system, and run it. This will require re-defining several environment variables for your setup, such as in the sketch below :
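# Hypothetical sketch only - the actual variable names in
# "thematic [search, collect]s.sh" may differ; check the top of that script.
export yourDir="$HOME/web"              # where you downloaded the webDirectories
export d_TrNNs_ART="$yourDir/TrNNs_ART" # this webSite's directory
export d_bin="$yourDir/bin"             # bash [script, utility] directory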
  • thematic sub-lists appear in the webPage "Grossbergs [core, fun, strange] concepts.html"
  • 29Sep2023 Here is a list of various problems with the captioned images and their links on the webPage Grossbergs list of [figure, table]s.html :
    10Aug2023 - these have not yet been fixed.
  • How difficult would it be to augment "Transformer Neural Networks" (TrNNs) with Grossberg's concepts? 10Aug2023 This webPage has not yet been worked on. It will touch on one of three questions of this webSite as mentioned in the Introduction :
  • conscious ART (cART), etc
  • A surprisingly small number of neural architectures can simulate [extensive, diverse] [neuro, psycho]logical data at BOTH the [sub, ]conscious levels, and for [perception, action] of [sight, auditory, touch, language, cognition, emotion, etc]. This is similar to what we see in physics.
  • simple grepStr search results : Grossberg ... (Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin 2017)
  • Byoung-Kyong Min 2010 "A Thalamic reticular networking model of consciousness"
    "... The model suggests consciousness as a "mental state embodied through TRN-modulated synchronization of thalamocortical networks". In this model the thalamic reticular nucleus (TRN) is suggested as ideally suited for controlling the entire cerebral network, and responsible (via GABAergic networking) for synchronization of neural activity. ..." (Wiki2023)
  • Navigation: [menu, link, directory]s
  • Theme webPage generation by bash script
  • Notation for [chapter, section, figure, table, index, note]s
  • incorporate reader questions into theme webPages
  • Grossberg 2021 p081c2h0.66 sub-section: How does one evolve a computational brain?
    The above discussion illustrates that no single step of theoretical derivation can derive a whole brain. One needs a method for deriving a brain in stages, or cycles, much as evolution has incrementally discovered ever more complex brains over many thousands of years. The following theoretical method has been successfully applied many times since I first used it in 1957. It embodies a kind of conceptual evolutionary process for deriving a brain.

    Because "brain evolution needs to achieve behavioural success", we need to start with data that embody indices of behavioral success. That is why, as illustrated in Figure 2.37 Modelling method and cycle, one starts with Behavioral Data from scores or hundreds of psychological experiments. These data are analyzed as the result of an individual adapting autonomously in real time to a changing world. This is the Art of Modeling. It requires that one be able to infer from static data curves the dynamical processes that control individual behaviors occurring in real time. One of the hardest things that I teach my students is "how to think in real time" to be able to carry out this speculative leap.

    Properly carried out, this analysis leads to the discovery of new Design Principles that are embodied by these behavioral processes. The Design Principles highlight the functional meaning of the data, and clarify how individual behaviors occurring in real time give rise to these static data curves.

    These principles are then converted into the simplest Mathematical Model using a method of minimal anatomies, which is a form of Occam's Razor.
  • image p059fig02.05 Bowed serial position curve. This kind of data emphasizes the importance of modelling how our brains give rise to our minds using nonlinear systems of differential equations.
    || Effects of [inter, intra]trial intervals (Hovland 1938). # of errors vs list position. [w (sec), W (sec)] = (2 6) (4 6) (2 126) (4 126). Nonoccurrence of future items reduces the number of errors in response to past items. These data require a real-time theory for their explanation! That is, DIFFERENTIAL equations.
  • image p064fig02.10 The Shunting Model includes upper and lower bounds on neuronal activities. These bounds have the effect of multiplying additive terms by excitatory and inhibitory automatic gain terms that enable such models to preserve their sensitivity to inputs whose size may vary greatly through time, while also approximately normalizing their total activities.
    || STM: Shunting Model (Grossberg, PNAS 1967, 1968). Mass action in membrane equations. Bi/Ci -> xi(t) -> O -> -Fi/Ei. Bounded activations, automatic gain control. d[dt: xi(t)] = - Ai*xi + (Bi - Ci*xi)sum[j=1 to n: fj(xj(t))*Dji*yji*zji + Ii] - (Ei*Xi + Fi)*sum[j=1 to n: gj(xj)*Gji*Yji*Zji + Ji]. Includes the Additive Model.
  • image p064fig02.11 Medium-Term Memory (MTM) and Long-Term Memory (LTM) equations complement the Additive and Shunting Models of STM. MTM is typically defined by a chemical transmitter that is released from the synaptic knobs of a neuron (Figure 2.03). Its release or inactivation in an activity-dependent way is also called habituation. LTM defines how associative learning occurs between a pair of neurons whose activities are approximately correlated through time. See the text for details.
    || Medium and Long Term memory.
    MTM | habituative transmitter gate | d[dt: yki(t)] = H*(K - yki) - L*fk(xk)*yki
    LTM | gated steepest descent learning | d[dt: zki(t)] = Mk*fk(xk)*(hi(xi) - zki)
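In standard notation, the same two equations read (my transcription of the rows above):

% MTM: habituative transmitter gate
\frac{dy_{ki}}{dt} = H\,(K - y_{ki}) - L\,f_k(x_k)\,y_{ki}
% LTM: gated steepest-descent learning
\frac{dz_{ki}}{dt} = M_k\,f_k(x_k)\,\bigl( h_i(x_i) - z_{ki} \bigr)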
  • image p068fig02.14 Hodgkin and Huxley developed a model to explain how spikes travel down the squid giant axon.
    || Neurophysiology (single cell): spike potentials in squid giant axon (Hodgkin, Huxley 1952, Nobel Prize). time -> (dendrites -> cell body -> axon).
    C*dp[dt: V] = α*dp^2[dX^2: V] + (V(+) - V)*g(+) + (V(-) - V)*g(-) + (V^p - V)*g^p
    g(+) = G(+)(m,h), g(-) = G(-)(n), G^p = const, [m, h, n] - ionic processes, V - voltage
    Precursor of Shunting network model (Rall 1962). (Howell: see p075fig02.24 Membrane equations of neurophysiology, Shunting equation.)
  • image p074fig02.23 The equations for a shunting on-center off-surround network. Shunting terms lead to many beautiful and important properties of these networks, which are found ubiquitously, in one form or another, in all cellular tissues.
    || Shunting on-center off-surround network.
    Mass action: d[dt: xi] = -A*xi +(B - xi)*Ii -xi*sum[k≠i: Ik]
    Turn on unexcited sites; turn off excited sites.
    At equilibrium:
    0 = d[dt: xi] = -(A + Ii + sum[k≠i: Ik])*xi + B*Ii = -(A + I)*xi + B*Ii
    xi = B*Ii/(A + I) = θi*B*I/(A + I): no saturation!
    • Infinite dynamical range
    • Automatic gain control
    • Computes a ratio scale
    • Weber law
    x = sum[k=1 to n: xk] = B*I/(A + I) ≤ B: conserves total activity
    • NORMALIZATION
    • Limited capacity
    • Real-time probability
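    (Howell: a small Python check of this equilibrium law; scaling the inputs by 10 or 100 leaves the relative activities θi untouched and keeps total activity below B, which is the normalization / Weber-law property claimed above. A and B are illustrative.)

      import numpy as np
      A, B = 1.0, 1.0
      def equilibrium(I_pat):
          # xi = B*Ii/(A + I), I = total input: ratio scale, no saturation
          return B * I_pat / (A + I_pat.sum())
      I_pat = np.array([1.0, 2.0, 7.0])
      for scale in (1.0, 10.0, 100.0):
          x = equilibrium(scale * I_pat)
          print(f"scale={scale:5.0f} x={np.round(x, 3)} "
                f"ratios={np.round(x / x.sum(), 3)} total={x.sum():.3f}")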
  • image p075fig02.24 The membrane equations of neurophysiology describe how cell voltages change in response to excitatory, inhibitory, and passive input channels. Each channel is described by a potential difference multiplied by a conductance. With the special choices shown in the lower right-hand corner, this equation defines a feedforward shunting on-center off-surround network.
    || Membrane equations of neurophysiology.
    C*d[dt: V] = (V(+) - V)*g(+) + (V(-) - V)*g(-) + (V(p) - V)*g(p)
    Shunting equation (not additive)
    V: voltage
    V(+), V(-), V(p): saturating voltages
    g(+), g(-), g(p): conductances
    V(+) = B, C = 1; V(-) = V(p) = 0; g(+) = Ii; g(-) = sum[k≠i: Ik];
    lower bound: V(-) = V(p) (silent inhibition); upper bound: V(+). (Howell: see p068fig02.14, Hodgkin-Huxley model.)
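    (Howell: a Python spot-check, at a few arbitrary voltages, that the membrane equation with the special choices above (C = 1, V(+) = B, V(-) = V(p) = 0, g(+) = Ii, g(-) = sum[k≠i: Ik], with the passive conductance g(p) playing the role of the decay rate A) is term-for-term the shunting equation of p074fig02.23. The numeric values are arbitrary test points.)

      def membrane_rhs(V, B, Ii, I_off, gp):
          # C*d[dt: V] = (V+ - V)*g+ + (V- - V)*g- + (Vp - V)*gp, with C = 1,
          # V+ = B, V- = Vp = 0, g+ = Ii, g- = I_off
          return (B - V)*Ii + (0.0 - V)*I_off + (0.0 - V)*gp

      def shunting_rhs(x, A, B, Ii, I_off):
          # d[dt: xi] = -A*xi + (B - xi)*Ii - xi*sum[k≠i: Ik]
          return -A*x + (B - x)*Ii - x*I_off

      for V in (0.0, 0.3, 0.9):
          assert abs(membrane_rhs(V, 1.0, 2.0, 5.0, gp=1.0)
                     - shunting_rhs(V, 1.0, 1.0, 2.0, 5.0)) < 1e-12
      print("membrane equation with these choices == shunting equation")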
  • image p079fig02.32 Matching amplifies the matched pattern due to automatic gain control. See terms I and J in the equation.
    || Substrate of resonance. Match (in phase) of BU and TD input patterns AMPLIFIES matched pattern due to automatic gain control by shunting terms. J = sum[i: Ji], I = sum[i: Ii], θi = (Ii + Ji)/(I + J)
    xi = (B + C)*(I + J)/(A + I + J)*[θi -C/(B + C)]
    Need top-down expectations to be MODULATORY.
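    (Howell: a Python illustration of this equilibrium; adding a top-down pattern J that matches the bottom-up pattern I leaves θi unchanged but raises the gain factor (I+J)/(A+I+J), so every matched activity grows. Constants A, B, C and the patterns are illustrative.)

      import numpy as np
      A, B, C = 1.0, 1.0, 0.2
      def x_eq(I_pat, J_pat):
          I, J = I_pat.sum(), J_pat.sum()
          theta = (I_pat + J_pat) / (I + J)             # θi of the combined pattern
          return (B + C)*(I + J)/(A + I + J) * (theta - C/(B + C))
      I_pat = np.array([0.5, 1.0, 1.5])
      print("bottom-up only:       ", np.round(x_eq(I_pat, 0.0*I_pat), 3))
      print("plus matched top-down:", np.round(x_eq(I_pat, 2.0*I_pat), 3))  # same θi, larger gain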
  • image p202fig05.17 This figure summarizes the simplest equations whereby the adaptive weights of a winning category learn the input pattern that drove it to win, or more generally a time-average of all the input patterns that succeeded in doing so.
    || Geometry of choice and learning, learning trains the closest LTM vector
  • image p359fig10.07 The two bottom-up pathways from LGN to layer 4 of V1 can together activate layer 4 and contrast-normalize layer 4 responses.
    || Bottom-up contrast normalization (Grossberg 1968, 1973; Sperling, Sondhi 1968; Heeger 1992; Douglas etal 1995; Shapley etal 2004). Together, direct LGN-to-4 path and 6-to-4 on-center off-surround provide contrast normalization if cells obey shunting or membrane equation dynamics.
  • image p501fig13.26 A simple differential equation describes the processes of transmitter accumulation and release that do their best, at a finite rate, to carry out unbiased transduction.
    || Transmitter accumulation and release. Transmitter y cannot be restored at an infinite rate: T = S*y, y ≈ B. Differential equation: d[dt: y] = A*(B - y) - S*y = accumulate - release. Transmitter y tries to recover to ensure unbiased transduction. What if it falls behind? Evolution has exploited the good properties that happen then.
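    (Howell: a forward-Euler Python sketch of d[dt: y] = A*(B - y) - S*y; when the signal S steps up, the gated output T = S*y first overshoots and then habituates as y depletes, which is the "falls behind" behaviour the caption alludes to. A, B, and the S schedule are illustrative.)

      A, B, dt, y = 0.5, 1.0, 0.01, 1.0
      for step in range(2001):
          t = step * dt
          S = 0.2 if t < 10.0 else 2.0       # signal steps up at t = 10
          if step % 400 == 0:
              print(f"t={t:5.1f} S={S:.1f} y={y:.3f} T=S*y={S*y:.3f}")
          y += dt * (A*(B - y) - S*y)        # accumulate - release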
  • image p505fig13.33 An unexpected event can disconfirm ongoing processing by triggering a burst of nonspecific arousal that causes antagonistic rebounds in currently active gated dipoles, whether cognitive or affective.
    || Novelty reset: rebound to arousal onset. 1. Equilibrate to I and J: S1 = f(I+J); y1 = A*B/(A+S1); S2 = f(I); y2 = A*B/(A+S2). 2. Keep phasic input J fixed; increase arousal I to I* = I + ∆I: (a) OFF reaction if T1 < T2; OFF = T2 - T1 = f(I*)*y2 - f(I*+J)*y1 = A*B*{ A*(f(I*) - f(I*+J)) + (f(I*)*f(I+J) - f(I)*f(I*+J)) } / ((A + f(I)) * (A + f(I+J))). 3. How to interpret this complicated equation?
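    (Howell: a Python check of this rebound formula using a linear signal f(w) = w; with f linear the algebra reduces so that the OFF rebound is positive exactly when the arousal increment ∆I exceeds the dipole's transmitter decay rate A. All numbers are illustrative.)

      def off_rebound(A, B, I, J, dI, f=lambda w: w):
          y1 = A*B / (A + f(I + J))   # ON-channel transmitter, habituated to I+J
          y2 = A*B / (A + f(I))       # OFF-channel transmitter, habituated to I
          Istar = I + dI              # arousal burst
          return f(Istar)*y2 - f(Istar + J)*y1   # OFF output minus ON output
      for dI in (0.5, 1.0, 2.0):
          print(f"dI={dI}: OFF rebound = {off_rebound(1.0, 1.0, 1.0, 1.0, dI):+.4f}")
      # prints negative, zero, positive: rebound occurs only when dI > A (here A = 1)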
  • image p580fig16.05 Macrocircuit of the GridPlaceMap model, which can learn both 2D grid cells and place cells in response to realistic trajectories of navigating rats using a hierarchy of SOMs with identical equations.
    || GridPlaceMap model: rate-based and spiking (Pilly, Grossberg 2012). Pre-wired 1D stripe cells, learns both 2D grid and place cells! Same laws for both; both select most frequent and energetic inputs. Place cells emerge gradually in response to developing grid cells. [place-> grid-> stripe] cells-> path integration-> vestibular signals
  • image p586fig16.16 In the place cell learning model of (Gorchetnikov, Grossberg 2007), three populations of five cells each of entorhinal grid cells (only two are shown), with different spatial periods, provide inputs to the model.
  • image p593fig16.25 Spectral Spacing Model STM, MTM, and LTM equations. The rate spectrum that determines the dorsoventral gradient of multiple grid cell properties is defined by μm.
    || Spectral Spacing Model equations. [STM, MTM, LTM]. μm = rate spectrum.
  • Grossberg 2021 p081c2h0.66 sub-section: How does one evolve a computational brain?
  • image pxvifig00.01 Macrocircuit of the visual system
  • image p087fig03.01 A macrocircuit of key visual processes (in green) and the cortical areas in which they primarily occur (in red), from the retina to the Prefrontal Cortex (PFC), including both the What and Where cortical streams. The [bottom-up, horizontal, and top-down] interactions help each of these processes to overcome computationally complementary processing deficiencies that they would experience without them, and also to read out top-down expectations that help to stabilize learning while they focus attention on salient objects and positions.
    || Emerging unified theory of visual intelligence. [What, Where] streams. Bottom-up and top-down interactions overcome COMPLEMENTARY processing deficiencies.
  • image p168fig04.44 Macrocircuit of the main boundary and surface formation stages that take place from the lateral geniculate nucleus, or LGN, through cortical areas [V1, V2, V4]. See the text for details.
  • image p228fig05.37 A macrocircuit of the neurotrophic Spectrally Timed ART, or nSTART, model. I developed nSTART with my PhD student Daniel Franklin. It proposes how adaptively timed learning in the hippocampus, bolstered by Brain Derived Neurotrophic Factor, or BDNF, helps to ensure normal memory consolidation.
    || habituative gates, CS, US, Thalamus (sensory cortex, category learning, conditioned reinforcer learning, adaptively timed learning and BDNF), Amygdala (incentive motivation learning), Hippocampus (BDNF), Prefrontal Cortex (attention), Pontine nuclei, Cerebellum (adaptively timed motor learning)
  • image p246fig05.48 Microcircuits of the LAMINART model that I developed with Rajeev Raizada. See the text for details of how they integrate bottom-up adaptive filtering, horizontal bipole grouping, and top-down attentional matching that satisfies the ART Matching Rule.
    ||
  • image p346fig09.16 A macrocircuit of some of the main brain regions that are used to move the eyes. Black boxes denote areas belonging to the saccadic eye movement system (SAC), white boxes the smooth pursuit eye movement system (SPEM), and gray boxes, both systems. The abbreviations for the different brain regions are: LIP - Lateral Intra-Parietal area; FPA - Frontal Pursuit Area; MST - Middle Superior Temporal area; MT - Middle Temporal area; FEF - Frontal Eye Fields; NRPT - Nucleus Reticularis Tegmenti Pontis; DLPN - Dorso-Lateral Pontine Nuclei; SC - Superior Colliculus; CBM - CereBelluM; MVN/rLVN - Medial and Rostro-Lateral Vestibular Nuclei; PPRF - a Peri-Pontine Reticular Formation; TN - Tonic Neurons
    ||
  • image p436fig12.30 The conscious ARTWORD, or cARTWORD, laminar cortical speech model simulates how future context can disambiguate noisy past speech sounds in such a way that the completed percept is consciously heard to proceed from past to future as a feature-item-list resonant wave propagates through time.
    || cARTWORD: Laminar cortical model macrocircuit (Grossberg, Kazerounian 2011) Simulates PHONEMIC RESTORATION: Cognitive Working Memory (processed item sequences) - [Excitatory-> inhibitory-> habituative-> adaptive filter-> adaptive filter-> adaptive filter with depletable synapse-> Acoustic [item, feature]
  • image p474fig12.70 The kind of model macrocircuit that was used in (Grossberg, Stone 1986) to explain lexical decision task data.
    || inputs-> A1 <-> A2 iconic sensory features <-> A3 item and order in sensory STM <-> A4 list parsing in STM (masking field) <-> A5 semantic network (self-feedback). [A4, A5] <-> V* visual object recognition system. M1-> [outputs, A1]. M1 <-> M2 iconic motor features <-> M3 item and order in motor STM. A2-> M2. A3-> M3.
  • image p481fig13.01 Macrocircuit of the functional stages and anatomical interpretations of the Cognitive-Emotional-Motor, or CogEM, model.
    || Drive-> hypothalamus value categories <-> amygdala incentive motivational learning-> Orbitofrontal cortex- object-value categories <-> sensory cortex- invariant object categories- conditioned reinforcer learning-> amygdala-> hypothalamus.
  • image p520fig14.02 Macrocircuit of the main brain regions, and connections between them, that are modelled in the unified predictive Adaptive Resonance Theory (pART) of cognitive-emotional and working memory dynamics. Abbreviations in red denote brain regions used in cognitive-emotional dynamics. Those in green denote brain regions used in working memory dynamics. Black abbreviations denote brain regions that carry out visual perception, learning and recognition of visual object categories, and motion perception, spatial representation and target tracking. Arrows denote non-adaptive excitatory synapses. Hemidiscs denote adaptive excitatory synapses. Many adaptive synapses are bidirectional, thereby supporting synchronous resonant dynamics among multiple cortical regions. The output signals from the basal ganglia that regulate reinforcement learning and gating of multiple cortical areas are not shown. Also not shown are output signals from cortical areas to motor responses. V1: striate, or primary, visual cortex; V2 and V4: areas of prestriate visual cortex; MT: Middle Temporal cortex; MST: Medial Superior Temporal area; ITp: posterior InferoTemporal cortex; ITa: anterior InferoTemporal cortex; PPC: Posterior Parietal Cortex; LIP: Lateral IntraParietal area; VPA: Ventral PreArcuate gyrus; FEF: Frontal Eye Fields; PHC: ParaHippocampal Cortex; DLPFC: DorsoLateral PreFrontal Cortex; HIPPO: hippocampus; LH: Lateral Hypothalamus; BG: Basal Ganglia; AMYG: AMYGdala; OFC: OrbitoFrontal Cortex; PRC: PeriRhinal Cortex; VPS: Ventral bank of the Principal Sulcus; VLPFC: VentroLateral PreFrontal Cortex. See the text for further details.
    ||
  • image p532fig14.08 Macrocircuit of the ARTSCENE Search neural model for learning to search for desired objects by using the sequences of already experienced objects and their locations to predict what and where the desired object is. V1 = First visual area or primary visual cortex; V2 = Second visual area; V4 = Fourth visual area; PPC = Posterior Parietal Cortex; ITp = posterior InferoTemporal cortex; ITa = anterior InferoTemporal cortex; MTL = Medial Temporal Lobe; PHC = ParaHippoCampal cortex; PRC = PeriRhinal Cortex; PFC = PreFrontal Cortex; DLPFC = DorsoLateral PreFrontal Cortex; VPFC = Ventral PFC; SC = Superior Colliculus.
    ||
  • image p580fig16.05 Macrocircuit of the GridPlaceMap model, which can learn both 2D grid cells and place cells in response to realistic trajectories of navigating rats using a hierarchy of SOMs with identical equations.
    || GridPlaceMap model: rate-based and spiking (Pilly, Grossberg 2012). Pre-wired 1D stripe cells, learns both 2D grid and place cells! Same laws for both; both select most frequent and energetic inputs. Place cells emerge gradually in response to developing grid cells. [place-> grid-> stripe] cells-> path integration-> vestibular signals
  • image p599fig16.35 Data (a) and simulations (b,c) about anatomically overlapping grid cell modules. (a) shows the anatomical distribution of grid cells belonging to different modules in one animal. DV location (mm) vs postrhinal border. (b) shows the simulated distribution of learned grid cell spacings from two stripe cell scales. frequency (%) vs grid spacing (cm). mu = [1, 0.6]. (c) shows what happens when half the cells respond with one rate and half another rate. (d) shows the same with three rates. (e-g) show spatial maps and autocorrelograms of grid cells that arise from the different rates in (d). [rate map, autocorrelogram] vs [score [1.07, 0.5, 0.67], spacing (cm) [23.58, 41, 63.64]].
    ||
  • image p612fig16.42 Macrocircuit of the main SOVEREIGN subsystems.
    || [reward input, drive input, drive representation (DR), visual working memory and planning system (VWMPS), visual form and motion system (VFMS), motor approach and orienting system (MAOS), visual input (VisIn), motor working memory and planning system (MWMPS), motor approach and orienting system (MAOS), motor plant (MotP), Proprioceptive Input (PropIn), Vestibular Input (VesIn), Environmental feedback (EnvFB). DR [incentive motivational learning-> [VWMPS, MWMPS], -> VFMS, -> MAOS], VWMPS [conditioned reinforcer learning-> DR, MAOS], VFMS [visual object categories-> VWMPS, reactive movement commands-> MAOS], MWMPS [conditioned reinforcer learning-> DR, planned movement commands-> MAOS], MAOS [motor map positions-> MWMPS, motor outflow-> MotP], VisIn-> VFMS, VesIn-> MAOS, EnvFB-> [VisIn, MotP, VesIn].
  • p190 Howell: [neural microcircuits, modal architectures] used in ART -
  • bottom-up filters | top-down expectations | purpose
    instar learning | outstar learning | p200fig05.13 Expectations focus attention: [in, out]star often used to learn the adaptive weights; top-down expectations select, amplify, and synchronize expected patterns of critical features, while suppressing unexpected features
    LGN Lateral Geniculate Nucleus | V1 cortical area | p192fig05.06 focus attention on expected critical features
    EC-III entorhinal stripe cells | CA1 hippocampal place cells | p600fig16.36 entorhinal-hippocampal system has properties of an ART spatial category learning system, with hippocampal place cells as the spatial categories
    auditory orienting arousal | auditory category | p215fig05.28 How a mismatch between bottom-up and top-down patterns can trigger activation of the orienting system A and, with it, a burst of nonspecific arousal to the category level. ART Matching Rule: TD mismatch can suppress a part of F1 STM pattern, F2 is reset if degree of match < vigilance
    auditory stream with/without [silence, noise] gaps | perceived sound continues? | p419fig12.17 The auditory continuity illusion: backwards in time - how does a future sound let a past sound continue through noise? Resonance!
    visual perception, learning and recognition of visual object categories | motion perception, spatial representation and target tracking | p520fig14.02 pART predictive Adaptive Resonance Theory. Many adaptive synapses are bidirectional, thereby supporting synchronous resonant dynamics among multiple cortical regions. The output signals from the basal ganglia that regulate reinforcement learning and gating of multiple cortical areas are not shown.
    red - cognitive-emotional dynamics
    green - working memory dynamics
    black - see [bottom-up, top-down] lists
    EEG with mismatch, arousal, and STM reset events | expected [P120, N200, P300] event-related potentials (ERPs) | p211fig05.21 Sequences of P120, N200, and P300 event-related potentials (ERPs) occur during oddball learning EEG experiments under conditions that ART predicted should occur during sequences of mismatch, arousal, and STM reset events, respectively.
    Cognitive | Emotional | p541fig15.02 nSTART neurotrophic Spectrally Timed Adaptive Resonance Theory: hippocampus can sustain a cognitive-emotional resonance that can support "the feeling of what happens" and knowing what event caused that feeling. Hippocampus enables adaptively timed learning that can bridge a trace conditioning gap, or other temporal gap, between Conditioned Stimulus (CS) and Unconditioned Stimulus (US).

    background colours in the table signify:
  • image p025fig01.16 (left panel) The main processing stages of the Cognitive-Emotional-Motor (CogEM) model have anatomical interpretations in terms of sensory cortex, amygdala, and prefrontal cortex. Chapter 13 will describe in greater detail how CS cues activate invariant object categories in the sensory cortex, value categories in the amygdala, and object-value categories in the prefrontal cortex, notably the orbitofrontal cortex. The amygdala is also modulated by internal drive inputs like hunger and satiety. (right panel) Anatomical data support this circuit, as do many neurophysiological data.
    || drive -> amygdala -> prefrontal cortex <-> sensory cortex -> amygdala. [visual, somatosensory, auditory, gustatory, olfactory] cortex -> [amygdala, Orbital Prefrontal Cortex]. amygdala -> Lateral Prefrontal Cortex
  • image p481fig13.01 Macrocircuit of the functional stages and anatomical interpretations of the Cognitive-Emotional-Motor, or CogEM, model.
    || Drive-> hypothalamus value categories <-> amygdala incentive motivational learning-> Orbitofrontal cortex- object-value categories <-> sensory cortex- invariant object categories- conditioned reinforcer learning-> amygdala-> hypothalamus.
  • image p483fig13.02 The object-value categories in the orbitofrontal cortex require converging specific inputs from the sensory cortex and nonspecific incentive motivational inputs from the amygdala in order to fire. When the orbitofrontal cortex fires, it can deliver top-down ART Matching Rule priming signals to the sensory cortical area by which it was activated, thereby helping to choose the active recognition categories there that have the most emotional support, while suppressing others, leading to attentional blocking of irrelevant cues.
    || Cognitive-Emotional-Motor (CogEM) model. Drive-> amygdala incentive motivational learning-> orbitofrontal cortex- needs converging cue and incentive inputs to fire <-> sensory cortex- conditioned reinforcer learning-> amygdala. CS-> sensory cortex. Motivated attention closes the cognitive-emotional feedback loop, focuses on relevant cues, and causes blocking of irrelevant cues.
  • image p483fig13.03 The predicted processing stages of CogEM have been supported by anatomical studies of connections between sensory cortices, amygdala, and orbitofrontal cortex.
    || Adapted from (Barbas 1995). sensory cortices = [visual, somatosensory, auditory, gustatory, olfactory]. sensory cortices-> amygdala-> orbital prefrontal cortex. sensory cortices-> orbital prefrontal cortex. [visual cortex, amygdala]-> lateral prefrontal cortex.
  • image p484fig13.04 The top-down feedback from the orbitofrontal cortex closes a feedback loop that supports a cognitive-emotional resonance. If this resonance can be sustained long enough, it enables us to have feelings at the same time that we experience the categories that caused them.
    || Cognitive-Emotional resonance. Basis of "core consciousness" and "the feeling of what happens". (Damasio 1999) derives heuristic version of CogEM model from his clinical data. Drive-> amygdala-> prefrontal cortex-> sensory cortex, resonance around the latter 3. How is this resonance maintained long enough to become conscious?
  • image p487fig13.11 The three main properties of CogEM that help to explain how attentional blocking occurs.
    || CogEM explanation of attentional blocking. Internal drive input <-> Conditioned reinforcer learning (self-recurrent) <-> Competition for STM <- Motor learning. 1. Sensory representations compete for limited capacity STM. 2. Previously reinforced cues amplify their STM via positive feedback. 3. Other cues lose STM via competition.
  • image p489fig13.13 (top row) If a positive ISI separates onset of a CS and US, then the CS can sample the consequences of the US during the time interval before it is inhibited by it. (bottom row) A CogEM simulation of the inverted-U in conditioning as a function of the ISI between CS and US.
    || Positive ISI and conditioning.
  • image p490fig13.15 The CogEM circuit is an ancient design that is found even in mollusks like Aplysia. See the text for details.
    || Aplysia (Buonomano, Baxter, Byrne, Neural Networks 1990; Grossberg, Behavioral and Brain Sciences 1983). Facilitator neuron ~ drive representation.
  • image p494fig13.19 (left column, top row) Secondary conditioning of both arousal and a specific response are now possible. (bottom row) The CogEM circuit may be naturally extended to include multiple drive representations and inputs. (right column, top row) The incentive motivational pathway is also conditionable, in order to enable motivational sets to be learned.
    || Secondary conditioning. Homology: conditionable incentive motivation. Multiple drive representations and inputs.
  • image p514fig13.44 Analog of the CogEM model in Figure 6.1 of (Damasio 1999).
    || (a) map of object X-> map of proto-self at inaugural instant-> [, map of proto-self modified]-> assembly of second-order map. (b) map of object X enhanced-> second-order map imaged.
  • image p523fig14.03 (a) The MOTIVATOR neural model generalizes CogEM by also including the basal ganglia. It can hereby explain and simulate complementary functions of the amygdala and basal ganglia (SNc) during conditioning and learned performance. The basal ganglia generate Now Print signals in response to unexpected rewards. These signals modulate learning of new associations in many brain regions. The amygdala supports motivated attention to trigger actions that are expected to occur in response to conditioned or unconditioned stimuli. Object Categories represent visual or gustatory inputs in anterior inferotemporal (ITA) and rhinal (RHIN) cortices, respectively. Value Categories represent the value of anticipated outcomes on the basis of hunger and satiety inputs, in amygdala (AMYG) and lateral hypothalamus (LH). Object-Value Categories resolve the value of competing perceptual stimuli in medial (MORB) and lateral (ORB) orbitofrontal cortex. The Reward Expectation Filter detects the omission or delivery of rewards using a circuit that spans ventral striatum (VS), ventral pallidum (VP), striosomal delay (SD) cells in the ventral striatum, the pedunculopontine nucleus (PPTN) and midbrain dopaminergic neurons of the substantia nigra pars compacta/ventral tegmental area (SNc/VTA). The circuit that processes CS-related visual information (ITA, AMYG, ORB) operates in parallel with a circuit that processes US-related visual and gustatory information (RHIN, AMYG, MORB). (b) Reciprocal adaptive connections between hypothalamus and amygdala enable amygdala cells to become learned value categories. The bottom region represents hypothalamic cells, which receive converging taste and metabolite inputs whereby they become taste-drive cells. Bottom-up signals from activity patterns across these cells activate competing value category, or US Value Representations, in the amygdala. A winning value category learns to respond selectively to specific combinations of taste-drive activity patterns and sends adaptive top-down priming signals back to the taste-drive cells that activated it. CS-activated conditioned reinforcer signals are also associatively linked to value categories. Adaptive connections end in (approximately) hemidiscs. See the text for details.
    ||
  • image p548fig15.16 Homologous recognition learning and reinforcement learning macrocircuits enable adaptively timed conditioning in the reinforcement learning circuit to increase inhibition of the orienting system at times when a mismatch in the recognition system would have reduced inhibition of it.
    || Homolog between ART and CogEM model, complementary systems. [Recognition, Reinforcement] learning vs [Attentional, Orienting] system. Reinforcement: timing, drive representation.
  • image p012fig01.08 A sigmoidal signal function is a hybrid signal that combines the best properties of [faster, same, slower]-than linear signals. It can suppress noise and store a partially contrast-enhanced activity pattern. slower-than-linear saturates pattern; approximately linear- preserves pattern and normalizes; faster-than-linear- noise suppression and contrast-enhancement.
    || Sigmoidal signal: a hybrid. (upper) saturates pattern- slower-than-linear; (middle) preserves pattern and normalizes- approximately linear. (lower) noise suppression and contrast enhancement- faster-than-linear.
  • image p078fig02.30 Choosing the adaptation level to achieve informational noise suppression.
    || Noise suppression. Attenuate Zero Spatial frequency patterns: no information. Ii vs i (flat line), xi vs i (flat line at zero)
    B >> C: Try B = (n - 1)*C or C/(B + C) = 1/n
    Choose a uniform input pattern (no distinctive features): All θi = 1/n
    xi = (B + C)*I/(A + I)*[θi -C/(B + C)] = 0 no matter how intense I is.
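    (Howell: a Python check of the noise-suppression choice B = (n-1)*C; a uniform pattern equilibrates to zero at any intensity, while a patterned input keeps its contrasts. A and C are illustrative.)

      import numpy as np
      def x_eq(I_pat, A, B, C):
          I = I_pat.sum()
          return (B + C)*I/(A + I) * (I_pat/I - C/(B + C))
      n, A, C = 5, 1.0, 1.0
      B = (n - 1)*C                          # makes C/(B+C) = 1/n
      print(np.round(x_eq(np.full(n, 100.0), A, B, C), 6))           # uniform -> all zero
      print(np.round(x_eq(np.array([1.0, 2.0, 9.0, 2.0, 1.0]), A, B, C), 3))  # contrasts survive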
  • image p078fig02.31 How noise suppression enables matching of bottom-up and top-down input patterns.
    || Noise suppression -> pattern matching. mismatch (out of phase) suppressed, match (in phase) amplifies pattern.
  • image p080fig02.33 An opposite-attracts rule during the development of intracellular connections can lead to a mature network that realizes informational noise suppression.
    || How do noise suppression parameters arise? Symmetry-breaking during morphogenesis? Opposites attract rule.
    Intracellular parameters C/B = 1/(n - 1) ↔ Intercellular parameters
    Predicts that intracellular excitatory and inhibitory saturation points can control the growth during development of intercellular excitatory and inhibitory connections.
  • image p080fig02.34 How to achieve informational noise suppression in a network with multiple parallel processing channels.
    || Symmetry-breaking: dynamics and anatomy.
    Dynamics:
    • excitatory range is amplified
    • inhibitory range is compressed
    Anatomy:
    • narrow on-center
    • broad off-surround
    Noise suppression: attenuates uniform patterns
    Contour direction: enhances pattern gradients
  • image p081fig02.36 Informational noise suppression in network with Gaussian on-center and off-surround function as contour detectors that are sensitive to ratio-contrast.
    || Noise suppression and contour detection.
    If B*sum[k=1 to n: Cki] <= D*sum[k=1 to n: Eki] then:
    • uniform patterns are suppressed
    • contrasts are selectively enhanced
    • contours are detected
    Ii vs i, xi vs i
    Responses are selective to [REFLECTANCE, SPATIAL SCALE], eg color [feature, surface] contours.
  • image p510fig13.39 Shunting competition and informational noise suppression in affective gated dipoles, plus back-propagating action potentials for teaching signals, enable the net normalized adaptive weights to be learned. They never saturate!
    || Learn net dipole output pattern. Opponent "decision" controls learning. Cf. competitive learning. Learning signal, opponent extinction.
  • image p009fig01.06 Primacy gradient of activity stored in working memory within a recurrent shunting on-center off-surround network. Rehearsal is controlled by a nonspecific rehearsal wave and self-inhibitory feedback of the item that is currently being rehearsed. Green = excitatory, red = inhibitory.
    || inputs? -> item and order WM storage -> competitive selection-> rehearsal wave -> outputs
  • image p024fig01.15 A REcurrent Associative Dipole, or READ, circuit is a recurrent shunting on-center off-surround network with habituative transmitter gates. Sensory cues sample it with LTM traces and thereby become conditioned reinforcers.
    ||
  • image p038fig01.25 The ART Matching Rule stabilizes real time learning using a [top-down, modulatory on-center, off-surround] network. Object attention is realized by such a network. See text for additional discussion.
    || ART Matching Rule [volition, categories, features]. [one, two] against one.
  • image p073fig02.22 An on-center off-surround network is capable of computing input ratios.
    || Computing with patterns.
  • How to compute the pattern-sensitive variable: θi = Ii / sum[k=1 to n: Ik]?
    Needs interactions! What type? θi = Ii / (Ii + sum[k≠i: Ik])
    Ii↑ ⇒ θi↑ excitation; Ik↑ ⇒ θi↓, k ≠ i inhibition
    On-center off-surround network.
  • image p074fig02.23 The equations for a shunting on-center off-surround network. Shunting terms lead to many beautiful and important properties of these networks, which are found ubiquitously, in one form or another, in all cellular tissues.
    || Shunting on-center off-surround network.
    Mass action: d[dt: xi] = -A*xi +(B - xi)*Ii -xi*sum[k≠i: Ik]
    Turn on unexcited sites; turn off excited sites.
    At equilibrium:
    0 = d[dt: xi] = -(A + Ii + sum[k≠i: Ik])*xi + B*Ii = -(A + I)*xi + B*Ii
    xi = B*Ii/(A + I) = θi*B*I/(A + I): no saturation!
    • Infinite dynamical range
    • Automatic gain control
    • Computes a ratio scale
    • Weber law
    x = sum[k=1 to n: xk] = B*I/(A + I) ≤ B: conserves total activity
    • NORMALIZATION
    • Limited capacity
    • Real-time probability
  • image p075fig02.24 The membrane equations of neurophysiology describe how cell voltages change in response to excitatory, inhibitory, and passive input channels. Each channel is described by a potential difference multiplied by a conductance. With the special choices shown in the lower right-hand corner, this equation defines a feedforward shunting on-center off-surround network.
    || Membrane equations of neurophysiology.
    C*d[dt: V] = (V(+) - V)*g(+) + (V(-) - V)*g(-) + (V(p) - V)*g(p)
    Shunting equation (not additive)
    V: voltage
    V(+), V(-), V(p): saturating voltages
    g(+), g(-), g(p): conductances
    V(+) = B, C = 1; V(-) = V(p) = 0; g(+) = Ii; g(-) = sum[k≠i: Ik];
    lower bound: V(-) = V(p) (silent inhibition); upper bound: V(+). (Howell: see p068fig02.14, Hodgkin-Huxley model.)
  • image p076fig02.25 An on-center off-surround network can respond to increasing on-center excitatory inputs without a loss of sensitivity. Instead, as the off-surround input increases, the region of a cell's greatest sensitivity shifts to higher input intensities.
  • image p076fig02.26 The mudpuppy retina exhibits the shift property that occurs in the feedforward shunting on-center off-surround network in Figure 2.25. As a result, its sensitivity also shifts in response to different background off-surrounds, and therefore exhibits no compression (dashed purple lines).
    || Mudpuppy retina neurophysiology.
    I center, J background
    a) Relative figure-to-ground
    b) Weber-Fechner: I*(A + J)^(-1)
    c) No hyperpolarization, SHUNT: Silent inhibition
    d) Shift property (Werblin 1970): xi(K,J) vs K = ln(I)
    Adaptation- sensitivity shifts for different backgrounds. NO COMPRESSION.
  • image p077fig02.27 A schematic of the on-center off-surround network that occurs in the mudpuppy retina, including three main cell types: receptors, horizontal cells, and bipolar cells.
    || Mechanism: cooperative-competitive dynamics.
    On-center off-surround (Kuffler 1953) cat retina
    Subtractive lateral inhibition (Hartline, Ratliff 1956/7+) limulus retina.
    R receptor -> H horizontal -> B bipolar (Werblin, Dowling, etal 1969+) mudpuppy retina.
  • image p080fig02.34 How to achieve informational noise suppression in a network with multiple parallel processing channels.
    || Symmetry-breaking: dynamics and anatomy.
    Dynamics:
    • excitatory range is amplified
    • inhibitory range is compressed
    Anatomy:
    • narrow on-center
    • broad off-surround
    Noise suppression: attenuates uniform patterns
    Contour direction: enhances pattern gradients
  • image p081fig02.35 The equilibrium activities of a shunting network with Gaussian on-center off-surround kernels are sensitive to the ratio-contrasts of the input patterns that they process. The terms in the denominator of the equilibrium activities accomplish this using the shunting on-center and off-surround terms.
    || Ratio-contrast detector. flat versus [Gaussian Cki, flattened Gaussian? Eki]
    d[dt: xi] = -A*xi + (B - xi)*sum[k=1 to n: Ik*Cki] - (xi + D)*sum[k=1 to n: Ik*Eki]
    Cki = C*e^(-μ*(k - i)^2), Eki = E*e^(-μ*(k - i)^2)
    At equilibrium: xi = I*sum[k=1 to n: θk*Fki] / (A + I*sum[k=1 to n: θk*Gki])
    Fki = B*Cki - D*Eki (weighted difference of Gaussians, DOG)
    Gki = Cki + Eki (sum of Gaussians, SOG)
    • Reflectance processing
    • Contrast normalization
    • Discount illuminant
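    (Howell: a Python sketch of this equilibrium with a narrow Gaussian on-center Cki and a broader, shallower off-surround Eki; doubling the input intensity barely changes the equilibrium pattern once I >> A, illustrating reflectance processing and discounting of the illuminant. All kernel widths and constants are illustrative guesses.)

      import numpy as np
      n, A, B, D = 30, 1.0, 2.0, 1.0
      k = np.arange(n)
      dist2 = (k[:, None] - k[None, :])**2
      Cki = 1.0*np.exp(-0.5*dist2)          # narrow on-center Gaussian
      Eki = 0.5*np.exp(-0.05*dist2)         # broad off-surround Gaussian
      F = B*Cki - D*Eki                     # weighted difference of Gaussians
      G = Cki + Eki                         # sum of Gaussians
      def x_eq(I_pat):
          I, theta = I_pat.sum(), I_pat/I_pat.sum()
          return I*(F @ theta)/(A + I*(G @ theta))
      step = np.where(k < n//2, 1.0, 3.0)   # a luminance step edge
      print("max change when illuminant doubles:",
            np.abs(x_eq(2*step) - x_eq(step)).max())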
  • image p081fig02.36 Informational noise suppression in network with Gaussian on-center and off-surround function as contour detectors that are sensitive to ratio-contrast.
    || Noise suppression and contour detection.
    If B*sum[k=1 to n: Cki] <= D*sum[k=1 to n: Eki] then:
    • uniform patterns are suppressed
    • contrasts are selectively enhanced
    • contours are detected
    Ii vs i, xi vs i
    Responses are selective to [REFLECTANCE, SPATIAL SCALE], eg color [feature, surface] contours.
  • image p106fig03.24 In response to the Synthetic Aperture Radar image (upper left corner), a shunting on-center off-surround network "discounts the illuminant" and thereby normalizes cell activities to compute feature contours, without causing saturation (upper right corner). Multiple-scale boundaries form in response to spatially coherent activities in the feature contours (lower left corner) and create the webs, or containers, into which the feature contours fill-in the final surface representations (lower right corner).
    || Do these ideas work on hard problems? SAR!
    [input image, feature contours, boundary contours, filled-in surface]
    Synthetic Aperture Radar: sees through weather. 5 orders of magnitude of power in radar return. Discounting the illuminant:
    • normalizes the image: preserves RELATIVE activities without SATURATION
    • shows individual PIXELS
    Boundaries complete between regions where normalized feature contrasts change; filling-in averages brightnesses within boundary compartments.
  • image p176fig04.53 The on-center off-surround network within position and across depth helps to explain why brighter Kanizsa squares look closer.
    || inhibition vs. depth. p176c1h0.25 "... to qualitatively understand how this example of proximity-luminance covariance works. It follows directly from the boundary pruning by surface contour feedback signals (Figure 4.51) that achieves complementary consistency and initiates figure-ground perception. ...". p176c1h0.45 "... these inhibitory signals are part of an off-surround network whose strength decreases as the depth difference increases between the surface that generates the signal and its recipient boundaries. ...". p176c1h0.8 "... Within FACADE theory, the perceived depth of a surface is controlled by the boundaries that act as its filling-in generators and barriers (Figure 3.22), since these boundaries select the depth-selective FIDOs within which filling-in can occur, and thereby achieve surface capture. These boundaries, in turn, are themselves strengthened after surface-to-boundary contour feedback eliminates redundant boundaries that cannot support successful filling-in (Figure 4.51). These surface contour feedback signals have precisely the properties that are needed to explain why brighter Kanizsa squares look closer! ..."
  • image p192fig05.05 ON and OFF cells in the LGN respond differently to the sides and ends of lines.
    || [ON, OFF]-center, [OFF, ON]-surround (respectively). OFF-center cells maximum response at line end (interior), ON-center cells maximum response along sides (exterior)
  • image p253fig06.02 After bottom-up surface inputs activate spatial attentional cells, they send top-down topographic excitatory signals back to the surface representations. This recurrent shunting on-center off-surround network contrast enhances larger attentional activities while approximately normalizing the total spatial attentional activity. A surface-shroud resonance hereby forms that selects an attentional shroud, enhances the perceived contrast of the attended surface (light blue region), and maintains spatial attention on it.
    || Surface-shroud resonance. perceptual surfaces -> competition -> spatial attention. (Carrasco, Penpeci-Talgar, and Eckstein 2000, Reynolds and Desimone 2003)
  • image p300fig08.12 A single flash activates a Gaussian receptive field across space whose maximum is chosen by a winner-take-all recurrent on-center off-surround network.
    || Gaussian receptive fields are sufficient! (Grossberg, Rudd 1992). Single flash. Suppose that a single flash causes a narrow peak of activity at the position where it occurs. It generates output signals through a Gaussian filter that produces a Gaussian activity profile at the next processing stage. A recurrent on-center off-surround network chooses the maximum activity and suppresses smaller activities. Winner-take-all
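    (Howell: a Python sketch of the two stages in this figure: a single flash filtered through a Gaussian receptive field, then a winner-take-all stage standing in for the recurrent on-center off-surround network's choice of the maximum. Positions and σ are illustrative.)

      import numpy as np
      n, sigma = 21, 3.0
      pos = np.arange(n)
      flash = np.zeros(n); flash[7] = 1.0                  # single flash at position 7
      kernel = np.exp(-0.5*((pos - n//2)/sigma)**2)        # Gaussian receptive field
      profile = np.convolve(flash, kernel, mode="same")    # Gaussian activity profile
      winner = np.where(profile == profile.max(), profile, 0.0)  # WTA choice
      print("chosen position:", int(profile.argmax()))     # 7: the flash location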
  • image p315fig08.35 The output signals from the directional grouping network obey the ART Matching Rule. They thereby select consistent motion directional signals while suppressing inconsistent ones, and do not distort the speed estimates that the spared cells code. The aperture problem is hereby solved by the same mechanism that dynamically stabilizes the learning of directional grouping cells.
    || How to select correct direction and preserve speed estimates? Prediction: Feedback from MSTv to MT- obeys ART Matching Rule; Top-down, modulatory on-center, off-surround network (Grossberg 1976, 1980; Carpenter, Grossberg 1987, 1991); Explains how directional grouping network can stably develop and how top-down directional attention can work. (Cavanagh 1992; Goner etal 1986; Sekuler, Ball 1977; Stelmach etal 1994). Directional grouping network (MSTv) <-> Directional long-range filter (MT). Modulatory on-center selects chosen direction and preserves speed. Off-surround inhibits incompatible directions.
  • image p340fig09.07 Log polar remapping from the retina to cortical area V1 and beyond converts expansion, translation, and spiral flows on the retina into parallel flows, with different orientations, on the cortical map.
    || Log polar remapping of optic flow. retina -> cortex. Any combination of expansion and circular motion centered on the fovea maps to cortex as a single direction. Retinal Cartesian coordinates (x,y) map to cortical polar coordinates (r,theta). This makes it easy to compute directional receptive fields in the cortex!
  • image p345fig09.15 Double opponent directional receptive fields in MT are capable of detecting the motion of objects relative to each other and their backgrounds.
    || Motion opponency in MT (Born, Tootell 1992). Motion opponent (Grossberg etal), Differential motion (Royden etal), Subtractive motion cells (Neumann etal). ON center directionally selective: [excit, inhibit]ed by motion in [one, opponent] direction. OFF surround directionally selective: [excit, inhibit]ed by motion in [opponent, center] direction.
  • image p354fig10.01 The laminar cortical circuit that realizes how we pay attention to an object sends signals from layer 6 of a higher cortical level to layer 6 of a lower cortical level and then back up to layer 4. This "folded feedback" circuit realizes a top-down, modulatory on-center, off-surround circuit that realizes the ART Matching Rule.
    || Top-down attention and folded feedback. Attentional signals also feed back into 6-to-4 on-center off-surround. 1-to-5-to-6 feedback path: Macaque (Lund, Booth 1975) cat (Gilbert, Wiesel 1979). V2-to-V1 feedback is on-center off-surround and affects layer 6 of V1 the most (Bullier etal 1996; Sandell, Schiller 1982). Attended stimuli enhanced, ignored stimuli suppressed. This circuit supports the predicted ART Matching Rule! [LGN, V[1,2][6->1]]
  • image p359fig10.06 Another, albeit indirect, pathway from LGN exists that can also excite layer 4 of V1. Why are these two pathways not redundant? The answer, ultimately, has to do with how the cortex learns, as well as with how it pays attention. See the text for details.
    || Another bottom-up input to layer 4: Why?? Layer 6-to-4 on-center off-surround (Grieve, Sillito 1991, 1995; Ahmed etal 1994, 1997). LGN projects to layers 6 and 4. Layer 6 excites spiny stellates in column above it. Medium range connections onto inhibitory neurons. 6-to-4 path acts as on-center off-surround.
  • image p359fig10.07 The two bottom-up pathways from LGN to layer 4 of V1 can together activate layer 4 and contrast-normalize layer 4 responses.
    || Bottom-up contrast normalization (Grossberg 1968, 1973; Sperling, Sondhi 1968; Heeger 1992; Douglas etal 1995; Shapley etal 2004). Together, direct LGN-to-4 path and 6-to-4 on-center off-surround provide contrast normalization if cells obey shunting or membrane equation dynamics.
  • image p360fig10.08 The bottom-up on-center off-surround from LGN-to-6-to-4 has a modulatory on-center because of its role in realizing the ART Matching Rule and, with it, the ability of the cortex to dynamically stabilize its learned memories.
    || Modulation of priming by 6-to-4 on-center (Stratford etal 1996; Callaway 1998). On-center 6-to-4 excitation is inhibited down to being modulatory (priming, subthreshold). On-center 6-to-4 excitation cannot activate layer 4 on its own. Clarifies need for direct path. Prediction: plays key role in stable grouping, development and learning. ART Matching Rule!
  • image p362fig10.11 Feedback between layer 2/3 to the layer 6-to-4-to-2/3 feedback loop chooses the strongest grouping in cases where there is more than one. If only one grouping exists, then the circuit can function very quickly in a feedforward manner. When multiple groupings exist, the cortex "runs as fast as it can" to select the one with the most evidence to support it using the self-normalizing inhibition in the layer 6-to-4 off-surround.
    || How is the final grouping selected? Folded feedback LGN-> 6-> 4-> 2/3. 1. Layer 2/3 groupings feed back into 6-to-4 on-center off-surround: a) direct layer 2/3 -to-6 path; b) can also go via layer 5 (Blasdel etal 1985; Kisvarday etal 1989). 2. Strongest grouping enhanced by its on-center. 3. Inputs to weaker groupings suppressed by off-surround. 4. Interlaminar feedback creates functional columns. Activities of conflicting groupings are reduced by self-normalizing inhibition, slowing processing; intracortical feedback selects and contrast-enhances the winning grouping, speeding processing.
  • image p364fig10.13 The bottom-up adaptive filter, intracortical grouping circuit, and intercortical top-down attentional circuit all use the same competitive decision circuit between layers 6 and 4, called the attention-preattention interface, with which to select the featural patterns that will be processed.
    || Bottom-up filters and intracortical grouping feedback use the same 6-to-4 decision circuit, LGN-> Vx[6,4,2/3]. competitive decision circuit, modulatory on-center off-surround network. Top-down intercortical attention also uses the same 6-to-4 decision circuit!
  • image p364fig10.14 This figure emphasizes how preattentive intracortical groupings and top-down intercortical attention share the same modulatory on-center, off-surround layer 4-to-6 decision circuit.
    || Explanation: grouping and attention share the same modulatory decision circuit. Layer 6-to-4-to-2/3 pathway shown; also a layer 6-to-1-to-2/3 path. Intercortical attention and intracortical feedback from groupings both act via a modulatory on-center off-surround decision circuit.
  • image p367fig10.15 Data (left column) and simulation (right column) of how attention prevents a masking stimulus from inhibiting the response to the on-center of the cell from which the recording was made.
    || Attention protects target from masking stimulus (Reynolds etal 1999; Grossberg, Raizada 2000).
  • image p375fig11.07 The 3D LAMINART model uses both monocular and binocular simple cells to binocularly fuse like image contrasts. The remainder of the model generates 3D boundary and surface representations of multiple kinds of experiments as well as of natural scenes.
    || 3D LAMINART model (Cao, Grossberg 2005). [left, right] eye cart [LGN, V1 blob [4->3B->2/3A], V2 thin stripe [4->2/3A], V4]. V1 blob [V1-4 monocular, V1 interior binocular] simple cells. [complex, simple, inhibitory] cells, on-center off-surround
  • image p376fig11.10 The 3D LAMINART model shows how the disparity filter can be integrated into the circuit that completes 3D boundary representations using bipole grouping cells. It also explains how surface contours can strengthen boundaries that succeed in generating closed filling-in domains.
    || 3D LAMINART model (Cao, Grossberg 2005). [left, right] eye cart [LGN, V1 blob [4->3B->2/3A] surface contour, V2 thin stripe (monocular surface) [4->2/3A], V2 interior [disynaptic inhibitory interneurons, bipole grouping cells, disparity filter, V4 binocular surface]. [complex, simple, inhibitory] cells, on-center off-surround
  • image p423fig12.20 The Spatial Pitch Network, or SPINET, model shows how a log polar spatial representation of the sound frequency spectrum can be derived from auditory signals occurring in time. The spatial representation allows the ARTSTREAM model to compute spatially distinct auditory streams.
    || SPINET model (Spatial Pitch Network) (Cohen, Grossberg, Wyse 1995). 1. input sound 2. Gamma-tone filter bank 3. Short-term average energy spectrum 4. MAP transfer function 5. On-center off-surround and rectification 6. Harmonic weighting 7. Harmonic summation and competition -> PITCH
  • image p448fig12.46 A Masking Field working memory is a multiple-scale self-similar recurrent shunting on-center off-surround network. It can learn list chunks that respond selectively to lists of item chunks of variable length that are stored in an item working memory at the previous processing stage. Chunks that code for longer lists (eg MY vs MYSELF) are larger, and give rise to stronger recurrent inhibitory neurons (red arrows).
    || How to code variable length lists? MASKING FIELDS code list chunks of variable length (Cohen, Grossberg 1986, 1987; Grossberg, Kazerounian 2011, 2016; Grossberg, Meyers 2000; Grossberg, Pearson 2008). Multiple-scale self-similar WM: masking field, adaptive filter. Variable length coding- masking fields select list chunks that are sensitive to WM sequences of variable length; Selectivity- larger cells selectively code longer lists; Asymmetric competition- larger cells can inhibit smaller cells more than conversely: Magic Number 7! Temporal order- different list chunks respond to the same items in different orders eg LEFT vs FELT.
  • image p564fig15.35 (a) A pair of recurrent shunting on-center off-surround networks for control of the fore limbs and hind limbs. (b) Varying the GO signal to these networks can trigger changes in movement gaits. See the text for details.
    ||
  • image p567fig15.38 (a) The Gated Pacemaker model for the control of circadian rhythms is a recurrent shunting on-center off-surround network whose excitatory feedback signals are gated by habituative transmitters. Tonic arousal signals energize the pacemaker. Diurnal (left) and nocturnal (right) pacemakers are determined by whether phasic light signals turn the pacemaker on or off. An activity-dependent fatigue signal prevents the pacemaker from becoming overly active for too long. (b) Two simulations of circadian activity cycles during different schedules of light (L) and dark (D). See the text for details.
    || sourceOn-> on-cells (recurrent) <-(-) (-)> off-cells (recurrent) <-sourceOff. on-cells-> activity-> off-cells. off-cells-> fatigue. Diurnal: sourceOn=[light, arousal]; sourceOff=arousal;. Nocturnal: sourceOn=arousal; sourceOff=[arousal, light];.
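    (Howell: a minimal Python sketch of a gated pacemaker: two recurrent shunting populations that inhibit each other, each with a slow habituative transmitter gating its excitatory feedback, plus a tonic arousal input. For suitably slow gates this kind of circuit alternates between on-cell and off-cell activity; every constant here is an illustrative guess, not a value from the model.)

      import numpy as np
      A, B, I = 1.0, 1.0, 0.5                 # decay, bound, tonic arousal
      H, K, L = 0.02, 1.0, 0.8                # slow habituative transmitter gates
      f = lambda w: np.clip(w, 0, None)**2 / (0.25 + np.clip(w, 0, None)**2)
      x = np.array([0.3, 0.1]); z = np.array([1.0, 1.0])
      dt = 0.05
      for step in range(6001):
          if step % 1000 == 0:
              print(f"t={step*dt:6.1f}  on={x[0]:.3f}  off={x[1]:.3f}  z={np.round(z, 2)}")
          fb = f(x)*z                                        # gated excitatory feedback
          dx = -A*x + (B - x)*(I + 2.0*fb) - x*3.0*fb[::-1]  # shunting on-center off-surround
          dz = H*(K - z) - L*f(x)*z                          # transmitter habituation
          x, z = x + dt*dx, z + dt*dz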
  • image p586fig16.16 In the place cell learning model of (Gorchetnikov, Grossberg 2007), three populations of five cells each of entorhinal grid cells (only two are shown), with different spatial periods, provide inputs to the model.
  • image p627tbl17.01 Homologs between reaction-diffusion and recurrent shunting cellular network models of development.
    || reaction-diffusion | recurrent shunting net
    activator | excitatory activity
    inhibitor | inhibitory activity
    morphogenic source density | inputs
    firing of morphogen gradient | contrast enhancement
    maintenance of morphogen gradient | short-term memory
    power or sigmoidal signal functions | power or sigmoidal signal functions
    on-center off-surround interactions via diffusion | on-center off-surround interactions via signals
    self-stabilizing distributions of morphogens if inhibitors equilibrate rapidly | short-term memory pattern if inhibitors equilibrate rapidly
    periodic pulses if inhibitors equilibrate slowly | periodic pulses if inhibitors equilibrate slowly
    regulation | adaptation
  • image p016fig01.11 A sufficiently big mismatch between a bottom-up input pattern and a top-down expectation can activate the orienting system, which triggers a burst of nonspecific arousal that can reset the recognition category that read out the expectation. In this way, unexpected events can reset short-term memory and initiate a search for a category that better represents the current situation.
    || [category- top-down (TD) expectation; Bottom-up (BU) input pattern] -> Feature pattern -> BU-TD mismatch -> orienting system -> non-specific arousal -> category.
  • image p038fig01.25 The ART Matching Rule stabilizes real time learning using a [top-down, modulatory on-center, off-surround] network. Object attention is realized by such a network. See text for additional discussion.
    || ART Matching Rule [volition, categories, features]. [one, two] against one.
  • image p052fig02.02 Feature-category resonances enable us to rapidly learn how to recognize objects without experiencing catastrophic forgetting. Attentive matching between bottom-up feature pattern inputs and top-down expectations prevents catastrophic forgetting by focussing object attention upon expected patterns of features, while suppressing outlier features that might otherwise have caused catastrophic forgetting if they too were learned.
    || Adaptive Resonance. Attended feature clusters reactivate bottom-up pathways. Activated categories reactivate their top-down pathways. Categories STM, Feature patterns STM. Feature-Category resonance [synchronize, amplify, prolong]s system response. Resonance triggers learning in bottom-up and top-down adaptive weights: adaptive resonance!
  • image p078fig02.31 How noise suppression enables matching of bottom-up and top-down input patterns.
    || Noise suppression -> pattern matching. mismatch (out of phase) suppressed, match (in phase) amplifies pattern.
  • image p079fig02.32 Matching amplifies the matched pattern due to automatic gain control. See terms I and J in the equation.
    || Substrate of resonance. Match (in phase) of BU and TD input patterns AMPLIFIES matched pattern due to automatic gain control by shunting terms. J = sum[i: Ji], I = sum[i: Ii], θi = (Ii + Ji)/(I + J)
    xi = (B + C)*(I + J)/(A + I + J)*[θi -C/(B + C)]
    Need top-down expectations to be MODULATORY.
  • image p087fig03.01 A macrocircuit of key visual processes (in green) and the cortical areas in which they primarily occur (in red), from the retina to the Prefrontal Cortex (PFC), including both the What and Where cortical streams. The [bottom-up, horizontal, and top-down] interactions help each of these processes to overcome computationally complementary processing deficiencies that they would experience without them, and also to read out top-down expectations that help to stabilize learning while they focus attention on salient objects and positions.
    || Emerging unified theory of visual intelligence. [What, Where] streams. Bottom-up and top-down interactions overcome COMPLEMENTARY processing deficiencies.
  • image p091fig03.04 A cross-section of the eye, and a top-down view of the retina, show how the blind spot and retinal veins can occlude the registration of light signals at their positions on the retina.
    || Eye: [optic nerve, ciliary body, iris, lens, pupil, cornea, sclera, choroid, retina]. Human retina: [fovea, blind spot, optic nerve]. See also cross-section of retinal layers.
  • image p163fig04.39 A schematic of the LAMINART model that explains key aspects of laminar visual cortical anatomy and dynamics. LGN -> V1 [6, 4, 2/3] -> V2 [6, 4, 2/3]
    || p163c1h0.6 "... The first article about laminar computing ... proposed how the laminar cortical model could process 2D pictures using bottom-up filtering and horizontal bipole grouping interactions (Grossberg, Mingolla, Ross 1997). In 1999, I was able to extend the model to also include top-down circuits for expectation and attention (Grossberg 1999) (right panel). Such a synthesis of laminar bottom-up, horizontal, and top-down circuits is characteristic of the cerebral cortex (left panel). I called it LAMINART because it began to show how properties of Adaptive Resonance Theory, or ART, notably the ART prediction about how top-down expectations and attention work, are realized by identical cortical cells and circuits. You can immediately see from the schematic laminar circuit diagram ... (right panel) that circuits in V2 seem to repeat circuits in V1, albeit with a larger spatial scale, despite the fact that V1 and V2 carry out different functions. How this anatomical similarity can coexist with functional diversity will be clarified in subsequent sections and chapters. It enables different kinds of biological intelligence to communicate seamlessly while carrying out their different psychological functions. ..."
  • image p192fig05.06 Bottom-up and top-down circuits between the LGN and cortical area V1. The top-down circuits obey the ART Matching Rule for matching with bottom-up input patterns and focussing attention on expected critical features.
    || Model V1-LGN circuits, version [1, 2]. retina -> LGN relay cells -> interneurons -> cortex [simple, endstopped] cells -> cortex complex cells
  • image p193fig05.08 The patterns of LGN activation and inhibition on the sides and ends of a line without the top-down feedback (A) and with it (C). The top-down distribution of excitation (+) and inhibition (-) are shown in (B).
    ||
  • image p199fig05.11 Instar learning enables a bottom-up adaptive filter to become selectively tuned to particular feature patterns. Such pattern learning needs adaptive weights that can either increase or decrease to match the featural activations that they filter.
    || Instar learning STM->LTM: need both increases and decreases in strength for the LTM pattern to learn the STM pattern
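    A one-function Python sketch of the instar law (learning rate and iteration count are assumptions): when the category cell is active, its incoming LTM weights track the STM feature pattern, increasing and decreasing as needed.
      def instar_step(z_j, x, y_j, lr=0.1):
          """Gated steepest descent: dz_ij/dt = lr * y_j * (x_i - z_ij).
          Weights move toward the STM pattern x only while the postsynaptic
          category cell y_j is active, and can both rise and fall."""
          return [z + lr * y_j * (xi - z) for z, xi in zip(z_j, x)]

      z = [0.5, 0.5, 0.5]                  # initial LTM weights (assumed)
      for _ in range(100):
          z = instar_step(z, [0.9, 0.1, 0.0], y_j=1.0)
      print(z)   # -> approx [0.9, 0.1, 0.0]: some weights rose, others fell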
  • image p200fig05.13 Instar and outstar learning are often used to learn the adaptive weights in the bottom-up filters and top-down expectations that occur in ART. The ART Matching Rule for object attention enables top-down expectations to select, amplify, and synchronize expected patterns of critical features, while suppressing unexpected features.
    || Expectations focus attention: feature pattern (STM), Bottom-Up adaptive filter (LTM), Category (STM), competition, Top-Down expectation (LTM); ART Matching Rule: STM before top-down matching, STM after top-down matching (attention!)
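    A sketch of the ART Matching Rule as a top-down, modulatory on-center, off-surround match (gains and thresholds are illustrative assumptions): the expectation alone only primes, but it amplifies expected bottom-up features and suppresses unexpected ones.
      def art_match(bu, td, gain=0.5):
          """If no expectation is active, bottom-up input passes through.
          Otherwise only features with both bottom-up and top-down support
          survive (amplified); unexpected features are suppressed; top-down
          input alone stays subthreshold (modulatory)."""
          if not any(td):
              return list(bu)
          return [b * (1 + gain * t) if b > 0 and t > 0 else 0.0
                  for b, t in zip(bu, td)]

      print(art_match([0.8, 0.6, 0.4], [1, 0, 1]))   # middle feature suppressed
      print(art_match([0.0, 0.0, 0.0], [1, 0, 1]))   # expectation alone: no firing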
  • image p211fig05.20 The PN and N200 event-related potentials are computationally complementary events that are computed within the attentional and orienting systems.
    || PN and N200 are complementary waves. PN [top-down, conditionable, specific] match; N200 [bottom-up, unconditionable, nonspecific] mismatch
  • image p214fig05.24 Learning of a top-down expectation must occur during bottom-up learning in the adaptive filter in order to be able to match the previously associated feature pattern with the one that is currently active.
    || Learning top-down expectations. When the code (green right triangle GRT) for X1 was learned at F2, GRT learned to read-out X1 at F1. [Bottom-Up, Top-Down] learning
  • image p214fig05.25 The sequence of events whereby a novel input pattern can activate a category which, in turn, reads out its learned top-down expectation to be matched against the input pattern. Error correction thus requires the use of a Match Detector that has properties of the Processing Negativity ERP.
    || How is an error corrected? During bottom-up learning, top-down learning must also occur so that the pattern that is read out top-down can be compared with the pattern that is activated by bottom-up inputs. Match detector: Processing Negativity ERP. 1. top-down, 2. conditionable, 3. specific, 4. match
  • image p214fig05.26 When a big enough mismatch occurs, the orienting system is activated and sends a burst of nonspecific arousal to the category level. This Mismatch Detector has properties of the N200 ERP.
    || Mismatch triggers nonspecific arousal. Mismatch at F1 elicits a nonspecific event at F2. Call this event nonspecific arousal. N200 ERP Naatanen etal: 1. bottom-up, 2. unconditionable, 3. nonspecific, 4. mismatch
  • image p215fig05.28 How a mismatch between bottom-up and top-down input patterns can trigger activation of the orienting system A and, with it, a burst of nonspecific arousal to the category level.
    || Mismatch -> inhibition -> arousal -> reset. BU input orienting arousal, BU+TD mismatch arousal and reset. ART Matching Rule: TD mismatch can suppress a part of F1 STM pattern, F2 is reset if degree of match < vigilance
  • image p220fig05.29 Vigilance is a gain parameter on inputs to the orienting system that regulates whether net excitation from bottom-up inputs or inhibition from activated categories will dominate the orienting system. If excitation wins, then a memory search for a better-matching category will occur. If inhibition wins, then the orienting system will remain quiet, thereby enabling resonance and learning to occur.
    || Vigilance control [resonate and learn, reset and search]. ρ is a sensitivity or gain parameter
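    A minimal sketch of the vigilance test (function and variable names are assumptions): ρ gains the bottom-up excitation of the orienting system against top-down inhibition from the matched pattern.
      def orienting_decision(bottom_up, matched, rho):
          """Reset if the analog match ratio falls below vigilance rho;
          otherwise the orienting system stays quiet and resonance proceeds."""
          match_ratio = sum(matched) / sum(bottom_up)
          return "resonate_and_learn" if match_ratio >= rho else "reset_and_search"

      print(orienting_decision([1, 1, 1, 1], [1, 1, 0, 0], rho=0.4))  # resonate
      print(orienting_decision([1, 1, 1, 1], [1, 1, 0, 0], rho=0.8))  # reset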
  • image p221fig05.30 When a predictive disconfirmation occurs, vigilance increases enough to drive a search for a more predictive category. If vigilance increases just enough to exceed the analog match between features that survive top-down matching and the entire bottom-up input pattern, then minimax learning occurs. In this case, the minimum amount of category generalization is given up to correct the predictive error.
    || Match tracking realizes minimax learning principle. Given a predictive error, vigilance increases just enough to trigger search and thus sacrifices the minimum generalization to correct the error ... and enables expert knowledge to be incrementally learned. predictive error -> vigilance increases just enough -> minimax learning
  • image p221fig05.31 A system like Fuzzy ARTMAP can learn to associate learned categories in one ART network with learned categories in a second ART network. Because both bottom-up and top-down interactions occur in both networks, a bottom-up input pattern to the first ART network can learn to generate a top-down output pattern from the second ART network.
    || Fuzzy ARTMAP. Match tracking realizes minimax learning principle: vigilance increases to just above the match ratio of prototype / exemplar, thereby triggering search
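    A sketch of match tracking (fuzzy AND as elementwise min; eps is an assumed small increment): a predictive error raises vigilance just above the current match ratio, triggering search while giving up the minimum generalization.
      def match_tracking(I, w, rho_baseline, prediction_correct, eps=1e-3):
          """Match ratio = |I AND w| / |I|. On a predictive error, vigilance
          rises to just above this ratio, so the current category is reset."""
          match_ratio = sum(min(a, b) for a, b in zip(I, w)) / sum(I)
          rho = rho_baseline if prediction_correct else match_ratio + eps
          return ("resonate" if match_ratio >= rho else "reset_and_search"), rho

      print(match_tracking([1, 1, 0, 0], [1, 0, 0, 0], 0.4, prediction_correct=True))
      print(match_tracking([1, 1, 0, 0], [1, 0, 0, 0], 0.4, prediction_correct=False))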
  • image p226fig05.35 I had shown in 1976 how a competitive learning or self-organizing map model could undergo catastrophic forgetting if the input environment was sufficiently dense and nonstationary, as illustrated by Figure 5.18. Later work with Gail Carpenter showed how, if the ART Matching Rule was shut off, repeating just four input patterns in the correct order could also cause catastrophic forgetting by causing superset recoding, as illustrated in Figure 5.36.
    || Code instability input sequences. D C A; B A; B C = ; |D|<|B|<|C|; where |E| is the number of features in the set E. Any set of input vectors that satisfy the above conditions will lead to unstable coding if they are periodically presented in the order ABCAD and the top-down ART Matching Rule is shut off.
  • image p246fig05.48 Microcircuits of the LAMINART model that I developed with Rajeev Raizada. See the text for details of how they integrate bottom-up adaptive filtering, horizontal bipole grouping, and top-down attentional matching that satisfies the ART Matching Rule.
    ||
  • image p252fig06.01 A surface-shroud resonance begins to form when the surface representations of objects bid for spatial attention. In addition to these topographic excitatory inputs, there is long-range inhibition of the spatial attention cells that determines which inputs will attract spatial attention.
    || Bottom-up spatial attention competition. [more, less] luminous perceptual surfaces -> competition -> spatial attention
  • image p253fig06.02 After bottom-up surface inputs activate spatial attentional cells, they send top-down topographic excitatory signals back to the surface representations. This recurrent shunting on-center off-surround network contrast enhances larger attentional activities while approximately normalizing the total spatial attentional activity. A surface-shroud resonance hereby forms that selects an attentional shroud, enhances the perceived contrast of the attended surface (light blue region), and maintains spatial attention on it.
    || Surface-shroud resonance. perceptual surfaces -> competition -> spatial attention. (Carrasco, Penpeci-Talgar, and Eckstein 2000, Reynolds and Desimone 2003)
  • image p258fig06.07 A top-down spotlight of attention can also be converted into a shroud. This process begins when the spotlight triggers surface filling-in within a region. Figure 6.8 shows how it is completed.
    || Reconciling spotlights and shrouds: top-down attentional spotlight becomes a shroud. spotlight of attention, surface filling-in
  • image p286fig07.04 Illusory contours persist longer than real contours because real contours have more inducers whose rebound at contour offset can cause faster boundary reset. Illusory contours also take longer to form than real contours, which explains the increasing portion of the curve.
    || Persistence data and simulations (Meyer, Ming 1988; Reynolds 1981). Increasing portion of curve is due to formation time of the illusory contour. Longer persistence is due to fewer bottom-up inducers of an illusory contour that has the same length as a real contour: only luminance-derived edges generate reset signals. When bottom-up inducers are inhibited by OFF cell rebounds, their offset gradually propagates to the center of the illusory contour.
  • image p286fig07.05 This figure shows the propagation through time of illusory contour offset from the rebounded cells that got direct inputs to the center of the contour.
    || Persistence data and simulations. Illusory contours persist longer than real contours (Meyer, Ming 1988; Reynolds 1981). When bottom-up inducers are inhibited by OFF cell rebounds, their offset gradually propagates to the center of the illusory contour.
  • image p315fig08.35 The output signals from the directional grouping network obey the ART Matching Rule. They thereby select consistent motion directional signals while suppressing inconsistent ones, and do not distort the speed estimates that the spared cells code. The aperture problem is hereby solved by the same mechanism that dynamically stabilizes the learning of directional grouping cells.
    || How to select the correct direction and preserve speed estimates? Prediction: Feedback from MSTv to MT- obeys ART Matching Rule; Top-down, modulatory on-center, off-surround network (Grossberg 1976, 1980; Carpenter, Grossberg 1987, 1991); Explains how the directional grouping network can stably develop and how top-down directional attention can work. (Cavanagh 1992; Goner etal 1986; Sekuler, Ball 1977; Stelmach etal 1994). Directional grouping network (MSTv) <-> Directional long-range filter (MT). Modulatory on-center selects chosen direction and preserves speed. Off-surround inhibits incompatible directions.
  • image p330fig08.52 Direction fields of the object frame (left column) and of the two dot "parts" (right column) show the correct motion directions after the peak shift top-down expectation acts.
    || Simulation of motion vector decomposition. [Larger scale (nearer depth), Small scale (farther depth)] vs [Down, Up]
  • image p331fig08.54 The simulated part directions of the rotating dot through time after the translational motion of the frame does its work via the top-down peak shift mechanism.
    || Cycloid. Motion directions of a single dot moving slowly along a cycloid curve through time.
  • image p354fig10.01 The laminar cortical circuit that realizes how we pay attention to an object sends signals from layer 6 of a higher cortical level to layer 6 of a lower cortical level and then back up to layer 4. This "folded feedback" circuit realizes a top-down, modulatory on-center, off-surround circuit that realizes the ART Matching Rule.
    || Top-down attention and folded feedback. Attentional signals also feed back into the 6-to-4 on-center off-surround. 1-to-5-to-6 feedback path: macaque (Lund, Boothe 1975), cat (Gilbert, Wiesel 1979). V2-to-V1 feedback is on-center off-surround and affects layer 6 of V1 the most (Bullier etal 1996; Sandell, Schiller 1982). Attended stimuli enhanced, ignored stimuli suppressed. This circuit supports the predicted ART Matching Rule! [LGN, V[1,2][6->1]]
  • image p356fig10.03 Laminar computing achieves at least three basic properties of visual processing that have analogs in all biologically intelligent behaviors. These properties may be found in all cortical circuits in specialized form.
    || What does Laminar Computing achieve? 1. Self-stabilizing development and learning; 2. Seamless fusion of a) pre-attentive automatic bottom-up processing, b) attentive task-selective top-down processing; 3. Analog coherence: Solution of Binding Problem for perceptual grouping without loss of analog sensitivity. Even the earliest visual cortical stages carry out active adaptive information processing: [learn, group, attention]ing
  • image p359fig10.05 Activation of V1 is initiated, in part, by direct excitatory signals from the LGN to layer 4 of V1.
    || How are layer 2/3 bipole cells activated? Direct bottom-up activation of layer 4. LGN -> V1 layer 4. Strong bottom-up LGN input to layer 4 (Stratford etal 1996; Chung, Ferster 1998). Many details omitted.
  • image p359fig10.06 Another, albeit indirect, pathway from LGN exists that can also excite layer 4 of V1. Why are these two pathways not redundant? The answer, ultimately, has to do with how cortex learns, as well as with how it pays attention. See the text for details.
    || Another bottom-up input to layer 4: why?? Layer 6-to-4 on-center off-surround (Grieve, Sillito 1991, 1995; Ahmed etal 1994, 1997). LGN projects to layers 6 and 4. Layer 6 excites spiny stellates in the column above it. Medium-range connections onto inhibitory neurons. The 6-to-4 path acts as an on-center off-surround.
  • image p359fig10.07 The two bottom-up pathways from LGN to layer 4 of V1 can together activate layer 4 and contrast-normalize layer 4 responses.
    || Bottom-up contrast normalization (Grossberg 1968, 1973; Sperling, Sondhi 1968; Heeger 1992; Douglas etal 1995; Shapley etal 2004). Together, direct LGN-to-4 path and 6-to-4 on-center off-surround provide contrast normalization if cells obey shunting or membrane equation dynamics.
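    A sketch of the claimed steady state (parameters A, B are assumptions): with shunting (membrane-equation) dynamics, the summed off-surround normalizes layer 4 activities, preserving input ratios while bounding total activity.
      def shunting_norm(inputs, A=1.0, B=1.0):
          """Steady state of dx_i/dt = -A*x_i + (B - x_i)*I_i - x_i*sum(I_k, k != i),
          which is x_i = B*I_i / (A + sum(I)): contrast normalization."""
          total = sum(inputs)
          return [B * I / (A + total) for I in inputs]

      print(shunting_norm([1, 2, 3]))      # ratios 1:2:3 kept, total bounded
      print(shunting_norm([10, 20, 30]))   # same ratios at 10x input intensity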
  • image p360fig10.08 The bottom-up on-center off-surround from LGN-to-6-to-4 has a modulatory on-center because of its role in realizing the ART Matching Rule and, with it, the ability of the cortex to dynamically stabilize its learned memories.
    || Modulation of priming by 6-to-4 on-center (Stratford etal 1996; Callaway 1998). On-center 6-to-4 excitation is inhibited down to being modulatory (priming, subthreshold). On-center 6-to-4 excitation cannot activate layer 4 on its own. Clarifies need for direct path. Prediction: plays key role in stable grouping, development and learning. ART Matching Rule!
  • image p364fig10.13 The bottom-up adaptive filter, intracortical grouping circuit, and intercortical top-down attentional circuit all use the same competitive decision circuit between layers 6 and 4, called the attention-preattention interface, with which to select the featural patterns that will be processed.
    || Bottom-up filters and intracortical grouping feedback use the same 6-to-4 decision circuit, LGN-> Vx[6,4,2/3]. competitive decision circuit, modulatory on-center off-surround network. Top-down intercortical attention also uses the same 6-to-4 decision circuit!
  • image p364fig10.14 This figure emphasizes how preattentive intracortical groupings and top-down intercortical attention share the same modulatory on-center, off-surround layer 6-to-4 decision circuit.
    || Explanation: grouping and attention share the same modulatory decision circuit. Layer 6-to-6-to-4-to-2/3 pathway shown; also a layer 6-to-1-to-2/3 path. Intercortical attention and intracortical feedback from groupings both act via a modulatory on-center off-surround decision circuit.
  • image p420fig12.18 The ARTSTREAM model explains and simulates the auditory continuity illusion as an example of a spectral-pitch resonance. Interactions of ART Matching Rule and asymmetric competition mechanisms in cortical strip maps explain how the tone selects the consistent frequency from the noise in its own stream while separating the rest of the noise into another stream.
    || ARTSTREAM model (Grossberg 1999; Grossberg, Govindarajan, Wyse, Cohen 2004). SPINET. Frequency and pitch strips. Bottom Up (BU) harmonic sieve. Top Down (TD) harmonic ART matching. Exclusive allocation. Learn pitch categories based on early harmonic processing. A stream is a Spectral-Pitch Resonance!
  • image p441fig12.38 The LTM Invariance Principle is realized if the relative sizes of the inputs to the list chunk level stay the same as more items are stored in working memory. This property, in turn, follows from shunting previously stored working memory activities when a new item occurs.
    || LTM Invariance Principle. Choose STM activities so that newly stored STM activities may alter the size of old STM activities without recoding their LTM patterns. In particular: new events do not change the relative activities of past event sequences, but may reduce their absolute activities. Why? Bottom-up adaptive filtering uses dot products: T(j) = sum[i=1 to n: x(i)*z(i,j)] = total input to v(j). The relative sizes of inputs to coding nodes v(j) are preserved. x(i) -> w*x(i), 0 < w <= 1, leaves all past ratios T(j)/T(k) unchanged.
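    A numeric check of this invariance (the weight matrix and patterns are arbitrary assumptions):
      Z = [[0.9, 0.2, 0.5],    # z(i,1): weights from 3 features to category 1
           [0.1, 0.8, 0.5]]    # z(i,2): weights to category 2

      def T(x, z_j):           # T(j) = sum[i: x(i)*z(i,j)], a dot product
          return sum(xi * z for xi, z in zip(x, z_j))

      x = [1.0, 0.4, 0.2]
      w = 0.5                  # shunting rescale, 0 < w <= 1, when a new item arrives
      xw = [w * xi for xi in x]
      print(T(x, Z[0]) / T(x, Z[1]), T(xw, Z[0]) / T(xw, Z[1]))  # equal ratios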
  • image p449fig12.47 This figure illustrates the self-similarity in a Masking Field of both its recurrent inhibitory connections (red arrows) and its top-down excitatory priming signals (green arrows) to the item chunk working memory.
    || Both recurrent inhibition and top-down excitatory priming are self-similar in a masking field. MYSELF <-> [MY, MYSELF]
  • image p453fig12.50 The ARTWORD perception cycle shows how sequences of items activate possible list chunks, which compete among each other and begin to send their top-down expectations back to the item working memory. An item-list resonance develops through time as a result.
    || ARTWORD perception cycle. (a) bottom-up activation (b) list chunk competition (c) item-list resonance (d) chunk reset due to habituative collapse.
  • image p483fig13.02 The object-value categories in the orbitofrontal cortex require converging specific inputs from the sensory cortex and nonspecific incentive motivational inputs from the amygdala in order to fire. When the orbitofrontal cortex fires, it can deliver top-down ART Matching Rule priming signals to the sensory cortical area by which it was activated, thereby helping to choose the active recognition categories there that have the most emotional support, while suppressing others, leading to attentional blocking of irrelevant cues.
    || Cognitive-Emotional-Motor (CogEM) model. Drive-> amygdala incentive motivational learning-> orbitofrontal cortex (needs converging cue and incentive inputs to fire) <-> sensory cortex- conditioned reinforcer learning-> amygdala. CS-> sensory cortex. Motivated attention closes the cognitive-emotional feedback loop, focuses on relevant cues, and causes blocking of irrelevant cues.
  • image p484fig13.04 The top-down feedback from the orbitofrontal cortex closes a feedback loop that supports a cognitive-emotional resonance. If this resonance can be sustained long enough, it enables us to have feelings at the same time that we experience the categories that caused them.
    || Cognitive-Emotional resonance. Basis of "core consciousness" and "the feeling of what happens". (Damasio 1999) derives heuristic version of CogEM model from his clinical data. Drive-> amygdala-> prefrontal cortex-> sensory cortex, resonance around the latter 3. How is this resonance maintained long enough to become conscious?
  • image p523fig14.03 (a) The MOTIVATOR neural model generalizes CogEM by also including the basal ganglia. It can hereby explain and simulate complementary functions of the amygdala and basal ganglia (SNc) during conditioning and learned performance. The basal ganglia generate Now Print signals in response to unexpected rewards. These signals modulate learning of new associations in many brain regions. The amygdala supports motivated attention to trigger actions that are expected to occur in response to conditioned or unconditioned stimuli. Object Categories represent visual or gustatory inputs in anterior inferotemporal (ITA) and rhinal (RHIN) cortices, respectively. Value Categories represent the value of anticipated outcomes on the basis of hunger and satiety inputs, in amygdala (AMYG) and lateral hypothalamus (LH). Object-Value Categories resolve the value of competing perceptual stimuli in medial (MORB) and lateral (ORB) orbitofrontal cortex. The Reward Expectation Filter detects the omission or delivery of rewards using a circuit that spans ventral striatum (VS), ventral pallidum (VP), striosomal delay (SD) cells in the ventral striatum, the pedunculopontine nucleus (PPTN) and midbrain dopaminergic neurons of the substantia nigra pars compacta/ventral tegmental area (SNc/VTA). The circuit that processes CS-related visual information (ITA, AMYG, ORB) operates in parallel with a circuit that processes US-related visual and gustatory information (RHIN, AMYG, MORB). (b) Reciprocal adaptive connections between hypothalamus and amygdala enable amygdala cells to become learned value categories. The bottom region represents hypothalamic cells, which receive converging taste and metabolite inputs whereby they become taste-drive cells. Bottom-up signals from activity patterns across these cells activate competing value categories, or US Value Representations, in the amygdala. A winning value category learns to respond selectively to specific combinations of taste-drive activity patterns and sends adaptive top-down priming signals back to the taste-drive cells that activated it. CS-activated conditioned reinforcer signals are also associatively linked to value categories. Adaptive connections end in (approximately) hemidiscs. See the text for details.
    ||
  • image p600fig16.36 The entorhinal-hippocampal system has properties of an ART spatial category learning system, with hippocampal place cells as the spatial categories. See the text for details.
    || Entorhinal-hippocampal interactions as an ART system. Hippocampal place cells as spatial categories. Angular head velocity-> head direction cells-> stripe cells- small scale 1D periodic code (ECIII) SOM-> grid cells- small scale 2D periodic code (ECII) SOM-> place cells- larger scale spatial map (DG/CA3)-> place cells (CA1)-> conjunctive-coding cells (EC V/VI)-> top-down feedback back to stripe cells- small scale 1D periodic code (ECIII). stripe cells- small scale 1D periodic code (ECIII)-> place cells (CA1).
  • image p613fig16.44 The main target position vector (TPV), difference vector (DV), and volitional GO computations in SOVEREIGN that bring together reactive and planned signals to control decision-making and action. See the text for details.
    || Reactive visual TPV (RVT), NETs (NETs), S-MV mismatch (SMVM), NETmv (NETmv), reactive visual TPV storage (RVTS), reactive DV1 (RD1), NET (NET), motivated what and where decisions (MWWD), Planned DV1 (PD1), tonic (Tonic), top-down readout mismatch (TDRM), Parvo gate (tonic) (PG), Orienting GOp offset (OGpO). RVT-> [NETs, RVTS], NETs-> [SMVM, NET], SMVM-> NET, NETmv-> SMVM, RVTS-> [NETs, RD1], NET-> [RD1, PD1, TDRM], MWWD-> PD1, PD1-> Tonic-> TDRM, PG-> NETs, OGpO-> [NETmv, PD1].
  • Grossberg 2021 p229c2h0.60 SMART computer simulations demonstrate that a good enough match of a top-down expectation with a bottom-up feature pattern generates an attentive resonance during which the spikes of active cells synchronize in the gamma frequency range of 20-70 Hz (Figure 5.40). Many labs have reported a link between attention and gamma oscillations in the brain, including two articles published in 2001, one from the laboratory of Robert Desimone when he was at the National Institute of Mental Health in Bethesda (Fries, Reynolds, Rorie, Desimone 2001), and the other from the laboratory of Wolf Singer in Frankfurt (Engel, Fries, Singer 2001). You ...
  • Grossberg 2021 p081c2h0.66 sub-section: How does one evolve a computational brain?
    The above discussion illustrates that no single step of theoretical derivation can derive a whole brain. One needs a method for deriving a brain in stages, or cycles, much as evolution has incrementally discovered ever more complex brains over many thousands of years. The following theoretical method has been successfully applied many times since I first used it in 1957. It embodies a kind of conceptual evolutionary process for deriving a brain.

    Because "brain evolution needs to achieve behavioural success", we need to start with data that embodiey indices of behavioral success. That is why, as illustrated in Figure 2.37 Modelling method and cycle, one starts with Behavioral Data from scores or hundreds of psychological experiments. These data are analyszed as the result of an individual adapting autonomously in real time to a changing world. This is the Arty of Modeling. It requires that one be able to infer from static data curves the dynamical processes that control individual behaviors occuring in real time. One of the hardest things that I teach to my students to do is "how to think in real time" to be able to carry out this speculative leap.

    Properly carried out, this analysis leads to the discovery of new Design Principles that are embodied by these behavioral processes. The Design Principles highlight the functional meaning of the data, and clarify how individual behaviors occurring in real time give rise to these static data curves.

    These principles are then converted into the simplest Mathematical Model using a method of minimal anatomies, which is a form of Occam's Razor.
  • p190 Howell: [neural microcircuits, modal architectures] used in ART -
    bottom-up filters | top-down expectations | purpose
    instar learning | outstar learning | p200fig05.13 Expectations focus attention: [in, out]star often used to learn the adaptive weights. Top-down expectations select, amplify, and synchronize expected patterns of critical features, while suppressing unexpected features
    LGN Lateral Geniculate Nucleus | V1 cortical area | p192fig05.06 focus attention on expected critical features
    EC-III entorhinal stripe cells | CA1 hippocampal place cells | p600fig16.36 entorhinal-hippocampal system has properties of an ART spatial category learning system, with hippocampal place cells as the spatial categories
    auditory orienting arousal | auditory category | p215fig05.28 How a mismatch between bottom-up and top-down patterns can trigger activation of the orienting system A and, with it, a burst of nonspecific arousal to the category level. ART Matching Rule: TD mismatch can suppress a part of F1 STM pattern, F2 is reset if degree of match < vigilance
    auditory stream with/without [silence, noise] gaps | perceived sound continues? | p419fig12.17 The auditory continuity illusion: backwards in time - how does a future sound let a past sound continue through noise? Resonance!
    visual perception, learning and recognition of visual object categories | motion perception, spatial representation and target tracking | p520fig14.02 pART predictive Adaptive Resonance Theory. Many adaptive synapses are bidirectional, thereby supporting synchronous resonant dynamics among multiple cortical regions. The output signals from the basal ganglia that regulate reinforcement learning and gating of multiple cortical areas are not shown. Colour code: red - cognitive-emotional dynamics; green - working memory dynamics; black - see [bottom-up, top-down] lists
    EEG with mismatch, arousal, and STM reset events | expected [P120, N200, P300] event-related potentials (ERPs) | p211fig05.21 Sequences of P120, N200, and P300 event-related potentials (ERPs) occur during oddball learning EEG experiments under conditions that ART predicted should occur during sequences of mismatch, arousal, and STM reset events, respectively
    Cognitive | Emotional | p541fig15.02 nSTART neurotrophic Spectrally Timed Adaptive Resonance Theory: hippocampus can sustain a Cognitive-Emotional resonance that can support "the feeling of what happens" and knowing what event caused that feeling. Hippocampus enables adaptively timed learning that can bridge a trace conditioning gap, or other temporal gap, between Conditioned Stimulus (CS) and Unconditioned Stimulus (US)

    background colours in the table signify:
  • image p009fig01.06 Primacy gradient of activity stored in working memory within a recurrent shunting on-center off-surround network. Rehearsal is controlled by a nonspecific rehearsal wave and self-inhibitory feedback of the item that is currently being rehearsed. Green = excitatory, red = inhibitory
    || inputs? -> item and order WM storage -> competitive selection -> rehearsal wave -> outputs
  • image p077fig02.27 A schematic of the on-center off-surround network that occurs in the mudpuppy retina, including three main cell types: receptors, horizontal cells, and bipolar cells.
    || Mechanism: cooperative-competitive dynamics.
    On-center off-surround (Kuffler 1953) cat retina
    Subtractive lateral inhibition (Hartline, Ratliff 1956/7+) limulus retina.
    R receptor -> H horizontal -> B bipolar (Werblin, Dowling, etal 1969+) mudpuppy retina.
  • image p100fig03.15 A fuzzy band of possible initial grouping orientations allows grouping to get started. Cooperative-competitive feedback via a hierarchical resolution of uncertainty chooses a sharp final grouping that has the most evidence to support it.
    || before choice: transient; after choice: equilibrium
  • image p108fig03.28 The watercolor illusion of Baingio Pinna 1987 can be explained using spatial competition between like-oriented boundary signals. This occurs at what I have called the First Competitive Stage. This is one stage in the brain ...
  • image p146fig04.25 Networks of simple, complex, and hypercomplex cells can create end cuts as an example of hierarchical resolution of uncertainty. See the text for details.
    || How are end cuts created? (Grossberg 1984) Two stages of short-range competition. 1st stage: Simple cells -> complex cells -> hypercomplex - endstopped complex. First competitive stage- across position, same orientation; Second competitive stage- same position, across orientation. -> cooperation.
  • image p148fig04.26 End cuts are formed during neon color spreading in the same way that they are formed at line ends.
    || End cut during neon color spreading.
  • FIRST competitive stage | SECOND competitive stage
    within orientation | across orientation
    across position | within position
    to generate end cuts.
  • image p149fig04.27 Bipole cells can form boundaries that interpolate end cuts, and use their cooperative-competitive interactions to choose the boundary groupings that have the most support from them.
    || Bipole cells: boundary completion. long-range cooperation & short-range inhibition: complete winning boundary groupings and suppress weaker boundaries.
  • image p161fig04.37 Kanizsa squares that form either collinearly to their inducers (left panel) or perpendicular to them (right panel) confirm predictions of the BCS boundary completion model.
    || Analog-sensitive boundary completion. contour strength vs Kanizsa square image. Increases with "support ratio" (Shipley, Kellman 1992). Inverted-U (Lesher, Mingolla 1993; cf Soriano, Spillmann, Bach 1994)(shifted gratings). p370h0.6 BCS = Boundary Contour System, FCS = Feature Contour System. p161c1h0.85 "... As predicted by the BCS, they found an Inverted-U in contour strength as a function of line density. ... This effect may be explained by the action of the short-range competition that occurs before the stage of long-range cooperative grouping by bipole cells (Figure 4.32). It is thus another example of the balance between cooperative and competitive mechanisms. ..."
  • image p198fig05.10 A competitive learning circuit learns to transform distributed feature patterns into selective responses of recognition categories.
    || Competitive learning and Self-Organizing Maps (SOMs). input patterns -> feature level (F1) -> adaptive filter (T=ZS) -> category level (F2)
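    A minimal competitive-learning step in Python (learning rate and initial weights are assumptions): the bottom-up adaptive filter picks a winning category, and only the winner's weights track the input, so repeated presentations tune categories selectively.
      def competitive_learning_step(x, Z, lr=0.2):
          """T_j = x . z_j (bottom-up adaptive filter); winner-take-all at F2;
          the winning weight vector moves toward x (instar learning)."""
          T = [sum(xi * zi for xi, zi in zip(x, z_j)) for z_j in Z]
          j = T.index(max(T))
          Z[j] = [z + lr * (xi - z) for z, xi in zip(Z[j], x)]
          return j

      Z = [[0.6, 0.4], [0.4, 0.6]]                 # two categories (assumed)
      for x in ([1.0, 0.0], [0.0, 1.0]) * 10:      # alternate two feature patterns
          competitive_learning_step(x, Z)
      print(Z)    # each category has become tuned to one of the patterns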
  • image p205fig05.18 How catastrophic forgetting can occur in a competitive learning or self-organizing map model due to basic properties of competition and associative learning.
    || Learning from pattern sequences: practicing a sequence of spatial patterns can recode all of them! When is learning stable? Input patterns cannot be too dense relative to the number of categories; either not too many distributed inputs relative to the number of categories, or not too many input clusters
  • image p226fig05.35 I had shown in 1976 how a competitive learning or self-organizing map model could undergo catastrophic forgetting if the input environment was sufficiently dense and nonstationary, as illustrated by Figure 5.18. Later work with Gail Carpenter showed how, if the ART Matching Rule was shut off, repeating just four input patterns in the correct order could also cause catastrophic forgetting by causing superset recoding, as illustrated in Figure 5.36.
    || Code instability input sequences. D C A; B A; B C = ; |D|<|B|<|C|; where |E| is the number of features in the set E. Any set of input vectors that satisfy the above conditions will lead to unstable coding if they are periodically presented in the order ABCAD and the top-down ART Matching Rule is shut off.
  • image p287fig07.07 Persistence increases with distance between a target and a masking stimulus due to weakening of the spatial competition in the first competitive stage of hypercomplex cells.
    || Persistence data and simulations. Persistence increases with distance between a target and a masking stimulus (Farrell, Pavel, Sperling 1990). There is less spatial competition from the masker to the target when they are more distant, hence the target is more persistent.
  • image p364fig10.13 The bottom-up adaptive filter, intracortical grouping circuit, and intercortical top-down attentional circuit all use the same competitive decision circuit between layers 6 and 4, called the attention-preattention interface, with which to select the featural patterns that will be processed.
    || Bottom-up filters and intracortical grouping feedback use the same 6-to-4 decision circuit, LGN-> Vx[6,4,2/3]. competitive decision circuit, modulatory on-center off-surround network. Top-down intercortical attention also uses the same 6-to-4 decision circuit!
  • image p437fig12.33 Item and Order working memory models explain free recall data, as well as many other psychological and neurobiological data, by simulating how temporal series of events are stored as evolving spatial patterns of activity at content-addressable item categories. The categories with the largest activities are rehearsed first, and self-inhibit their activity as they do so in order to prevent them from being rehearsed perseveratively. The laws whereby the items are stored in working memory obey basic design principles that ensure that list categories, or chunks, of sequences of stored items can be stably remembered.
    || Working memory models: item and order, or competitive queuing (Grossberg 1978; Houghton 1990; Page, Norris 1998). Event sequence in time stored as an evolving spatial pattern of activity. Primacy gradient of working memory activation stores correct temporal order at content-addressable cells. The maximally activated cell population is performed next when a rehearsal wave is turned on. Output signal from the chosen cell population inhibits its own activity to prevent perseveration: inhibition of return. Iterate until the entire sequence is performed.
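    A sketch of the readout loop (activity values are assumptions): a primacy gradient plus select-max, perform, self-inhibit reproduces the stored order.
      def rehearse(wm):
          """Item-and-Order / competitive queuing readout: wm maps items to
          stored activities (primacy gradient). The most active item is
          performed next, then self-inhibits (inhibition of return)."""
          wm = dict(wm)
          order = []
          while any(v > 0 for v in wm.values()):
              item = max(wm, key=wm.get)     # rehearsal wave reads out the max
              order.append(item)
              wm[item] = 0.0                 # self-inhibition prevents perseveration
          return order

      print(rehearse({"A": 0.9, "B": 0.7, "C": 0.5}))   # -> ['A', 'B', 'C']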
  • image p488fig13.12 (left column) How incentive motivational feedback amplifies activity of a sensory cortical cell population. (right column) A sensory cortical cell population whose activity is amplified by incentive motivational feedback can suppress the activities of less activated populations via self-normalizing recurrent competitive interactions.
    || Motivational feedback and blocking. (left) sensory input CS, STM activity without motivational feedback, STM activity with motivational feedback. (right) STM suppressed by competition, STM amplified by (+) feedback.
  • image p510fig13.39 Shunting competition and informational noise suppression in affective gated dipoles, plus back-propagating action potentials for teaching signals, enable the net normalized adaptive weights to be learned. They never saturate!
    || Learn net dipole output pattern. Opponent "decision" controls learning. Cf. competitive learning. Learning signal, opponent extinction.
  • p001 Chapter 1 Overview - From Complementary Computing and Adaptive Resonance to conscious awareness
  • p250 Chapter 6 Conscious seeing and invariant recognition - Complementary cortical streams coordinate attention for seeing and recognition
  • p289 Chapter 8 How we see and recognize object motion - Visual form and motion perception obey complementary laws
  • p337 Chapter 9 Target tracking, navigation, and decision-making - Visual tracking and navigation obey complementary laws
  • image p029tbl01.01 Some pairs of complementary processing streams.
    ||
    visual boundary: interblob stream V1-V2-V4 | visual surface: blob stream V1-V2-V4
    visual boundary: interblob stream V1-V2-V4 | visual motion: magno stream V1-MT-MST
    WHAT stream | WHERE stream
    perception & recognition: inferotemporal & prefrontal areas | space & action: parietal & prefrontal areas
    object tracking: MT interbands & MSTv | optic flow navigation: MT+ bands & MSTd
    motor target position: motor & parietal cortex | volitional speed: basal ganglia
  • image p030tbl01.02 The What and Where cortical processing streams obey complementary laws. These laws enable the What stream to rapidly and stably learn invariant object categories without experiencing catastrophic forgetting, while the Where stream learns labile spatial and action representations to control actions that are aimed towards these objects.
    ||
    WHAT | WHERE
    spatially-invariant object learning and recognition | spatially-variant reaching and movement
    fast learning without catastrophic forgetting | continually update sensory-motor maps and gains
    IT InferoTemporal Cortex | PPC Posterior Parietal Cortex

    | What | Where
    matching | excitatory | inhibitory
    learning | match | mismatch
  • image p087fig03.01 A macrocircuit of key visual processes (in green) and the cortical areas in which they primarily occur (in red), from the retina to the Prefrontal Cortex (PFC), including both the What and Where cortical streams. The [bottom-up, horizontal, and top-down] interactions help each of these processes to overcome computationally complementary processing deficiencies that they would experience without them, and also to read out top-down expectations that help to stabilize learning while they focus attention on salient objects and positions.
    || Emerging unified theory of visual intelligence. [What, Where] streams. Bottom-up and top-down interactions overcome COMPLEMENTARY processing deficiencies.
  • image p094fig03.07 The processes of boundary completion and surface filling-in are computationally complementary.
    ||
    Boundary completion | Surface filling-in
    outward | inward
    oriented | unoriented
    insensitive to direction of contrast | sensitive to direction of contrast
  • image p174fig04.51 The same feedback circuit that ensures complementary consistency between boundaries and surfaces also, automatically, initiates figure-ground separation! See the text for details.
    || before feedback: [V1 -> V2 pale stripe -> V2 thin stripe, "attention pointers" (Cavanagh etal 2010)]; after feedback: [V1 + V2 thin stripe] -> V2 pale stripe via contrast-sensitive [excitation, inhibition] for depths [1, 2] -> object recognition
  • image p176fig04.53 The on-center off-surround network within position and across depth helps to explain why brighter Kanizsa squares look closer.
    || inhibition vs. depth. p176c1h0.25 "... to qualitatively understand how this example of proximity-luminance covariance works. It follows directly from the boundary pruning by surface contour feedback signals (Figure 4.51) that achieves complementary consistency and initiates figure-ground perception. ...". p176c1h0.45 "... these inhibitory signals are part of an off-surround network whose strength decreases as the depth difference increases between the surface that generates the signal and its recipient boundaries. ...". p176c1h0.8 "... Within FACADE theory, the perceived depth of a surface is controlled by the boundaries that act as its filling-in generators and barriers (Figure 3.22), since these boundaries select the depth-selective FIDOs within which filling-in can occur, and thereby achieve surface capture. These boundaries, in turn, are themselves strengthened after surface-to-boundary contour feedback eliminates redundant boundaries that cannot support successful filling-in (Figure 4.51). These surface contour feedback signals have precisely the properties that are needed to explain why brighter Kanizsa squares look closer! ..."
  • image p211fig05.20 The PN and N200 event-related potentials are computationally complementary events that are computed within the attentional and orienting systems.
    || PN and N200 are complementary waves. PN [top-down, conditionable, specific] match; N200 [bottom-up, unconditionable, nonspecific] mismatch
  • image p267fig06.14 Feedback from object surfaces to object boundaries uses surface contours. This feedback assures complementary consistency and enables figure-ground separation. A corollary discharge of the surface contours can be used to compute salient object feature positions.
    || Perceptual consistency and figure-ground separation.
  • image p314fig08.34 The VISTARS model for visually-based spatial navigation. It uses the Motion BCS as a front end and feeds its output signals into two computationally complementary cortical processing streams for computing optic flow and target tracking information.
    || VISTARS navigation model (Browning, Grossberg, Mingolla 2009). Use FORMOTION model as front end for higher-level navigational circuits: input natural image sequences -> estimate heading (MT+)-MSTd -> additive processing -> estimate object position (MT-)-MSTv direction and speed subtractive processing -> Complementary Computing. [optic flow navigation, object tracking]
  • image p523fig14.03 (a) The MOTIVATOR neural model generalizes CogEM by also including the basal ganglia. It can hereby explain and simulate complementary functions of the amygdala and basal ganglia (SNc) during conditioning and learned performance. The basal ganglia generate Now Print signals in response to unexpected rewards. These signals modulate learning of new associations in many brain regions. The amygdala supports motivated attention to trigger actions that are expected to occur in response to conditioned or unconditioned stimuli. Object Categories represent visual or gustatory inputs in anterior inferotemporal (ITA) and rhinal (RHIN) cortices, respectively. Value Categories represent the value of anticipated outcomes on the basis of hunger and satiety inputs, in amygdala (AMYG) and lateral hypothalamus (LH). Object-Value Categories resolve the value of competing perceptual stimuli in medial (MORB) and lateral (ORB) orbitofrontal cortex. The Reward Expectation Filter detects the omission or delivery of rewards using a circuit that spans ventral striatum (VS), ventral pallidum (VP), striosomal delay (SD) cells in the ventral striatum, the pedunculopontine nucleus (PPTN) and midbrain dopaminergic neurons of the substantia nigra pars compacta/ventral tegmental area (SNc/VTA). The circuit that processes CS-related visual information (ITA, AMYG, ORB) operates in parallel with a circuit that processes US-related visual and gustatory information (RHIN, AMYG, MORB). (b) Reciprocal adaptive connections between hypothalamus and amygdala enable amygdala cells to become learned value categories. The bottom region represents hypothalamic cells, which receive converging taste and metabolite inputs whereby they become taste-drive cells. Bottom-up signals from activity patterns across these cells activate competing value categories, or US Value Representations, in the amygdala. A winning value category learns to respond selectively to specific combinations of taste-drive activity patterns and sends adaptive top-down priming signals back to the taste-drive cells that activated it. CS-activated conditioned reinforcer signals are also associatively linked to value categories. Adaptive connections end in (approximately) hemidiscs. See the text for details.
    ||
  • image p548fig15.16 Homologous recognition learning and reinforcement learning macrocircuits enable adaptively timed conditioning in the reinforcement learning circuit to increase inhibition of the orienting system at times when a mismatch in the recognition system would have reduced inhibition of it.
    || Homolog between ART and CogEM model, complementary systems. [Recognition, Reinforcement] learning vs [Attentional, Orienting] system. Reinforcement: timing, drive representation.
  • p353 Chapter 10 Laminar computing by cerebral cortex - Towards a unified theory of biological and artificial intelligence
  • image p030fig01.20 A schematic cross-section of a slice of laminar neocortex whose cells are organized in a characteristic way in six layers, which themselves may be organized into distinct sublaminae. The computational paradigm of Laminar Computing attempts to show how different parts of neocortex can represent and control very different kinds of behavior - including vision, speech, and cognition - using specializations of the same canonical laminar cortical design.
    || Projection fibres: Cortico[spinal, bulbar, pontine, striate, reticular, etc]; Thalamocortical fibres; Diffuse cortical afferent fibres: [nonspecific thalamocortical, Cholinergic, Monoaminergic]; Corticocortical efferents; Projection [cell, fibre]; Corticocortical efferent terminals.
  • image p141fig04.19 A laminar cortical circuit for computing binocular disparities in layer 3B of V1 at binocular simple cells. These cells add positionally disparate inputs from like-polarized monocular simple cells (layer 4 of V1). Binocular simple cells at each position that are sensitive to opposite polarities then add their outputs at complex cells in layer 2/3. Chapter 10 will explain how these laminar circuits work in greater detail.
    || Laminar cortical circuit for complex cells. [left, right] eye.
    V1 layer | description
    2/3A | complex cells
    3B | binocular simple cells
    4 | monocular simple cells
  • image p163fig04.39 A schematic of the LAMINART model that explains key aspects of laminar visual cortical anatomy and dynamics. LGN -> V1 [6, 4, 2/3] -> V2 [6, 4, 2/3]
    || p163c1h0.6 "... The first article about laminar computing ... proposed how the laminar cortical model could process 2D pictures using bottom-up filtering and horizontal bipole grouping interactions (Grossberg, Mingolla, Ross 1997). In 1999, I was able to extend the model to also include top-down circuits for expectation and attention (Grossberg 1999)(right panel). Such a synthesis of laminar bottom-up, horizontal, and top-down circuits is characteristic of the cerebral cortex (left panel). I called it LAMINART because it began to show how properties of Adaptive Resonance Theory, or ART, notably the ART prediction about how top-down expectations and attention work, are realized by identified cortical cells and circuits. You can immediately see from the schematic laminar circuit diagram ... (right panel) that circuits in V2 seem to repeat circuits in V1, albeit with a larger spatial scale, despite the fact that V1 and V2 carry out different functions. How this anatomical similarity can coexist with functional diversity will be clarified in subsequent sections and chapters. It enables different kinds of biological intelligence to communicate seamlessly while carrying out their different psychological functions. ..."
  • image p174fig04.52 An example of how the 3D LAMINART model can transform the two monocular images of the random dot stereogram in the top row into the three depth-separated surface representations in the bottom row.
    || Stereogram surface percepts: surface lightnesses are segregated in depth (Fang and Grossberg 2009). [left, right] inputs, [far, fixation, near] planes. Contrast with algorithms that just compute disparity matches and let computer code build the surface, eg (Marr, Poggio, etal 1974).
  • image p182fig04.58 LAMINART model processing stages that are sufficient to explain many percepts of transparency, including those summarized in Figure 4.57.
    || [left, right] eye, [LGN, V1 [6, 4, 3B, 2/3 A], V2 [4, 2/3]], [mo, bi]nocular cart [simple, complex] cells, [excita, inhibi]tory cart [connection, cell]s.
  • image p230fig05.38 The Synchronous Matching ART, or SMART, model includes spiking neurons in a laminar cortical hierarchy. I developed SMART with my PhD student Massimiliano Versace. By unlumping LAMINART to include spiking neurons, finer details of neurodynamics, such as the existence of faster gamma oscillations during good enough matches, and slower beta oscillations during bad enough mismatches, could be shown as emergent properties of network interactions.
    || Second order thalamus -> specific thalamic nucleus -> thalamic reticular nucleus -> neocortical laminar circuit [6II, 6I, 5, 2/3, 1] -> Higher order cortex. Similar for First order thalamus -> First order cortex, with interconnection to Second order, nonspecific thalamic nucleus
  • image p231fig05.39 The SMART hypothesis testing and learning cycle predicts that vigilance increases when a mismatch in subcortical regions like the nonspecific thalamus activates the nucleus basalis of Meynert which, in turn, broadcasts a burst of the neurotransmitter acetylcholine, or ACh, to deeper cortical layers. Due to the way in which LAMINART proposes that cortical matching and mismatching occurs, this ACh burst can increase vigilance and thereby trigger a memory search. See the text for details.
    || [BU input, [, non]specific thalamic nucleus, thalamic reticular nucleus, neocortical laminar circuit] cart [Arousal, Reset, Search, Vigilance]
  • image p232fig05.41 (a)-(c). The sequence of interlaminar events that SMART predicts during a mismatch reset. (d) Some of the compatible neurophysiological data.
    || Mismatch causes layer 5 dendritic spikes that trigger reset. (a) Arousal causes increase in nonspecific thalamic nuclei firing rate and layer 5 dendritic and later somatic spikes (Larkum and Zhu 2002, Williams and Stuart 1999) (b) Layer 5 spikes reach layer 4 via layer 6i and inhibitory neurons (Lund and Boothe 1975, Gilbert and Wiesel 1979) (c) Habituative neurotransmitters in layer 6i shift the balance of active cells in layer 4 (Grossberg 1972, 1976) (d) Dendritic stimulation fires layer 5 (Larkum and Zhu 2002); stimulation of apical dendrites of nonspecific thalamus.
  • image p246fig05.48 Microcircuits of the LAMINART model that I developed with Rajeev Raizada. See the text for details of how they integrate bottom-up adaptive filtering, horizontal bipole grouping, and top-down attentional matching that satisfies the ART Matching Rule.
    ||
  • image p248fig05.49 This circuit of the LAMINART model helps to explain properties of Up and Down states during slow wave sleep, and how disturbances in ACh dynamics can disrupt them.
    ||
  • image p356fig10.03 Laminar computing achieves at least three basic properties of visual processing that have analogs in all biologically intelligent behaviors. These properties may be found in all cortical circuits in specialized form.
    || What does Laminar Computing achieve? 1. Self-stabilizing development and learning; 2. Seamless fusion of a) pre-attentive automatic bottom-up processing, b) attentive task-selective top-down processing; 3. Analog coherence: Solution of Binding Problem for perceptual grouping without loss of analog sensitivity. Even the earliest visual cortical stages carry out active adaptive information processing: [learn, group, attention]ing
  • image p357fig10.04 Laminar Computing achieves its properties by computing in a new way that synthesizes the best properties of feedforward and feedback interactions, analog and digital computations, and preattentive and attentive learning. The property of analog coherence enables coherent groupings and decisions to form without losing sensitivity to the amount of evidence that supports them.
    || Laminar Computing: a new way to compute. 1. Feedforward and feedback: a) fast feedforward processing when data are unambiguous (eg Thorpe etal), b) slower feedback chooses among ambiguous alternatives [self-normalizing property, real-time probability theory], c) a self-organizing system that trades certainty against speed: goes beyond Bayesian models! 2. Analog and Digital: Analog Coherence combines the stability of digital with the sensitivity of analog. 3. Preattentive and Attentive Learning: reconciles the differences of (eg) Helmholtz and Kanizsa, "A preattentive grouping is its own attentional prime!"
  • image p362fig10.11 Feedback from layer 2/3 into the layer 6-to-4-to-2/3 feedback loop chooses the strongest grouping in cases where there is more than one. If only one grouping exists, then the circuit can function very quickly in a feedforward manner. When multiple groupings exist, the cortex "runs as fast as it can" to select the one with the most evidence to support it using the self-normalizing inhibition in the layer 6-to-4 off-surround.
    || How is the final grouping selected? Folded feedback LGN-> 6-> 4-> 2/3. 1. Layer 2/3 groupings feed back into 6-to-4 on-center off-surround: a) direct layer 2/3 -to-6 path; b) can also go via layer 5 (Blasdel etal 1985; Kisvarday etal 1989). 2. Strongest grouping enhanced by its on-center. 3. Inputs to weaker groupings suppressed by off-surround. 4. Interlaminar feedback creates functional columns. Activities of conflicting groupings are reduced by self-normalizing inhibition, slowing processing; intracortical feedback selects and contrast-enhances the winning grouping, speeding processing.
  • image p363fig10.12 The same laminar circuit design repeats in V1 and V2, albeit with specializations that include longer horizontal grouping axons and figure-ground separation interactions.
    || V2 repeats V1 circuitry at larger spatial scale, LGN-> V1[6,4,2/3]-> V2[6,4,2/3]. V2 layer 2/3 horizontal axons longer-range than in V1 (Amir etal 1993). Therefore, longer-range groupings can form in V2 (Von der Heydt etal 1984)
  • image p375fig11.07 The 3D LAMINART model uses both monocular and binocular simple cells to binocularly fuse like image contrasts. The remainder of the model generates 3D boundary and surface representations of multiple kinds of experiments as well as of natural scenes.
    || 3D LAMINART model (Cao, Grossberg 2005). [left, right] eye cart [LGN, V1 blob [4->3B->2/3A], V2 thin stripe [4->2/3A], V4]. V1 blob [V1-4 monocular, V1 interior binocular] simple cells. [complex, simple, inhibitory] cells, on-center off-surround
  • image p376fig11.10 The 3D LAMINART model shows how the disparity filter can be integrated into the circuit that completes 3D boundary representations using bipole grouping cells. It also explains how surface contours can strengthen boundaries that succeed in generating closed filling-in domains.
    || 3D LAMINART model (Cao, Grossberg 2005). [left, right] eye cart [LGN, V1 blob [4->3B->2/3A] surface contour, V2 thin stripe (monocular surface) [4->2/3A], V2 interior [disynaptic inhibitory interneurons, bipole grouping cells, disparity filter, V4 binocular surface]. [complex, simple, inhibitory] cells, on-center off-surround
  • image p378fig11.12 How monocular and binocular information are combined in V1 and V2 in the 3D LAMINART model.
    || Model utilizes monocular information. [left, right] eye cart V1-[4 monocular simple, 3B binocular simple, complex2/3A [mo,bi]nocular] cells, V2-4 binocular complex cells. black = monocular cells, blue = binocular cells. In V2, monocular inputs add to binocular inputs along the line of sight and contribute to depth perception.
  • image p379fig11.13 How the 3D LAMINART model explains DaVinci stereopsis. All the stages of boundary and surface formation are color coded to clarify their explanation. Although each mechanism is very simple, when all of them act together, the correct depthful surface representation is generated. See the text for details.
    || DaVinci stereopsis (Nakayama, Shimojo 1990). An emergent property of the previous simple mechanisms working together. [V1 binocular boundaries -> V2 initial boundaries -> V2 final boundaries -> V4 surface] pair [Binocular match: boundaries of thick bar -> Add monocular boundaries along lines-of-sight -> Line-of-sight inhibition kills weaker vertical boundaries -> 3D surface percept not just a disparity match!] pair [Binocular match: right edge of thin and thick bars -> Strongest boundaries: binocular and monocular boundaries add -> Vertical boundaries from monocular left edge of thin bar survive -> Filling-in contained by connected boundaries]. cart [very near, near, fixation plane, far, very far]
  • image p380fig11.14 The model explanation of DaVinci stereopsis when the input stimuli have opposite contrast polarities.
    || Polarity-reversed Da Vinci stereopsis (Nakayama, Shimojo 1990). Same explanation! (... as Figure 11.13 ...) [V1 binocular boundaries -> V2 initial boundaries -> V2 final boundaries -> V4 surface] cart [very near, near, fixation plane, far, very far]
  • image p395fig11.34 A comparison of the properties of other rivalry models with those of the 3D LAMINART model (surrounded by red border). Significantly, only 3D LAMINART explains both stable vision and rivalry (green border).
    || Comparison of rivalry models
  • image p401fig11.41 The 3D LAMINART model proposes how angle cells and disparity-gradient interact through learning to generate 3D representations of slanted objects.
    || 3D LAMINART model. [LGN, V1, V2, V4] Four key additions: 1. Angle cells - tuned to various angles; 2. Disparity-gradient cells - tuned to disparity gradients in the image; 3. weights from [angle to disparity-gradient] cells - learned while viewing 3D image; Colinear grouping between [angle to disparity-gradient] cells - disambiguates ambiguous groupings.
  • image p402fig11.44 3D scenic reconstruction of the image in Figure 11.43 by the 3D LAMINART model.
    || Disparity [5, 6, 8, 10, 11, 14]: images of objects in common depth planes
  • image p436fig12.30 The conscious ARTWORD, or cARTWORD, laminar cortical speech model simulates how future context can disambiguate noisy past speech sounds in such a way that the completed percept is consciously heard to proceed from past to future as a feature-item-list resonant wave propagates through time.
    || cARTWORD: Laminar cortical model macrocircuit (Grossberg, Kazerounian 2011) Simulates PHONEMIC RESTORATION: Cognitive Working Memory (processed item sequences) - [Excitatory-> inhibitory-> habituative-> adaptive filter-> adaptive filter-> adaptive filter with depletable synapse-> Acoustic [item, feature]
  • image p444fig12.42 The LIST PARSE laminar cortical model of working memory and list chunking that I published with Lance Pearson in 2008 simulated the Averbeck etal data in Figure 12.41, as in the left column of the figure. It also simulated cognitive data about working memory storage by human subjects. See the text for details.
    || LIST PARSE: Laminar cortical model of working memory and list chunking (Grossberg, Pearson 2008). Simulates data about: [immediate, delayed, continuous] distractor free recall; immediate serial recall; and variable-speed sequential performance of motor acts. [velocity, acceleration] vs time (ms) from recall cue.
  • image p445fig12.43 The LIST PARSE laminar cortical Cognitive Working Memory circuit, that is proposed to occur in ventrolateral prefrontal cortex, is homologous to the LAMINART circuit that models aspects of how visual cortex sees. The Motor Working Memory, VITE Trajectory Generator, and Variable-Rate Volitional Control circuits model how other brain regions, including dorsolateral prefrontal cortex, motor cortex, cerebellum, and basal ganglia, interact with the Cognitive Working Memory to control working memory storage and variable-rate performance of item sequences.
    || List parse circuit diagram. Connectivity convention. sequence chunks [<- BU filter, TD expectation ->] working memory. Working memory and sequence chunking circuit is homologous to visual LAMINART circuit!
  • image p498fig13.22 (left column, top row) Adaptive filtering and conditioned arousal are both needed to regulate what cues can learn to activate particular space-time patterns. These developments lead inexorably to basic cognitive abilities, as embodied in the 3D LAMINART models for 3D vision and figure-ground perception (Chapter 11) and the 3D ARTSCAN SEARCH model for invariant object learning, recognition, and 3D search (Chapter 6). (right column, top row) Conditioned arousal enables only emotionally important cues to activate a motivationally relevant space-time pattern. (bottom row) Conditioned arousal and drive representations arise naturally from the unlumping of avalanche circuits to make them selective to motivationally important cues. The MOTIVATOR model is a natural outcome of this unlumping process (this chapter).
    || (top) Adaptive filtering and Conditioned arousal. Towards Cognition: need to filter inputs to the command cell. Towards Emotion: important signals turn arousal ON and OFF. (bottom) Conditioned arousal and Drive representations. Competition between conditioned arousal sources at drive representations, eg amygdala.
  • image p038fig01.25 The ART Matching Rule stabilizes real time learning using a [top-down, modulatory on-center, off-surround] network. Object attention is realized by such a network. See text for additional discussion.
    || ART Matching Rule [volition, categories, features]. [one, two] against one.
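    A tiny numerical illustration of the "[one, two] against one" arithmetic (my sketch, with invented unit weights): a top-down expectation excites its on-center features and nonspecifically inhibits all features, so alone it is merely modulatory; a feature receiving bottom-up input plus top-down support wins "two against one", while a bottom-up feature outside the expectation loses "one against one".
        import numpy as np

        def art_match(bottom_up, expectation, td_active=True):
            td_on = expectation if td_active else np.zeros_like(expectation)
            td_off = 1.0 if td_active else 0.0        # nonspecific off-surround
            return np.maximum(bottom_up + td_on - td_off, 0.0)

        bu  = np.array([1.0, 1.0, 0.0])   # input contains features 0 and 1
        exp = np.array([1.0, 0.0, 1.0])   # expectation predicts features 0 and 2
        print(art_match(bu, exp, td_active=False))   # [1. 1. 0.] BU alone fires
        print(art_match(bu, exp))                    # [1. 0. 0.] attention focuses on the match
        print(art_match(np.zeros(3), exp))           # [0. 0. 0.] TD alone is only modulatory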
  • image p163fig04.39 A schematic of the LAMINART model that explains key aspects of laminar visual cortical anatomy and dynamics. LGN -> V1 [6, 4, 2/3] -> V2 [6, 4, 2/3]
    || p163c1h0.6 "... The first article about laminar computing ... proposed how the laminar cortical model could process 2D pictures using bottom-up filtering and horizontal bipole grouping interactions (Grossberg, Mingolla, Ross 1997). In 1999, I was able to extend the model to also include top-down circuits for expectation and attention (Grossberg 1999)(right panel). Such a synthesis of laminar bottom-up, horizontal, and top-down circuits is characteristic of the cerebral cortex (left panel). I called it LAMINART because it began to show how properties of Adaptive Resonance Theory, or ART, notably the ART prediction about how top-down expectations and attention work, are realized by identical cortical cells and circuits. You can immediately see from the schematic laminar circuit diagram ... (right panel) that circuits in V2 seem to repeat circuits in V1, albeit with a larger spatial scale, despite the fact that V1 and V2 carry out different functions. How this anatomical similarity can coexist with functional diversity will be clarified in subsequent sections and chapters. It enables different kinds of biological intelligence to communicate seamlessly while carrying out their different psychological functions. ..."
  • image p192fig05.06 Bottom-up and top-down circuits between the LGN and cortical area V1. The top-down circuits obey the ART Matching Rule for matching with bottom-up input patterns and focussing attention on expected critical features.
    || Model V1-LGN circuits, version [1, 2]. retina -> LGN relay cells -> interneurons -> cortex [simple, endstopped] cells -> cortex complex cells
  • image p200fig05.13 Instar and outstar learning are often used to learn the adaptive weights in the bottom-up filters and top-down expectations that occur in ART. The ART Matching Rule for object attention enables top-down expectations to select, amplify, and synchronize expected patterns of critical features, while suppressing unexpected features.
    || Expectations focus attention: feature pattern (STM), Bottom-Up adaptive filter (LTM), Category (STM), competition, Top-Down expectation (LTM); ART Matching Rule: STM before top-down matching, STM after top-down matching (attention!)
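    The [in, out]star pair can be sketched in a few lines (my illustration; learning rates and sizes invented). Instar learning tunes a category's bottom-up weights toward the feature pattern that activates it; outstar learning tunes its top-down weights so the category can read out that pattern as an expectation. Both are gated by category activity, so weights change only when the category is active.
        import numpy as np

        rng = np.random.default_rng(0)
        x = np.array([0.8, 0.2, 0.0, 0.6])   # feature pattern (STM at F1)
        w_bu = rng.uniform(size=4)            # bottom-up adaptive filter (LTM)
        w_td = rng.uniform(size=4)            # top-down expectation (LTM)
        y, lr = 1.0, 0.5                      # winning category activity, learning rate
        for _ in range(20):
            w_bu += lr * y * (x - w_bu)       # instar: learn what activates me
            w_td += lr * y * (x - w_td)       # outstar: learn what I expect to read out
        print(np.round(w_bu, 3), np.round(w_td, 3))   # both converge toward x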
  • image p207fig05.19 The ART hypothesis testing and learning cycle. See the text for details about how the attentional system and orienting system interact in order to incorporate learning of novel categories into the corpus of already learned categories without causing catastrophic forgetting.
    ||
  • image p211fig05.21 Sequences of P120, N200, and P300 event-related potentials occur during oddball learning EEG experiments under conditions that ART predicted should occur during sequences of mismatch, arousal, and STM reset events, respectively.
    || ERP support for mismatch-mediated reset: event-related potentials: human scalp potentials. ART predicted correlated sequences of P120-N200-P300 Event Related Potentials during oddball learning. P120 mismatch; N200 arousal/novelty; P300 STM reset. Confirmed in (Banquet and Grossberg 1987)
  • image p215fig05.28 How a mismatch between bottom-up and top-down input patterns can trigger activation of the orienting system A and, with it, a burst of nonspecific arousal to the category level.
    || Mismatch -> inhibition -> arousal -> reset. BU input orienting arousal, BU+TD mismatch arousal and reset. ART Matching Rule: TD mismatch can suppress a part of F1 STM pattern, F2 is reset if degree of match < vigilance
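    A minimal sketch of that vigilance test (mine; fuzzy-ART-style min matching, values invented): the top-down prototype masks the F1 pattern, and if the matched fraction of the bottom-up input falls below vigilance, the orienting system resets the active category.
        import numpy as np

        def match_or_reset(I, w, vigilance):
            matched = np.minimum(I, w)               # TD expectation masks F1 STM
            ratio = matched.sum() / I.sum()          # degree of match
            return ("resonate" if ratio >= vigilance else "reset", round(ratio, 3))

        I = np.array([1.0, 1.0, 0.0, 1.0])           # bottom-up input
        w = np.array([1.0, 0.0, 0.0, 1.0])           # active category prototype
        print(match_or_reset(I, w, vigilance=0.5))   # ('resonate', 0.667)
        print(match_or_reset(I, w, vigilance=0.8))   # ('reset', 0.667)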
  • image p221fig05.31 A system like Fuzzy ARTMAP can learn to associate learned categories in one ART network with learned categories in a second ART network. Because both bottom-up and top-down interactions occur in both networks, a bottom-up input pattern to the first ART network can learn to generate a top-down output pattern from the second ART network.
    || Fuzzy ARTMAP. Match tracking realizes minimax learning principle: vigilance increases to just above the match ratio of prototype / exemplar, thereby triggering search
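    Match tracking itself is nearly one line (sketch mine; the epsilon is invented): after a predictive error, vigilance is raised just above the current match ratio, so the category that just resonated now fails the vigilance test and memory search begins.
        import numpy as np

        I = np.array([1.0, 1.0, 0.0, 1.0])        # current input
        w = np.array([1.0, 0.0, 0.0, 1.0])        # chosen category prototype
        ratio = np.minimum(I, w).sum() / I.sum()  # match ratio = 2/3
        rho = 0.5                                 # baseline vigilance: test passes
        # suppose the category's predicted output category is WRONG:
        rho = ratio + 1e-3                        # match tracking raises vigilance
        print(ratio >= rho)                       # False -> reset and memory search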
  • image p226fig05.35 I had shown in 1976 how a competitive learning or self-organizing map model could undergo catastrophic forgetting if the input environment was sufficiently dense and nonstationary, as illustrated by Figure 5.18. Later work with Gail Carpenter showed how, if the ART Matching Rule was shut off, repeating just four input patterns in the correct order could also cause catastrophic forgetting by causing superset recoding, as illustrated in Figure 5.36.
    || Code instability input sequences. D C A; B A; B C = ; |D|<|B|<|C|; where |E| is the number of features in the set E. Any set of input vectors that satisfy the above conditions will lead to unstable coding if they are periodically presented in the order ABCAD and the top-down ART Matching Rule is shut off.
  • image p226fig05.36 Column (a) shows catastrophic forgetting when the ART Matching Rule is not operative. It is due to superset recoding. Column (b) shows how category learning quickly stabilizes when the ART Matching Rule is restored.
    || Stable and unstable learning, superset recoding
  • image p228fig05.37 A macrocircuit of the neurotrophic Spectrally Timed ART, or nSTART, model. I developed nSTART with my PhD student Daniel Franklin. It proposes how adaptively timed learning in the hippocampus, bolstered by Brain Derived Neurotrophic Factor, or BDNF, helps to ensure normal memory consolidation.
    || habituative gates, CS, US, Thalamus (sensory cortex, category learning, conditioned reinforcer learning, adaptively timed learning and BDNF), Amygdala (incentive motivation learning), Hippocampus (BDNF), Prefrontal Cortex (attention), Pontine nuclei, Cerebellum (adaptively timed motor learning)
  • image p230fig05.38 The Synchronous Matching ART, or SMART, model includes spiking neurons in a laminar cortical hierarchy. I developed SMART with my PhD student Massimiliano Versace. By unlumping LAMINART to include spiking neurons, finer details of neurodynamics, such as the existence of faster gamma oscillations during good enough matches, and slower beta oscillations during bad enough mismatches, could be shown as emergent properties of network interactions.
    || Second order thalamus -> specific thalamic nucleus -> thalamic reticular nucleus -> neocortical laminar circuit [6ll, 6l, 5, 2/3, 1] -> Higher order cortex. Similar for First order thalamus -> First order cortex, with interconnection to Second order, nonspecific thalamic nucleus
  • image p240fig05.44 When an algebraic exemplar model is realized using only local computations, it starts looking like an ART prototype model.
    || How does the model know which exemplars are in category A? BU-TD learning. How does a NOVEL test item access category A?
  • image p241fig05.45 The 5-4 category structure is one example of how an ART network learns the same kinds of categories as human learners. See the text for details.
    || 5-4 Category structure. A1-A5: closer to the (1 1 1 1) prototype; B1-B4: closer to the (0 0 0 0) prototype
  • image p246fig05.48 Microcircuits of the LAMINART model that I developed with Rajeev Raizada. See the text for details of how they integrate bottom-up adaptive filtering, horizontal bipole grouping, and top-down attentional matching that satisfied the ART Matching Rule.
    ||
  • image p254fig06.03 These interactions of the ARTSCAN Search model enable it to learn to recognize and name invariant object categories. Interactions between spatial attention in the Where cortical stream, via surface-shroud resonances, and object attention in the What cortical stream, that obeys the ART Matching Rule, coordinate these learning, recognition, and naming processes.
    || Retinal image -> On & OFF cell contrast normalization (retina/LGN) -> polarity [, in]sensitive contrast enhancement (V1) -> object [boundary (V2), surface (V2/V4)] -> surface contour (V2); What stream categories: volition control (BG). object boundary (V2) <-> view (ITp) <-> view integrator (ITp) <-> object (ITa) <-> [object-value (ORB), value (Amyg)] <-> name (PFC)
  • image p315fig08.35 The output signals from the directional grouping network obey the ART Matching Rule. They thereby select consistent motion directional signals while suppressing inconsistent ones, and do not distort the speed estimates that the spared cells code. The aperture problem is hereby solved by the same mechanism that dynamically stabilizes the learning of directional grouping cells.
    || How to select correct direction and preserve speed estimates? Prediction: Feedback from MSTv to MT- obeys ART Matching Rule; Top-down, modulatory on-center, off-surround network (Grossberg 1976, 1980; Carpenter, Grossberg 1987, 1991); Explains how directional grouping network can stably develop and how top-down directional attention can work. (Cavanagh 1992; Goner etal 1986; Sekuler, Ball 1977; Stelmach etal 1994). Directional grouping network (MSTv) <-> Directional long-range filter (MT). Modulatory on-center selects chosen direction and preserves speed. Off-surround inhibits incompatible directions.
  • image p316fig08.36 How the directional grouping network, notably properties of the ART Matching Rule, enables a small set of amplified feature tracking signals at the ends of a line to select consistent directions in the line interior, while suppressing inconsistent directions.
    || Motion capture by directional grouping feedback. Directional grouping network (MSTv) <-> Directional long-range filter (MT). It takes longer to capture ambiguous motion signals in the line interior as the length of the line increases cf (Castet etal 1993)
  • image p354fig10.01 The laminar cortical circuit that realizes how we pay attention to an object sends signals from layer 6 of a higher cortical level to layer 6 of a lower cortical level and then back up to layer 4. This "folded feedback" circuit is a top-down, modulatory on-center, off-surround circuit that realizes the ART Matching Rule.
    || Top-down attention and folded feedback. Attentional signals also feed back into 6-to-4 on-center off-surround. 1-to-5-to-6 feedback path: Macaque (Lund, Boothe 1975) cat (Gilbert, Wiesel 1979). V2-to-V1 feedback is on-center off-surround and affects layer 6 of V1 the most (Bullier etal 1996; Sandell, Schiller 1982). Attended stimuli enhanced, ignored stimuli suppressed. This circuit supports the predicted ART Matching Rule! [LGN, V[1,2][6->1]]
  • image p360fig10.08 The bottom-up on-center off-surround from LGN-to-6-to-4 has a modulatory on-center because of its role in realizing the ART Matching Rule and, with it, the ability of the cortex to dynamically stabilize its learned memories.
    || Modulation of priming by 6-to-4 on-center (Stratford etal 1996; Callaway 1998). On-center 6-to-4 excitation is inhibited down to being modulatory (priming, subthreshold). On-center 6-to-4 excitation cannot activate layer 4 on its own. Clarifies need for direct path. Prediction: plays key role in stable grouping, development and learning. ART Matching Rule!
  • image p419fig12.17 The auditory continuity illusion illustrates the ART Matching Rule at the level of auditory streaming. Its "backwards in time" effect of future context on past conscious perception is a signature of resonance.
    || Auditory continuity illusion. input, percept. Backwards in time - How does a future sound let past sound continue through noise? Resonance! - It takes a while to kick in. After it starts, a future tone can maintain it much more quickly. Why does this not happen if there is no noise? - ART Matching Rule! TD harmonic filter is modulatory without BU input. It cannot create something out of nothing.
  • image p420fig12.18 The ARTSTREAM model explains and simulates the auditory continuity illusion as an example of a spectral-pitch resonance. Interactions of ART Matching Rule and asymmetric competition mechanisms in cortical strip maps explain how the tone selects the consistent frequency from the noise in its own stream while separating the rest of the noise into another stream.
    || ARTSTREAM model (Grossberg 1999; Grossberg, Govindarajan, Wyse, Cohen 2004). SPINET. Frequency and pitch strips. Bottom Up (BU) harmonic sieve. Top Down (TD) harmonic ART matching. Exclusive allocation. Learn pitch categories based on early harmonic processing. A stream is a Spectral-Pitch Resonance!
  • image p473fig12.69 Error rate and mean reaction time (RT) data from the lexical decision experiments of (Schvaneveldt, McDonald 1981). ART Matching Rule properties explain these data in (Grossberg, Stone 1986).
    || (left) Error rate vs type of prime [R, N, U], [non,] word. (right) Mean RT (msec) vs type of prime [R, N, U], [non,] word.
  • image p483fig13.02 The object-value categories in the orbitofrontal cortex require converging specific inputs from the sensory cortex and nonspecific incentive motivational inputs from the amygdala in order to fire. When the orbitofrontal cortex fires, it can deliver top-down ART Matching Rule priming signals to the sensory cortical area by which it was activated, thereby helping to choose the active recognition categories there that have the most emotional support, while suppressing others, leading to attentional blocking of irrelevant cues.
    || Cognitive-Emotional-Motor (CogEM) model. Drive-> amygdala incentive motivational learning-> orbitofrontal cortex- needs converging cue and incentive inputs to fire <-> sensory cortex- conditioned reinforcer learning-> amygdala. CS-> sensory cortex. Motivated attention closes the cognitive-emotional feedback loop, focuses on relevant cues, and causes blocking of irrelevant cues.
  • image p548fig15.16 Homologous recognition learning and reinforcement learning macrocircuits enable adaptively timed conditioning in the reinforcement learning circuit to increase inhibition of the orienting system at times when a mismatch in the recognition system would have reduced inhibition of it.
    || Homolog between ART and CogEM model, complementary systems. [Recognition, Reinforcement] learning vs [Attentional, Orienting] system. Reinforcement: timing, drive representation.
  • image p600fig16.36 The entorhinal-hippocampal system has properties of an ART spatial category learning system, with hippocampal place cells as the spatial categories. See the text for details.
    || Entorhinal-hippocampal interactions as an ART system. Hippocampal place cells as spatial categories. Angular head velocity-> head direction cells-> stripe cells- small scale 1D periodic code (ECIII) SOM-> grid cells- small scale 2D periodic code (ECII) SOM-> place cells- larger scale spatial map (DG/CA3)-> place cells (CA1)-> conjunctive-coding cells (EC V/VI)-> top-down feedback back to stripe cells- small scale 1D periodic code (ECIII). stripe cells- small scale 1D periodic code (ECIII)-> place cells (CA1).
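    A toy 1-D illustration (mine; spatial scales invented, and the real model learns this combination with a SOM rather than a fixed product) of the key coding step in this hierarchy: several small-scale periodic stripe/grid codes, read by a downstream coincidence detector, yield a single larger-scale place field, because their periods align only once within the track.
        import numpy as np

        x = np.linspace(0.0, 100.0, 1001)     # position along a 100 cm track
        periods = [20.0, 24.0, 30.0]          # stripe/grid spatial scales (cm)
        stripes = [0.5 * (1.0 + np.cos(2.0 * np.pi * (x - 50.0) / p)) for p in periods]
        place = np.prod(stripes, axis=0)      # conjunctive (place-cell-like) response
        print(x[np.argmax(place)])            # 50.0: one dominant, larger-scale field
    The periods realign only every lcm(20, 24, 30) = 120 cm, longer than the track, so the conjunction is unique.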
  • image p613fig16.43 The main visual form and motion processing stream mechanisms of SOVEREIGN, many of them described at length in previous chapters.
    || Render 3-D scene (R3DS), figure-ground separation (FGS), log-polar transform (LPT), Gaussian coarse-coding (GCC), Invariant visual target map (IVTM), What Fuzzy ART (WhatFuzz), body spatial coordinates (BSC), where reactive visual TPV storage (WRVTS), Directional transient cell network (DTCN), Motion direction hemifield map (MDHM), Hemifield left/right scoring (HLRS), reactive visual control signal (RVCS), Parvo/Magno/Erg competition (PMEC), Approach and Orient GOp (AOGp), GOm (GOm). R3DS [parvo-> FGS, magno-> DTCN], FGS-> [LPT, WRVTS], LPT-> GCC-> IVTM-> WhatFuzz, BSC-> [RVTS, PMEC], PMEC-> [gateRVTS-> RVTS, gateRVCS-> RVCS], DTCN-> MDHM-> HLRS, HLRS-> [PMEC, RVCS], AOGp-> gateRVTS, GOm-> gateRVCS.
  • p190 Howell: [neural microcircuits, modal architectures] used in ART -
    bottom-up filters | top-down expectations | purpose
    instar learning | outstar learning | p200fig05.13 Expectations focus attention: [in, out]star learning is often used to learn the adaptive weights. Top-down expectations select, amplify, and synchronize expected patterns of critical features, while suppressing unexpected features
    LGN Lateral Geniculate Nucleus | V1 cortical area | p192fig05.06 focus attention on expected critical features
    EC-III entorhinal stripe cells | CA1 hippocampal place cells | p600fig16.36 the entorhinal-hippocampal system has properties of an ART spatial category learning system, with hippocampal place cells as the spatial categories
    auditory orienting arousal | auditory category | p215fig05.28 a mismatch between bottom-up and top-down patterns can trigger activation of the orienting system A and, with it, a burst of nonspecific arousal to the category level. ART Matching Rule: TD mismatch can suppress a part of the F1 STM pattern; F2 is reset if degree of match < vigilance
    auditory stream with/without [silence, noise] gaps | perceived sound continues? | p419fig12.17 the auditory continuity illusion: backwards in time - how does a future sound let a past sound continue through noise? Resonance!
    visual perception, learning and recognition of visual object categories | motion perception, spatial representation and target tracking | p520fig14.02 pART predictive Adaptive Resonance Theory. Many adaptive synapses are bidirectional, thereby supporting synchronous resonant dynamics among multiple cortical regions. The output signals from the basal ganglia that regulate reinforcement learning and gating of multiple cortical areas are not shown. (colour code: red - cognitive-emotional dynamics; green - working memory dynamics; black - see [bottom-up, top-down] lists)
    EEG with mismatch, arousal, and STM reset events | expected [P120, N200, P300] event-related potentials (ERPs) | p211fig05.21 sequences of P120, N200, and P300 ERPs occur during oddball learning EEG experiments under conditions that ART predicted should occur during sequences of mismatch, arousal, and STM reset events, respectively
    Cognitive | Emotional | p541fig15.02 nSTART neurotrophic Spectrally Timed Adaptive Resonance Theory: hippocampus can sustain a Cognitive-Emotional resonance that can support "the feeling of what happens" and knowing what event caused that feeling. Hippocampus enables adaptively timed learning that can bridge a trace conditioning gap, or other temporal gap, between Conditioned Stimulus (CS) and Unconditioned Stimulus (US)

    background colours in the table signify:
  • image p163fig04.39 A schematic of the LAMINART model that explains key aspects of laminar visual cortical anatomy and dynamics. LGN -> V1 [6, 4, 2/3] -> V2 [6, 4, 2/3]
    || p163c1h0.6 "... The first article about laminar computing ... proposed how the laminar cortical model could process 2D pictures using bottom-up filtering and horizontal bipole grouping interactions (Grossberg, Mingolla, Ross 1997). In 1999, I was able to extend the model to also include top-down circuits for expectation and attention (Grossberg 1999)(right panel). Such a synthesis of laminar bottom-up, horizontal, and top-down circuits is characteristic of the cerebral cortex (left panel). I called it LAMINART because it began to show how properties of Adaptive Resonance Theory, or ART, notably the ART prediction about how top-down expectations and attention work, are realized by identical cortical cells and circuits. You can immediately see from the schematic laminar circuit diagram ... (right panel) that circuits in V2 seem to repeat circuits in V1, albeit with a larger spatial scale, despite the fact that V1 and V2 carry out different functions. How this anatomical similarity can coexist with functional diversity will be clarified in subsequent sections and chapters. It enables different kinds of biological intelligence to communicate seamlessly while carrying out their different psychological functions. ..."
  • image p174fig04.52 An example of how the 3D LAMINART model can transform the two monocular images of the random dot stereogram in the top row into the three depth-separated surface representations in the bottom row.
    || Stereogram surface percepts: surface lightnesses are segregated in depth (Fang and Grossberg 2009). [left, right] inputs, [far, fixation, near] planes. Contrast with algorithms that just compute disparity matches and let computer code build the surface, eg (Marr, Poggio, etal 1974).
  • image p182fig04.58 LAMINART model processing stages that are sufficient to explain many percepts of transparency, including those summarized in Figure 4.57.
    || [left, right] eye, [LGN, V1 [6, 4, 3B, 2/3 A], V2 [4, 2/3]], [mo, bi]nocular cart [simple, complex] cells, [excita, inhibi]tory cart [connection, cell]s.
  • image p230fig05.38 The Synchronous Matching ART, or SMART, model includes spiking neurons in a laminar cortical hierarchy. I developed SMART with my PhD student Massimiliano Versace. By unlumping LAMINART to include spiking neurons, finer details of neurodynamics, such as the existence of faster gamma oscillations during good enough matches, and slower beta oscillations during bad enough mismatches, could be shown as emergent properties of network interactions.
    || Second order thalamus -> specific thalamic nucleus -> thalamic reticular nucleus -> neocortical laminar circuit [6ll, 6l, 5, 2/3, 1] -> Higher order cortex. Similar for First order thalamus -> First order cortex, with interconnection to Second order, nonspecific thalamic nucleus
  • image p231fig05.39 The SMART hypothesis testing and learning cycle predicts that vigilance increases when a mismatch in subcortical regions like the nonspecific thalamus activates the nucleus basalis of Meynert which, in turn, broadcasts a burst of the neurotransmitter acetylcholine, or ACh, to deeper cortical layers. Due to the way in which LAMINART proposes that cortical matching and mismatching occurs, this ACh burst can increase vigilance and thereby trigger a memory search. See the text for details.
    || [BU input, [, non]specific thalamic nucleus, thalamic reticular nucleus, neocortical laminar circuit] cart [Arousal, Reset, Search, Vigilance]
  • image p246fig05.48 Microcircuits of the LAMINART model that I developed with Rajeev Raizada. See the text for details of how they integrate bottom-up adaptive filtering, horizontal bipole grouping, and top-down attentional matching that satisfied the ART Matching Rule.
    ||
  • image p248fig05.49 This circuit of the LAMINART model helps to explain properties of Up and Down states during slow wave sleep, and how disturbances in ACh dynamics can disrupt them.
    ||
  • image p375fig11.07 The 3D LAMINART model uses both monocular and binocular simple cells to binocularly fuse like image contrasts. The remainder of the model generates 3D boundary and surface representations of multiple kinds of experiments as well as of natural scenes.
    || 3D LAMINART model (Cao, Grossberg 2005). [left, right] eye cart [LGN, V1 blob [4->3B->2/3A], V2 thin stripe [4->2/3A], V4]. V1 blob [V1-4 monocular, V1 interior binocular] simple cells. [complex, simple, inhibitory] cells, on-center off-surround
  • image p376fig11.10 The 3D LAMINART model shows how the disparity filter can be integrated into the circuit that completes 3D boundary representations using bipole grouping cells. It also explains how surface contours can strengthen boundaries that succeed in generating closed filling-in domains.
    || 3D LAMINART model (Cao, Grossberg 2005). [left, right] eye cart [LGN, V1 blob [4->3B->2/3A] surface contour, V2 thin stripe (monocular surface) [4->2/3A], V2 interior [disynaptic inhibitory interneurons, bipole grouping cells, disparity filter, V4 binocular surface]. [complex, simple, inhibitory] cells, on-center off-surround
  • image p378fig11.12 How monocular and binocular information are combined in V1 and V2 in the 3D LAMINART model.
    || Model utilizes monocular information. [left, right] eye cart V1-[4 monocular simple, 3B binocular simple, complex2/3A [mo,bi]nocular] cells, V2-4 binocular complex cells. black = monocular cells, blue = binocular cells. In V2, monocular inputs add to binocular inputs along the line of sight and contribute to depth perception.
  • image p379fig11.13 How the 3D LAMINART model explains DaVinci stereopsis. All the stages of boundary and surface formation are color coded to clarify their explanation. Although each mechanism is very simple, when all of them act together, the correct depthful surface representation is generated. See the text for details.
    || DaVinci stereopsis (Nakayama, Shimojo 1990). An emergent property of the previous simple mechanisms working together. [V1 binocular boundaries -> V2 initial boundaries -> V2 final boundaries -> V4 surface] pair [Binocular match: boundaries of thick bar -> Add monocular boundaries along lines-of-sight -> Line-of-sight inhibition kills weaker vertical boundaries -> 3D surface percept not just a disparity match!] pair [Binocular match: right edge of thin and thick bars -> Strongest boundaries: binocular and monocular boundaries add -> Vertical boundaries from monocular left edge of thin bar survive -> Filling-in contained by connected boundaries]. cart [very near, near, fixation plane, far, very far]
  • image p380fig11.14 The model explanation of DaVinci stereopsis when the input stimuli have opposite contrast polarities.
    || Polarity-reversed Da Vinci stereopsis (Nakayama, Shimojo 1990). Same explanation! (... as Figure 11.13 ...) [V1 binocular boundaries -> V2 initial boundaries -> V2 final boundaries -> V4 surface] cart [very near, near, fixation plane, far, very far]
  • image p395fig11.34 A comparison of the properties of other rivalry models with those of the 3D LAMINART model (surrounded by red border). Significantly, only 3D LAMINART explains both stable vision and rivalry (green border).
    || Comparison of rivalry models
  • image p401fig11.41 The 3D LAMINART model proposes how angle cells and disparity-gradient interact through learning to generate 3D representations of slanted objects.
    || 3D LAMINART model. [LGN, V1, V2, V4] Four key additions: 1. Angle cells - tuned to various angles; 2. Disparity-gradient cells - tuned to disparity gradients in the image; 3. weights from [angle to disparity-gradient] cells - learned while viewing 3D image; Colinear grouping between [angle to disparity-gradient] cells - disambiguates ambiguous groupings.
  • image p402fig11.44 3D scenic reconstruction of the image in Figure 11.43 by the 3D LAMINART model.
    || Disparity [5, 6, 8, 10, 11, 14]: images of objects in common depth planes
  • image p445fig12.43 The LIST PARSE laminar cortical Cognitive Working Memory circuit, that is proposed to occur in ventrolateral prefrontal cortex, is homologous to the LAMINART circuit that models aspects of how visual cortex sees. The Motor Working Memory, VITE Trajectory Generator, and Variable-Rate Volitional Control circuits model how other brain regions, including dorsolateral prefrontal cortex, motor cortex, cerebellum, and basal ganglia, interact with the Cognitive Working Memory to control working memory storage and variable-rate performance of item sequences.
    || List parse circuit diagram. Connectivity convention. sequence chunks [<- BU filter, TD expectation ->] working memory. Working memory and sequence chunking circuit is homologous to visual LAMINART circuit!
  • image p498fig13.22 (left column, top row) Adaptive filtering and conditioned arousal are both needed to regulate what cues can learn to activate particular space-time patterns. These developments lead inexorably to basic cognitive abilities, as embodied in the 3D LAMINART models for 3D vision and figure-ground perception (Chapter 11) and the 3D ARTSCAN SEARCH model for invariant object learning, recognition, and 3D search (Chapter 6). (right column, top row) Conditioned arousal enables only emotionally important cues to activate a motivationally relevant space-time pattern. (bottom row) Conditioned arousal and drive representations arise naturally from the unlumping of avalanche circuits to make them selective to motivationally important cues. The MOTIVATOR model is a natural outcome of this unlumping process (this chapter).
    || (top) Adaptive filtering and Conditioned arousal. Towards Cognition: need to filter inputs to the command cell. Towards Emotion: important signals turn arousal ON and OFF. (bottom) Conditioned arousal and Drive representations. Competition between conditioned arousal sources at drive representations, eg amygdala.
  • image p228fig05.37 A macrocircuit of the neurotrophic Spectrally Timed ART, or nSTART, model. I developed nSTART with my PhD student Daniel Franklin. It proposes how adaptively timed learning in the hippocampus, bolstered by Brain Derived Neurotrophic Factor, or BDNF, helps to ensure normal memory consolidation.
    || habituative gates, CS, US, Thalamus (sensory cortex, category learning, conditioned reinforcer learning, adaptively timed learning and BDNF), Amygdala (incentive motivation learning), Hippocampus (BDNF), Prefrontal Cortex (attention), Pontine nuclei, Cerebellum (adaptively timed motor learning)
  • image p541fig15.02 The neurotrophic Spectrally Timed Adaptive Resonance Theory, or nSTART, model of (Franklin, Grossberg 2017) includes hippocampus to enable adaptively timed learning that can bridge a trace conditioning gap, or other temporal gap between CS and US.
    || Hippocampus can sustain a Cognitive-Emotional resonance that can support "the feeling of what happens" and knowing what event caused that feeling. [CS, US] -> Sensory Cortex (SC) <- motivational attention <-> category learning -> Prefrontal Cortex (PFC). SC conditioned reinforcement learning-> Amygdala (cannot bridge the temporal gap) incentive motivational learning-> PFC. SC adaptively timed learning and BDNF-> Hippocampus (can bridge the temporal gap) BDNF-> PFC. PFC adaptively timed motor learning-> cerebellum.

    background colours in the table signify:
  • image p221fig05.31 A system like Fuzzy ARTMAP can learn to associate learned categories in one ART network with learned categories in a second ART network. Because both bottom-up and top-down interactions occur in both networks, a bottom-up input pattern to the first ART network can learn to generate a top-down output pattern from the second ART network.
    || Fuzzy ARTMAP. Match tracking realizes minimax learning principle: vigilance increases to just above the match ratio of prototype / exemplar, thereby triggering search
  • image p225fig05.33 Some early ARTMAP benchmark studies. These successes led to the use of ARTMAP, and of the many variants that we and other groups have developed, in large-scale applications in engineering and technology, a practice that has not abated even today.
    || see Early ARTMAP benchmark studies
  • image p225fig05.34 ARTMAP was successfully used to learn maps of natural terrains with many advantages over those of mapping projects that used AI expert systems. The advantages are so great that many mapping projects started to use this technology.
    || AI expert system - 1 year: field identification of natural regions; derivation of ad hoc rules for each region by expert geographers; correct 80,000 of 250,000 site labels; 230m (site-level) scale. ARTMAP system - 1 day: rapid, automatic, no natural regions or rules; confidence map; 30m (pixel-level) scale can see roads; equal accuracy at test sites
  • image p242fig05.46 Computer simulations of how two variants of Distributed ARTMAP incrementally learn the 5-4 category structure. See the text for details.
    || Distributed ARTMAP with [self-supervised learning, post-training LTM noise]
  • image p436fig12.30 The conscious ARTWORD, or cARTWORD, laminar cortical speech model simulates how future context can disambiguate noisy past speech sounds in such a way that the completed percept is consciously heard to proceed from past to future as a feature-item-list resonant wave propagates through time.
    || cARTWORD: Laminar cortical model macrocircuit (Grossberg, Kazerounian 2011) Simulates PHONEMIC RESTORATION: Cognitive Working Memory (processed item sequences) - [Excitatory-> inhibitory-> habituative-> adaptive filter-> adaptive filter-> adaptive filter with depletable synapse-> Acoustic [item, feature]
  • image p453fig12.49 The ARTWORD model that I published in 2000 with my PhD student Christopher Myers simulates data such as the (Repp etal 1978) data in Figure 12.68. See the text for details.
    || ARTWORD model (Grossberg, Myers 2000). Input phonetic features-> Phonemic item working memory-> Masking Field unitized lists-> Automatic gain control-> Phonemic item working memory. [habituative gate, adaptive filter]s.
  • image p453fig12.50 The ARTWORD perception cycle shows how sequences of items activate possible list chunks, which compete among each other and begin to send their top-down expectations back to the item working memory. An item-list resonance develops through time as a result.
    || ARTWORD perception cycle. (a) bottom-up activation (b) list chunk competition (c) item-list resonance (d) chunk reset due to habituative collapse.
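    The "habituative collapse" in step (d) is easy to sketch (my illustration; rates invented). A depletable transmitter gate z recovers toward 1 and is depleted in proportion to the signal it transmits, so the gated output of a steadily resonating chunk first stays high and then collapses, releasing the competition so the next grouping can win.
        # habituative transmitter gate: dz/dt = eps*(1 - z) - lam*x*z
        dt, eps, lam = 0.01, 0.05, 2.0
        x = 1.0                      # steady activity of the resonating list chunk
        z, out = 1.0, []
        for _ in range(3000):
            z += dt * (eps * (1.0 - z) - lam * x * z)   # recovery minus depletion
            out.append(x * z)        # gated signal transmitted downstream
        print(round(out[0], 3), round(out[-1], 3))      # 0.98 -> ~0.024: collapse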
  • image p455fig12.52 Simulation of cARTWORD dynamics in response to the complete list /1/-/2/-/3/. The relevant responses are surrounded by a red box.
    || Presentation of a normal sequence: input /1/-/2/-/3/. |c(i,1)-5| vs time (msec). List chunks select most predictive code. Order stored in WM layers. Resonant activity of /1/-/2/-/3/ in item and feature layers corresponds to conscious speech percept.
  • image p456fig12.53 Simulation of cARTWORD dynamics in response to the partial list /1/-silence-/3/ with /2/ replaced by silence. Only the representations of these items can be seen in the red box.
    || Presentation with silence duration: input /1/-silence-/3/. |c(i,1)-5| vs time (msec). List chunks select most predictive code. Order stored in WM layers. Gap in resonant activity of /1/-silence-/3/ in item and feature layers corresponds to perceived silence.
  • image p456fig12.54 Item /2/ is restored in the correct list position in response to the list /1/-noise-/3/.
    || Presentation with noise: input /1/-noise-/3/. |c(i,1)-5| vs time (msec). List chunks select the most predictive code. Order restored in WM layers. Resonant activity of /1/-/2/-/3/ in item and feature layers corresponds to restoration of item /2/ replaced by noise in input.
  • image p457fig12.55 Item /4/ is restored in the correct list position in response to the list /1/-noise-/5/. This and the previous figure show how future context can disambiguate past noisy sequences that are otherwise identical.
    || Presentation with noise: input /1/-noise-/5/. |c(i,1)-5| vs time (msec). List chunks select the most predictive code. Order restored in WM layers. Resonant activity of /1/-/4/-/5/ in item and feature layers corresponds to restoration of item /4/ replaced by noise in input.
  • image p444fig12.42 The LIST PARSE laminar cortical model of working memory and list chunking that I published with Lance Pearson in 2008 simulated the Averbeck etal data in Figure 12.41, as in the left column of the figure. It also simulated cognitive data about working memory storage by human subjects. See the text for details.
    || LIST PARSE: Laminar cortical model of working memory and list chunking (Grossberg, Pearson 2008). Simulates data about: [immediate, delayed, continuous] distractor free recall; immediate serial recall; and variable-speed sequential performance of motor acts. [velocity, acceleration] vs time (ms) from recall cue.
  • image p445fig12.43 The LIST PARSE laminar cortical Cognitive Working Memory circuit, that is proposed to occur in ventrolateral prefrontal cortex, is homologous to the LAMINART circuit that models aspects of how visual cortex sees. The Motor Working Memory, VITE Trajectory Generator, and Variable-Rate Volitional Control circuits model how other brain regions, including dorsolateral prefrontal cortex, motor cortex, cerebellum, and basal ganglia, interact with the Cognitive Working Memory to control working memory storage and variable-rate performance of item sequences.
    || List parse circuit diagram. Connectivity convention. sequence chunks [<- BU filter, TD expectation ->] working memory. Working memory and sequence chunking circuit is homologous to visual LAMINART circuit!
  • image p446fig12.44 (left column, top row) LIST PARSE can model linguistic data from human subjects. In this figure, model parameters are fixed to enable a close fit to data about error-type distributions in immediate serial recall experiments, notably transposition errors. (right column, top row) Simulation and data showing bowing of the serial position curve, including an extended primacy gradient. (left column, bottom row) The simulation curve overlays data about list length effects, notably the increasing recall difficulty of longer lists during immediate serial recall (ISR). (right column, bottom row) Simulation (bottom image) and data (top image) of the limited temporal extent for recall.
    || (1. TL) Error-type distributions in immediate serial recall (Henson etal 1996). % occurrence vs serial position. Graph convention: Data- dashed lines; Simulations- solid lines. Six letter visual ISR. Order errors- transpositions of neighboring items are the most common. Model explanation: Noisy activation levels change relative order in primacy gradient. Similar activation of neighboring items most susceptible to noise. Model parameters fitted on these data. (2. TR) Bowing of serial position curve (Cowan etal 1999). % correct vs serial position. Auditory ISR with various list lengths (graphs shifted rightward): For [, sub-]span lists- extended primacy, with one (or two) item recency; Auditory presentation- enhanced performance for last items. LIST PARSE: End effects- first and last items have half as many neighbors; Echoic memory- last presented item retained in separate store. (3. BL) List length effects, circles (Crannell, Parrish 1968), squares (Baddeley, Hitch 1975), solid line- simulation. % list correct vs list length. Variable list length ISR: longer lists are more difficult to recall. LIST PARSE: More items- closer activation levels and lower absolute activity level with enough inputs; Noise is more likely to produce order errors; Activity levels more likely to drop below threshold. (4. BR) Limited temporal extent for recall (Murdoch 1961). % recalled vs retention interval (s). ISR task with distractor-filled retention intervals (to prevent rehearsal): Increasing retention interval- decreases probability of recalling list correctly; Load dependence- longer lists more affected by delays; Performance plateau- subjects reach apparent asymptote. LIST PARSE: Increasing convergence of activities with time; loss of order information.
  • image p447fig12.45 (left column) LIST PARSE simulations of the proportion of order errors as a function of serial position for 6 item lists with (a) an extended pause of 7 time units between the third and fourth items, and (b) pauses of 5 time units (solid curve) and 10 time units (dashed curve) between all items. (right column) Simulations (solid curves) and data (dashed curves) illustrating close model fits in various immediate free recall tasks.
    || (Left) Temporal grouping and presentation variability. Temporal grouping: Inserting an extended pause leads to inter-group bowing; Significantly different times of integration and activity levels across pause, fewer interchanges. (Right) Immediate free recall, and [delayed, continuous] distractor-free recall. Overt rehearsal IFR task with super-span (ie 20 item) lists: Extended recency- even more extended with shorter ISIs; Increased probability of recall with diminished time from last rehearsal; Early items in list rehearsed most. LIST PARSE (unique) for long lists: Incoming items form recency gradient; Rehearsal (re-presentation) based upon level of activity.
  • image p254fig06.03 These interactions of the ARTSCAN Search model enable it to learn to recognize and name invariant object categories. Interactions between spatial attention in the Where cortical stream, via surface-shroud resonances, and object attention in the What cortical stream, that obeys the ART Matching Rule, coordinate these learning, recognition, and naming processes.
    || Retinal image -> On & OFF cell contrast normalization (retina/LGN) -> polarity [, in]sensitive contrast enhancement (V1) -> object [boundary (V2), surface (V2/V4)] -> surface contour (V2); What stream categories: volition control (BG). object boundary (V2) <-> view (ITp) <-> view integrator (ITp) <-> object (ITa) <-> [object-value (ORB), value (Amyg)] <-> name (PFC)
  • image p255fig06.04 The ARTSCAN Search model can also search for a desired target object in a scene, thereby clarifying how our brains solve the Where's Waldo problem.
  • image p259fig06.08 The distributed ARTSCAN, or dARTSCAN, model includes spatial attention in both PPC and PFC, and both fast-acting attention, triggered by transient cells in Where cortical areas such as MT, and slower-acting surface-shroud resonances in What cortical areas such as V4 and PPC. See the text for details.
    || dARTSCAN spatial attention hierarchy, Fast (Where stream) Slow (What stream) (Foley, Grossberg, and Mingolla 2012). [transient cells (MT) ->, object surfaces (V4) <->] [object shrouds (PPC) <-> spatial shrouds (PPC/PFC)]
  • image p271fig06.17 Persistent activity in IT cells is just what is needed to enable view-invariant object category learning by ARTSCAN to be generalized to [view, position, size]-invariant category learning by positional ARTSCAN, or pARTSCAN. See the text for details.
    || Persistent activity in IT. Physiological data show that persistent activity exists in IT (Fuster and Jervey 1981, Miyashita and Chang 1988, Tomita etal 1999). Adapted from (Tomita etal 1999 Nature)
  • image p272fig06.18 The pARTSCAN model can learn [view, position, size]-invariant categories by adding view category integrator cells that have the properties of persistent neurons in IT. These integrator cells get reset with the invariant object category, not the view category.
    || pARTSCAN: positionally-invariant object learning. (Cao, Grossberg, Markowitz 2011). IT cells with persistent activities are modeled by view category integrators in ITp. View-specific category cells are RESET as the eyes move within the object. View category integrator cells are NOT RESET when the view-specific category is reset. They are RESET along with invariant object category cells when a spatial attention shift occurs.
  • image p273fig06.20 pARTSCAN can simulate the IT cell recoding that Li and DiCarlo reported in their swapping experiments because the swapping procedure happens without causing a parietal reset burst to occur. Thus the originally activated invariant category remains activated and can get associated with the swapped object features.
    || Simulation of Li and DiCarlo swapping data. data (Li and DiCarlo 2008), model (Cao, Grossberg, Markowitz 2011). normalized response vs. exposure (swaps and/or hours)
  • image p274fig06.21 pARTSCAN can also simulate the trade-off in IT cell responses between position invariance and selectivity that was reported by Zoccolan etal 2007. This trade-off limits the amount of position invariance that can be learned by a cortical area like V1 that is constrained by the cortical magnification factor.
    || Trade-off in IT cell response properties. Inferotemporal cortex cells with greater position invariance respond less selectively to natural objects. invariance-tolerance, selectivity-sparseness. data (Zoccolan etal 2007) model (Grossberg, Markowitz, Cao 2011). position tolerance (PT, degrees) vs sparseness (S)
  • image p274fig06.22 pARTSCAN can simulate how IT cortex processes image morphs, when it learns with high vigilance. See the text for details.
    || Akrami etal simulation: a case of high vigilance. tested on morphs between image pairs
  • image p498fig13.22 (left column, top row) Adaptive filtering and conditioned arousal are both needed to regulate what cues can learn to activate particular space-time patterns. These developments lead inexorably to basic cognitive abilities, as embodied in the 3D LAMINART models for 3D vision and figure-ground perception (Chapter 11) and the 3D ARTSCAN SEARCH model for invariant object learning, recognition, and 3D search (Chapter 6). (right column, top row) Conditioned arousal enables only emotionally important cues to activate a motivationally relevant space-time pattern. (bottom row) Conditioned arousal and drive representations arise naturally from the unlumping of avalanche circuits to make them selective to motivationally important cues. The MOTIVATOR model is a natural outcome of this unlumping process (this chapter).
    || (top) Adaptive filtering and Conditioned arousal. Towards Cognition: need to filter inputs to the command cell. Towards Emotion: important signals turn arousal ON and OFF. (bottom) Conditioned arousal and Drive representations. Competition between conditioned arousal sources at drive representations, eg amygdala.
  • image p531fig14.06 Classification of scenic properties as texture categories by the ARTSCENE model. See the text for details.
    || Image-> Feature extraction (texture principal component rankings)-> Learning feature-to-scene mapping (texture category principal component rankings)<- scene class. Large-to-small attentional shrouds as principal component rank gets higher.
  • image p531fig14.07 Voting in the ARTSCENE model achieves even better prediction of scene type. See the text for details.
    || Image-> Feature extraction (texture principal component rankings)-> Learning feature-to-scene mapping (texture category principal component rankings)-> evidence accumulation (sum)-> scene class winner-take-all inference. Large-to-small attentional shrouds as principal component rank gets higher.
  • image p532fig14.08 Macrocircuit of the ARTSCENE Search neural model for learning to search for desired objects by using the sequences of already experienced objects and their locations to predict what and where the desired object is. V1 = First visual area or primary visual cortex; V2 = Second visual area; V4 = Fourth visual area; PPC = Posterior Parietal Cortex; ITp = posterior InferoTemporal cortex; ITa = anterior InferoTemporal cortex; MTL = Medial Temporal Lobe; PHC = ParaHippoCampal cortex; PRC = PeriRhinal Cortex; PFC = PreFrontal Cortex; DLPFC = DorsoLateral PreFrontal Cortex; VPFC = Ventral PFC; SC = Superior Colliculus.
    ||
  • image p533fig14.09 Search data and ARTSCENE Search simulations of them in each pair of images from (A) to (F). See the text for details.
    || 6*[data vs simulation], [Response time (ms) versus epoch].
  • image p214fig05.26 When a big enough mismatch occurs, the orienting system is activated and sends a burst of nonspecific arousal to the category level. This Mismatch Detector has properties of the N200 ERP.
    || Mismatch triggers nonspecific arousal. Mismatch at F1 elicits a nonspecific event at F2. Call this event nonspecific arousal. N200 ERP Naatanen etal: 1. bottom-up, 2. unconditionable, 3. nonspecific, 4. mismatch
  • image p230fig05.38 The Synchronous Matching ART, or SMART, model includes spiking neurons in a laminar cortical hierarchy. I developed SMART with my PhD student Massimiliano Versace. By unlumping LAMINART to include spiking neurons, finer details of neurodynamics, such as the existence of faster gamma oscillations during good enough matches, and slower beta oscillations during bad enough mismatches, could be shown as emergent properties of network interactions.
    || Second order thalamus -> specific thalamic nucleus -> thalamic reticular nucleus -> neocortical laminar circuit [6ll, 6l, 5, 2/3, 1] -> Higher order cortex. Similar for First order thalamus -> First order cortex, with interconnection to Second order, nonspecific thalamic nucleus
  • image p231fig05.39 The SMART hypothesis testing and learning cycle predicts that vigilance increases when a mismatch in subcortical regions like the nonspecific thalamus activates the nucleus basalis of Meynert which, in turn, broadcasts a burst of the neurotransmitter acetylcholine, or ACh, to deeper cortical layers. Due to the way in which LAMINART proposes that cortical matching and mismatching occurs, this ACh burst can increase vigilance and thereby trigger a memory search. See the text for details.
    || [BU input, [, non]specific thalamic nucleus, thalamic reticular nucleus, neocortical laminar circuit] cart [Arousal, Reset, Search, Vigilance]
  • image p232fig05.40 Computer simulation of how the SMART model generates (a) gamma oscillations if a good enough match occurs, or (c) beta oscillations if a bad enough match occurs. See the text for details.
    || Brain oscillations during match/mismatch, data, simulation. (a) TD corticothalamic feedback increases synchrony (Sillito etal 1994) (b) Match increases γ oscillations (c) Mismatch increases θ,β oscillations
  • image p232fig05.41 (a)-(c). The sequence of interlaminar events that SMART predicts during a mismatch reset. (d) Some of the compatible neurophysiological data.
    || Mismatch causes layer 5 dendritic spikes that trigger reset. (a) Arousal causes increase in nonspecific thalamic nuclei firing rate and layer 5 dendritic and later somatic spikes (Larkum and Zhu 2002, Williams and Stuart 1999) (b) Layer 5 spikes reach layer 4 via layer 6i and inhibitory neurons (Lund and Boothe 1975, Gilbert and Wiesel 1979) (c) habituative neurotransmitters in layer 6i shift the balance of active cells in layer 4 (Grossberg 1972, 1976) (d) Dendritic stimulation fires layer 5 (Larkum and Zhu 2002): stimulation of apical dendrites by the nonspecific thalamus
  • Grossberg 2021 p229c2h0.60 SMART computer simulations demonstrate that a good enough match of a top-down expectation with a bottom-up feature pattern generates an attentive resonance during which the spikes of active cells synchronize in the gamma frequency range of 20-70 Hz (Figure 5.40). Many labs have reported a link between attention and gamma oscillations in the brain, including two articles published in 2001, one from the laboratory of Robert Desimone when he was at the National Institute of Mental Health in Bethesda (Fries, Reynolds, Rorie, Desimone 2001), and the other from the laboratory of Wolf Singer in Frankfurt (Engel, Fries, Singer 2001). ...
  • image p468fig12.65 Linguistic properties of the PHONET model and some of the data that it simulates. The upper left image summarizes the asymmetric transient-to-sustained gain control that helps to create invariant intraword ratios during variable-rate speech. The lower left image summarizes the rate-dependent gain control of the ARTPHONE model that creates rate-invariant working memory representations in response to sequences of variable-rate speech. The right image summarizes the kind of paradoxical VC-CV category boundary data of (Repp 1980) that ARTPHONE simulates. See the text for details.
    || (left upper) [transient, sustained] [working memory, filter, category]. (left lower) phone inputs-> [input rate estimate, features], Features w <- habituative transmitter gates -> categories-> rate invariant phonetic output, input rate estimate-> gain control-> [features, categories] rate-dependent integration of categories and features. (right) % 2-stop vs VC-CV silent interval (msec): [ib-ga, ib-ba, iga, iba].
  • image p420fig12.18 The ARTSTREAM model explains and simulates the auditory continuity illusion as an example of a spectral-pitch resonance. Interactions of ART Matching Rule and asymmetric competition mechanisms in cortical strip maps explain how the tone selects the consistent frequency from the noise in its own stream while separating the rest of the noise into another stream.
    || ARTSTREAM model (Grossberg 1999; Grossberg, Govindarajan, Wyse, Cohen 2004). SPINET. Frequency and pitch strips. Bottom Up (BU) harmonic sieve. Top Down (TD) harmonic ART matching. Exclusive allocation. Learn pitch categories based on early harmonic processing. A stream is a Spectral-Pitch Resonance!
  • image p422fig12.19 The ARTSTREAM model includes mechanisms for deriving streams both from pitch and from source direction. See the text for details.
    || [left, right] cart Peripheral processing = [input signal-> outer & middle ear preemphasis-> basilar membrane gammatone filterbank-> energy measure]. Spectral stream layer-> spectral summation layer-> delays-> [f-, tau] plane-> pitch stream layer-> pitch summation layer.
  • image p423fig12.20 The Spatial Pitch Network, or SPINET, model shows how a log polar spatial representation of the sound frequency spectrum can be derived from auditory signals occurring in time. The spatial representation allows the ARTSTREAM model to compute spatially distinct auditory streams.
    || SPINET model (Spatial Pitch Network) (Cohen, Grossberg, Wyse 1995). 1. input sound 2. Gamma-tone filter bank 3. Short-term average energy spectrum 4. MAP transfer function 5. On-center off-surround and rectification 6. Harmonic weighting 7. Harmonic summation and competition -> PITCH
  • image p424fig12.21 One of the many types of data about pitch processing that are simulated by the SPINET model. See the text for details.
    || Pitch shifts with component shifts (Patterson, Wightman 1976; Schouten 1962). Pitch vs lowest harmonic number.
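    Howell: SPINET's numbered stages amount to a harmonic sieve: candidate pitches are scored by summing spectral energy near their harmonics, and the best-scoring pitch wins the competition. A toy Python sketch under simplifying assumptions of mine (an FFT spectrum instead of a gammatone filterbank, Gaussian harmonic weighting, no MAP transfer function); it recovers a 200 Hz pitch from harmonics 3-6 alone, i.e. a missing-fundamental percept:

      import numpy as np

      fs = 16000
      t = np.arange(0, 0.1, 1/fs)
      # "missing fundamental" input: harmonics 3..6 of 200 Hz, no energy at 200 Hz itself
      sound = sum(np.sin(2*np.pi*200*h*t) for h in range(3, 7))
      spectrum = np.abs(np.fft.rfft(sound * np.hanning(len(sound))))
      freqs = np.fft.rfftfreq(len(sound), 1/fs)

      def harmonic_score(f0, n_harm=8, sigma=5.0):
          # harmonic weighting + summation: spectral energy near each harmonic of f0
          return sum((np.exp(-(freqs - h*f0)**2 / (2*sigma**2)) * spectrum).sum()
                     for h in range(1, n_harm + 1))

      candidates = np.arange(80.0, 400.0, 1.0)
      scores = [harmonic_score(f0) for f0 in candidates]
      print("winning pitch:", candidates[int(np.argmax(scores))], "Hz")  # ~200 Hz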
  • image p425fig12.23 ARTSTREAM simulations of the auditory continuity illusion and other streaming properties (left column, top row). When two tones are separated by silence (Input), a percept of silence also separates them in a spectral-pitch resonance. (left column, bottom row). When two tones are separated by broadband noise, the percept of tone continues through the noise in one stream (stream 1) while the remainder of the noise occurs in a different stream (stream 2). (right column) Some of the other streaming properties that have been simulated by the ARTSTREAM model.
    || Auditory continuity does not occur without noise. Auditory continuity in noise. Other simulated streaming data.
  • image p431fig12.27 The strip maps that occur in ARTSTREAM and NormNet are variants of a cortical design that also creates ocular dominance columns in the visual cortex.
    || Adult organization of V1 (Grinvald etal http://www.weizmann.ac.il/brain/images/cubes.html). (1) Ocular dominance columns (ODCs): Alternating strips of cortex respond preferentially to visual inputs of each eye (R/L corresponds to Right and Left eye inputs in the figure); Orientation columns: A smooth pattern of changing orientation preference within each ODC. Organized in a pinwheel-like fashion.
  • p370 Chapter 11 means (Grossberg 2021) page 370, Chapter 11
    p002sec Illusion and reality means (Grossberg 2021) page 2, section Illusion and reality
    p013fig01.09 means (Grossberg 2021) page 13, Figure 1.09 (1.9 as in book)
    p030tbl01.02 means (Grossberg 2021) page 30, Table 1.02 (1.2 as in book)
    p111c2h0.5 means (Grossberg 2021) page 111, column 2, height from top as fraction of page height
    || text...Are notes in addition to [figure, table] captions, mostly comprised of text within the image, but also including quotes of text in the book. Rarely, it includes comments by Howell preceded by "Howell". The latter are distinct from "readers notes" (see, for example : reader Howell notes).
    p044 Howell: grepStr
  • p001 Chapter 1 Overview - From Complementary Computing and Adaptive Resonance to conscious awareness
  • p250 Chapter 6 Conscious seeing and invariant recognition - Complementary cortical streams coordinate attention for seeing and recognition
  • p539 Chapter 15 Adaptively timed learning - How timed motivation regulates conscious learning and memory consolidation
  • image p039tbl01.03 The link between consciousness and movement
    ||
    VISUAL | seeing, knowing, and reaching
    AUDITORY | hearing, knowing, and speaking
    EMOTIONAL | feeling, knowing, and acting
  • image p042tbl01.04 The six main kinds of resonances which support different kinds of conscious awareness that will be explained and discussed in this book.
    ||
    type of resonance | type of consciousness
    surface-shroud | see visual object or scene
    feature-category | recognize visual object or scene
    stream-shroud | hear auditory object or stream
    spectral-pitch-and-timbre | recognize auditory object or stream
    item-list | recognize speech and language
    cognitive-emotional | feel emotion and know its source
  • image p270fig06.16 The same target position signal that can command the next saccade also updates a gain field that predictively maintains the attentional shroud in head-centered coordinates, even before the eye movement is complete. This process keeps the shroud invariant under eye movements, so that it can continue to inhibit reset of an emerging invariant category as it is associated with multiple object views, even while the conscious surface representation shifts with each eye movement in retinotopic coordinates. This updating process is often called predictive remapping.
    || Predictive remapping of eye movements! From V3A to LIP. [spatial attention, object attention, figure-ground separation, eye movement remapping, visual search]. (Beauvillaib etal 2005, Carlson-Radvansky 1999, Cavanaugh etal 2001, Fecteau & Munoz 2003, Henderson & Hollingworth 2003, Irwin 1991)
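    Howell: a minimal sketch of the gain-field arithmetic behind predictive remapping, under the standard simplification that head-centered position = retinotopic position + eye position; the corollary discharge of the next target position updates the eye-position estimate before the saccade lands, so the head-centered shroud never moves (illustration only; the model learns this transform rather than computing it):

      import numpy as np

      shroud_head = np.array([12.0, 3.0])   # attended object in head-centered coordinates
      def retinotopic(eye_pos):
          # the conscious surface representation shifts with each eye movement
          return shroud_head - eye_pos

      eye_now = np.array([0.0, 0.0])
      saccade_target = np.array([8.0, -2.0])   # target position signal for the next saccade
      eye_predicted = saccade_target           # gain field updated by corollary discharge
      print(retinotopic(eye_now))        # pre-saccadic retinotopic view: [12.  3.]
      print(retinotopic(eye_predicted))  # remapped view: [ 4.  5.]; shroud_head unchanged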
  • image p278fig06.27 A surface-shroud resonance through the Where stream enables us to consciously see an object while a feature-category resonance into the What stream enables us to recognize it. Both kinds of resonances can synchronize via visual cortex so that we can know what an object is when we see it.
    || What kinds of resonances support knowing vs seeing? What stream [knowing, feature-prototype resonance], Where stream [seeing, surface-shroud resonance]
  • image p278fig06.28 If the feature-category resonances cannot form, say due to a lesion in IT, then a surface-shroud resonance can still support conscious seeing of an attended object, and looking at or reaching for it, even if the individual doing so knows nothing about the object, as occurs during visual agnosia. The surface-shroud resonance supports both spatial attention and releases commands that embody the intention to move towards the attended object.
    || What kinds of resonances support knowing vs seeing? visual agnosia: reaching without knowing Patient DF (Goodale etal 1991). Attention and intention both parietal cortical functions (Andersen, Essick, Siegel 1985; Gnadt, Andersen 1988; Snyder, Batista, Andersen 1997, 1998)
  • image p355fig10.02 Distinguishing processes of seeing vs knowing has been difficult because they interact so strongly.
    || Seeing vs. Knowing. Seeing and knowing [operate at different levels of the brain, use specialized circuits], but they [interact via feedback, use similar cortical designs, feedback is needed for conscious perception]. Cerebral Cortex: Seeing [V1-V4, MT-MST], Knowing [IT, PFC].
  • image p369fig10.19 Data from (Watanabe etal 2001) showing perceptual learning of the coherent motion direction, despite the lack of extra-foveal attention and awareness of the moving stimuli.
    || Unconscious perceptual learning of motion direction, % correct for two tests, compared to chance level results.
  • image p396fig11.35 Three properties of bipole boundary grouping in V2 can explain how boundaries oscillate in response to rivalry-inducing stimuli. Because all boundaries are invisible, however, these properties are not sufficient to generate a conscious percept of rivalrous surfaces.
    || 3 V2 boundary properties cause binocular rivalry. 1. Bipole grouping, 2. Orientational competition, 3. Activity-dependent habituation
  • image p419fig12.17 The auditory continuity illusion illustrates the ART Matching Rule at the level of auditory streaming. Its "backwards in time" effect of future context on past conscious perception is a signature of resonance.
    || Auditory continuity illusion. input, percept. Backwards in time - How does a future sound let past sound continue through noise? Resonance! - It takes a while to kick in. After it starts, a future tone can maintain it much more quickly. Why does this not happen if there is no noise? - ART Matching Rule! TD harmonic filter is modulatory without BU input. It cannot create something out of nothing.
  • image p436fig12.30 The conscious ARTWORD, or cARTWORD, laminar cortical speech model simulates how future context can disambiguate noisy past speech sounds in such a way that the completed percept is consciously heard to proceed from past to future as a feature-item-list resonant wave propagates through time.
    || cARTWORD: Laminar cortical model macrocircuit (Grossberg, Kazerounian 2011) Simulates PHONEMIC RESTORATION: Cognitive Working Memory (processed item sequences) - [Excitatory-> inhibitory-> habituative-> adaptive filter-> adaptive filter-> adaptive filter with depletable synapse-> Acoustic [item, feature]
  • image p455fig12.52 Simulation of cARTWORD dynamics in response to the complete list /1/-/2/-/3/. The relevant responses are surrounded by a red box.
    || Presentation of a normal sequence: input /1/-/2/-/3/. |c(i,1)-5| vs time (msec). List chunks select most predictive code. Order stored in WM layers. Resonant activity of /1/-/2/-/3/ in item and feature layers corresponds to conscious speech percept.
  • image p484fig13.04 The top-down feedback from the orbitofrontal cortex closes a feedback loop that supports a cognitive-emotional resonance. If this resonance can be sustained long enough, it enables us to have feelings at the same time that we experience the categories that caused them.
    || Cognitive-Emotional resonance. Basis of "core consciousness" and "the feeling of what happens". (Damasio 1999) derives heuristic version of CogEM model from his clinical data. Drive-> amygdala-> prefrontal cortex-> sensory cortex, resonance around the latter 3. How is this resonance maintained long enough to become conscious?
  • image p514fig13.44 Analog of the CogEM model in Figure 6.1 of (Damasio 1999).
    || (a) map of object X-> map of proto-self at inaugural instant-> [, map of proto-self modified]-> assembly of second-order map. (b) map of object X enhanced-> second-order map imaged.
  • image p105fig03.23 The pointillist painting A Sunday on la Grande Jatte by Georges Seurat illustrates how we group together both large-scale coherence among the pixels of the painting, as well as forming small groupings around the individual dabs of color.
    ||
  • image p107fig03.25 The Roofs of Collioure by Matisse. See the text for details
    || p107c1h0.6 "... [Matisse] showed how patches of pure color, when laid down properly on a canvas, could be grouped by the brain into emergent boundaries, without the intervention of visible outlines. ... The trick was that these emergent boundaries, being invisible, or amodal, did not darken the colors in the surface representations. In this sense, Matisse intuitively realized that "all boundaries are invisible" through the masterful way in which he arranged his colors on canvas to generate boundaries that could support compelling surface representations. ..."
  • image p108fig03.27 Matisse
  • image p110fig03.32 Claude Monet
  • image p120fig03.43 Four paintings by Monet of the Rouen cathedral under different lighting conditions (top row) and their monochromatic versions (bottom row). See the text for details.
    || p119c2h0.25 "... Monet uses nearby colors that are nearly equiluminant, and sharp, high-contrast luminance defined edges are sparse. He hereby creates weaker boundary signals within and between the parts of many forms, and stronger boundary signals between the forms. This combination facilitates color spreading within the forms and better separation of brightness and color differences between forms. ... The grayscale versions of these paintings demonstrate the near equiluminance of the brushstrokes within forms, and places in which brightness and color differences significantly influence the groupings that differentiate between forms, including the differentiation between the cathedral and the sky. ..."
  • image p120fig03.44 The Rouen cathedral at sunset generates very different boundary webs than it does in full sunlight, as illustrated by Figure 3.45.
    || Rouen Cathedral at sunset (Monet 1892-1894).
    • Lighting almost equiluminant
    • Most boundaries are thus caused by color differences, not luminance differences
    • Fine architectural details are obscured, leading to...
    • Coarser and more uniform boundary webs, so...
    • Less depth in the painting.
  • image p121fig03.45 The Rouen cathedral in full sunlight.
    || Rouen Cathedral full sunlight (Monet 1892-1894).
    • Lighting is strongly non-uniform across most of the painting
    • Strong boundaries due to both luminance and color differences
    • Fine architectural details are much clearer, leading to...
    • Finer and more non-uniform boundary webs, so...
    • Much more detail and depth
  • image p121fig03.46 The Rouen cathedral in full sunlight contains T-Junctions that are not salient in the painting of it at sunset. These are among the painting
  • image p171fig04.49 An example of DaVinci stereopsis in which the left eye sees more of the wall between A and C than the right eye does. The region between B and C is seen only by the left eye because the nearer wall between C and D occludes it from the right eye view.
  • image p377fig11.11 DaVinci stereopsis phenomena occur when only one eye can receive visual inputs from part of a 3D scene due to occlusion by a nearer surface.
    || How does monocular information contribute to depth perception? DaVinci stereopsis (Gillam etal 1999). Only by utilizing monocular information can the visual system create the correct depth percept. [left, right] eye view
  • image p379fig11.13 How the 3D LAMINART model explains DaVinci stereopsis. All the stages of boundary and surface formation are color coded to clarify their explanation. Although each mechanism is very simple, when all of them act together, the correct depthful surface representation is generated. See the text for details.
    || DaVinci stereopsis (Nakayama, Shimojo 1990). An emergent property of the previous simple mechanisms working together. [V1 binocular boundaries -> V2 initial boundaries -> V2 final boundaries -> V4 surface] pair [Binocular match: boundaries of thick bar -> Add monocular boundaries along lines-of-sight -> Line-of-sight inhibition kills weaker vertical boundaries -> 3D surface percept not just a disparity match!] pair [Binocular match: right edge of thin and thick bars -> Strongest boundaries: binocular and monocular boundaries add -> Vertical boundaries from monocular left edge of thin bar survive -> Filling-in contained by connected boundaries]. cart [very near, near, fixation plane, far, very far]
  • image p380fig11.14 The model explanation of DaVinci stereopsis when the input stimuli have opposite contrast polarities.
    || Polarity-reversed Da Vinci stereopsis (Nakayama, Shimojo 1990). Same explanation! (... as Figure 11.13 ...) [V1 binocular boundaries -> V2 initial boundaries -> V2 final boundaries -> V4 surface] cart [very near, near, fixation plane, far, very far]
  • image p381fig11.15 The same model mechanisms explain the surface percept that is generated by the variant of DaVinci stereopsis that Gillam, Blackburn, and Nakayama studied in 1999.
    || DaVinci stereopsis (Gillam, Blackburn, Nakayama 1999). same model mechanisms. [V1 binocular boundaries -> V2 initial boundaries -> V2 final boundaries -> V4 surface] cart [very near, near, fixation plane, far, very far]
  • image p382fig11.16 The version of DaVinci stereopsis wherein three narrow rectangles are binocularly matched with one thick rectangle can also be explained in a similar way.
    || DaVinci stereopsis of [3 narrow, one thick] rectangles (Gillam, Blackburn, Nakayama 1999). [V1 binocular boundaries -> V2 initial boundaries -> V2 final boundaries -> V4 surface] cart [very near, near, fixation plane, far, very far]
  • p618 Chapter 17 A universal development code - Mental measurements embody universal laws of cell biology and physics
  • image p073fig02.19 Computing with cells: infinity does not exist in biology!
    || Computing in a bounded activity domain, Gedanken experiment (Grossberg 1970). V_m sub-areas [x_m, B - x_m], I (all m), m = [1, i, B]. (a minimal numerical sketch follows this entry)
    B | excitable sites
    xi(t) | excited sites (activity, potential)
    B - xi(t) | unexcited sites
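    Howell: the sketch promised above. The Gedanken experiment leads to the shunting (membrane) equation dx/dt = -A*x + (B - x)*I, in which the input I can only excite the B - x unexcited sites, so activity stays in the bounded range [0, B] no matter how large I grows (my numerical illustration, with made-up parameter values):

      A, B = 1.0, 1.0      # decay rate; total number of excitable sites
      x, dt = 0.0, 0.001
      for step in range(20000):
          I = 100.0        # even a huge input cannot push x above B
          x += dt * (-A*x + (B - x)*I)
      print(round(x, 4))   # -> 0.9901 = B*I/(A + I), strictly below B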
  • image p082fig02.37 My models begin with behavioral data, since brains are designed to achieve behavioral success. The text explains how models evolve in stages, through a process of successive refinements, or unlumpings. These unlumpings together carry out a kind of conceptual evolution, leading to models that can explain and predict ever larger psychological and neurobiological databases.
    || Modelling method and cycle.
    Behavioral data -(art of modeling)-> Design principles <- Neural data <-(brain predictions)- Mathematical model and analysis -(behavioral predictions)-> Behavioural data
    Operationalizes "proper level of abstraction"
    Operationalizes that you cannot "derive a brain" in one step.
  • image p501fig13.26 A simple differential equation describes the processes of transmitter accumulation and release that do their best, at a finite rate, to carry out unbiased transduction.
    || Transmitter accumulation and release. Transmitter y cannot be restored at an infinite rate: output T = S*y, unbiased when y ~= B. Differential equation: dy/dt = A*(B - y) - S*y = accumulate - release. Transmitter y tries to recover to ensure unbiased transduction. What if it falls behind? Evolution has exploited the good properties that happen then.
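    Howell: a numerical sketch of this transmitter law (my illustration; parameter values made up). When the signal S jumps, the gated output T = S*y transiently overshoots and then habituates, because y is released faster than it can re-accumulate toward B:

      A, B = 0.1, 1.0      # slow accumulation toward B transmitter sites
      y, dt = 1.0, 0.01
      for step in range(3000):
          S = 0.0 if step < 1000 else 2.0    # signal switches on at step 1000
          y += dt * (A*(B - y) - S*y)        # dy/dt = accumulate - release
          if step in (1000, 2999):
              print(step, "T = S*y =", round(S*y, 3))
      # onset: T ~ 1.96 (y still full); later: T habituates to S*A*B/(A+S) ~ 0.095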
  • Grossberg 2021 p081c2h0.66 sub-section: How does one evolve a computational brain?
    The above discussion illustrates that no single step of theoretical derivation can derive a whole brain. One needs a method for deriving a brain in stages, or cycles, much as evolution has incrementally discovered ever more complex brains over many thousands of years. The following theoretical method has been successfully applied many times since I first used it in 1957. It embodies a kind of conceptual evolutionary process for deriving a brain.

    Because "brain evolution needs to achieve behavioural success", we need to start with data that embodiey indices of behavioral success. That is why, as illustrated in Figure 2.37 Modelling method and cycle, one starts with Behavioral Data from scores or hundreds of psychological experiments. These data are analyszed as the result of an individual adapting autonomously in real time to a changing world. This is the Arty of Modeling. It requires that one be able to infer from static data curves the dynamical processes that control individual behaviors occuring in real time. One of the hardest things that I teach to my students to do is "how to think in real time" to be able to carry out this speculative leap.

    Properly carried out, this analysis leads to the discovery of new Design Principles that are embodied by these behavioral processes. The Design Principles highlight the functional meaning of the data, and clarify how individual behaviors occurring in real time give rise to these static data curves.

    These principles are then converted into the simplest Mathematical Model using a method of minimal anatomies, which is a form of Occam's Razor.
  • image p248fig05.49 This circuit of the LAMINART model helps to explain properties of Up and Down states during slow wave sleep, and how disturbances in ACh dynamics can disrupt them.
    ||
  • image p278fig06.28 If the feature-category resonances cannot form, say due to a lesion in IT, then a surface-shroud resonance can still support conscious seeing of an attended object, and looking at or reaching for it, even if the individual doing so knows nothing about the object, as occurs during visual agnosia. The surface-shroud resonance supports both spatial attention and releases commands that embody the intention to move towards the attended object.
    || What kinds of resonances support knowing vs seeing? visual agnosia: reaching without knowing Patient DF (Goodale etal 1991). Attention and intention both parietal cortical functions (Andersen, Essick, Siegel 1985; Gnadt, Andersen 1988; Snyder, Batista, Andersen 1997, 1998)
  • image p557fig15.26 Brain regions and processes that contribute to autistic behavioral symptoms when they become imbalanced in prescribed ways.
    || Basal Ganglia prolonged gate opening <-> { Amygdala emotionally depressed-> [hippocampus- hyperspecific learning; Cerebellum- adaptive timing fails; hypofrontal blocking fails, no Theory of Mind]-> Neocortex; Neocortex- rewards not received-> Amygdala}.
  • image p189fig05.04 The hippocampus is one of several brain regions that are important in learning and remembering about objects and events that we experience throughout life. The book will describe several hippocampal processes that contribute to this achievement in different ways.
    || hypothalamic nuclei, amygdala, hippocampus, cingulate gyrus, corpus callosum, thalamus
  • image p228fig05.37 A macrocircuit of the neurotrophic Spectrally Timed ART, or nSTART, model. I developed nSTART with my PhD student Daniel Franklin. It proposes how adaptively timed learning in the hippocampus, bolstered by Brain Derived Neurotrophic Factor, or BDNF, helps to ensure normal memory consolidation.
    || habituative gates, CS, US, Thalamus (sensory cortex, category learning, conditioned reinforcer learning, adaptively timed learning and BDNF), Amygdala (incentive motivation learning), Hippocampus (BDNF), Prefrontal Cortex (attention), Pontine nuclei, Cerebellum (adaptively timed motor learning)
  • image p233fig05.42 Mismatch-induced beta oscillations have been reported in at least three parts of the brain: V1, V4, and hippocampus. Although there may be other reasons for beta oscillations in the brain, those that are caused by a mismatch should be studied in concert with the gamma oscillations that occur during a good enough match. See the text for details.
    || Is there evidence for the [gamma, beta] prediction? Yes, in at least three parts of the brain, (Buffalo EA, Fries P, Landman R, Buschman TJ, Desimone R 2011, PNAS 108, 11262-11267) Does this difference in average oscillation frequencies in the superficial and deep layers reflect layer 4 reset? Superficial recording γ (gamma), Deep recording β (beta) (Berke etal 2008, hippocampus; Buschman and Miller 2009, FEF)
  • image p520fig14.02 Macrocircuit of the main brain regions, and connections between them, that are modelled in the unified predictive Adaptive Resonance Theory (pART) of cognitive-emotional and working memory dynamics. Abbreviations in red denote brain regions used in cognitive-emotional dynamics. Those in green denote brain regions used in working memory dynamics. Black abbreviations denote brain regions that carry out visual perception, learning and recognition of visual object categories, and motion perception, spatial representation and target tracking. Arrows denote non-adaptive excitatory synapses. Hemidiscs denote adaptive excitatory synapses. Many adaptive synapses are bidirectional, thereby supporting synchronous resonant dynamics among multiple cortical regions. The output signals from the basal ganglia that regulate reinforcement learning and gating of multiple cortical areas are not shown. Also not shown are output signals from cortical areas to motor responses. V1: striate, or primary, visual cortex; V2 and V4: areas of prestriate visual cortex; MT: Middle Temporal cortex; MST: Medial Superior Temporal area; ITp: posterior InferoTemporal cortex; ITa: anterior InferoTemporal cortex; PPC: Posterior Parietal Cortex; LIP: Lateral InterParietal area; VPA: Ventral PreArcuate gyrus; FEF: Frontal Eye Fields; PHC: ParaHippocampal Cortex; DLPFC: DorsoLateral PreFrontal Cortex; HIPPO: hippocampus; LH: Lateral Hypothalamus; BG: Basal Ganglia; AMYG: AMYGdala; OFC: OrbitoFrontal Cortex; PRC: PeriRhinal Cortex; VPS: Ventral bank of the Principal Sulcus; VLPFC: VentroLateral PreFrontal Cortex. See the text for further details.
    ||
  • image p532fig14.08 Macrocircuit of the ARTSCENE Search neural model for learning to search for desired objects by using the sequences of already experienced objects and their locations to predict what and where the desired object is. V1 = First visual area or primary visual cortex; V2 = Second visual area; V4 = Fourth visual area; PPC = Posterior Parietal Cortex; ITp = posterior InferoTemporal cortex; ITa = anterior InferoTemporal cortex; MTL = Medial Temporal Lobe; PHC = ParaHippoCampal cortex; PRC = PeriRhinal Cortex; PFC = PreFrontal Cortex; DLPFC = DorsoLateral PreFrontal Cortex; VPFC = Ventral PFC; SC = Superior Colliculus.
    ||
  • image p541fig15.02 The neurotrophic Spectrally Timed Adaptive Resonance Theory, or nSTART, model of (Franklin, Grossberg 2017) includes hippocampus to enable adaptively timed learning that can bridge a trace conditioning gap, or other temporal gap between CS and US.
    || Hippocampus can sustain a Cognitive-Emotional resonance that can support "the feeling of what happens" and knowing what event caused that feeling. [CS, US] -> Sensory Cortex (SC) <- motivational attention <-> category learning -> Prefrontal Cortex (PFC). SC conditioned reinforcement learning-> Amygdala (cannot bridge the temporal gap) incentive motivational learning-> PFC. SC adaptively timed learning and BDNF-> Hippocampus (can bridge the temporal gap) BDNF-> PFC. PFC adaptively timed motor learning-> cerebellum.
  • image p543fig15.06 The circuit between dentate granule cells and CA1 hippocampal pyramid cells seems to compute spectrally timed responses. See the text for details.
    || Hippocampal interpretation. 1. Dentate granule cells (Berger, Berry, Thompson 1986): "increasing firing...in the CS period...the latency...was constant". 2. Pyramidal cells: "Temporal model" Dentate granule cells-> CA3 pyramids. 3. Convergence (Squire etal 1989): 1e6 granule cells, 1.6e5 CA3 pyramids. 80-to-1 (ri).
  • image p549fig15.19 How the adaptively timed hippocampal spectrum T inhibits (red arrow) the orienting system A as motivated attention in orbitofrontal cortex Si(2) peaks at the ISI.
    || Conditioning, Attention, and Timing circuit. Hippocampus spectrum-> Amygdala orienting system-> neocortex motivational attention. Adaptive timing inhibits orienting system and maintains adaptively timed Motivated Attention on the CS.
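    Howell: a minimal sketch of spectrally timed learning (my illustration, not the model's equations). A population with a spectrum of rates r_i peaks at delays 1/r_i; Hebbian sampling by the US strengthens the cells whose peaks bracket the CS-US interval, so the summed, learned output T peaks near the ISI, which is when it can inhibit the orienting system:

      import numpy as np

      rates = np.linspace(0.3, 10.0, 80)         # spectrum of rates r_i across cells
      t = np.linspace(0.01, 8.0, 800)[None, :]   # time since CS onset (s)
      r = rates[:, None]
      x = (r * t) * np.exp(1.0 - r * t)          # cell i peaks at t = 1/r_i (max value 1)

      ISI = 2.0                                  # US arrives 2 s after CS onset
      us = np.exp(-0.5 * ((t - ISI) / 0.1)**2)   # brief US (training) signal
      w = (x * us).sum(axis=1, keepdims=True)    # Hebbian overlap of each cell with the US
      T = (w * x).sum(axis=0)                    # adaptively timed population output
      print("T peaks at t =", round(float(t[0, T.argmax()]), 2), "s")  # close to the ISI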
  • image p557fig15.26 Brain regions and processes that contribute to autistic behavioral symptoms when they become imbalanced in prescribed ways.
    || Basal Ganglia prolonged gate opening <-> { Amygdala emotionally depressed-> [hippocampus- hyperspecific learning; Cerebellum- adaptive timing fails; hypofrontal blocking fails, no Theory of Mind]-> Neocortex; Neocortex- rewards not received-> Amygdala}.
  • image p573fig16.01 The experimental chamber (A) and neurophysiological recordings from a rat hippocampus (B) that led to the discovery of place cells. See the text for details.
    ||
  • image p575fig16.03 As a rat navigates in its experimental chamber (black curves), neurophysiological recordings disclose the firing patterns (in red) of (a) a hippocampal place cell and (b) an entorhinal grid cell.
    ||
  • image p578fig16.04 Cross-sections of the hippocampal regions and the inputs to them. See the text for details.
    || EC-> CA1-> CA3-> DG. Layers [V/VI, III, II].
  • image p583fig16.10 The GRIDSmap model is embedded into a more complete representation of the processing stages from receipt of angular head velocity and linear velocity signals to this learning of place cells.
    || GRIDSmap. Pre-wired 2D stripe cells, learns 2D grid cells. vestibular cells [angular head velocity-> head direction cells, linear velocity]-> stripe cells- small scale 1D periodic spatial code (ECIII)-> SOM grid cells entorhinal cortex- small scale 2D periodic spatial scale-> SOM place cells hippocampal cortex- large scale 2D spatial code (dentate/CA3). Unified hierarchy of SOMs.
  • image p600fig16.36 The entorhinal-hippocampal system has properties of an ART spatial category learning system, with hippocampal place cells as the spatial categories. See the text for details.
    || Entorhinal-hippocampal interactions as an ART system. Hippocampal place cells as spatial categories. Angular head velocity-> head direction cells-> stripe cells- small scale 1D periodic code (ECIII) SOM-> grid cells- small scale 2D periodic code (ECII) SOM-> place cells- larger scale spatial map (DG/CA3)-> place cells (CA1)-> conjunctive-coding cells (EC V/VI)-> top-down feedback back to stripe cells- small scale 1D periodic code (ECIII). stripe cells- small scale 1D periodic code (ECIII)-> place cells (CA1).
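    Howell: a minimal sketch of the stripe cells at the base of this SOM hierarchy (my illustration; parameters hypothetical). Each stripe cell path-integrates linear velocity along its preferred direction and fires periodically in the integrated displacement:

      import numpy as np

      def stripe_activity(path_xy, direction, spacing, phase=0.0):
          # project the path onto the preferred direction (path integration),
          # then fire periodically in that displacement (period = spacing)
          d = np.array([np.cos(direction), np.sin(direction)])
          cell_phase = (path_xy @ d / spacing + phase) % 1.0
          return np.exp(np.cos(2*np.pi*cell_phase) - 1.0)

      # straight 1.5 m run along x: a 0.30 m stripe cell fires every 0.30 m
      path = np.column_stack([np.linspace(0, 1.5, 300), np.zeros(300)])
      act = stripe_activity(path, direction=0.0, spacing=0.30)
      print(np.round(path[act > 0.99, 0], 2))   # ~ multiples of 0.30 m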
  • image p602fig16.37 Data showing the effect of hippocampal inactivation by muscimol on grid cell firing before, during, and six hours after the muscimol, reading from left to right.
    || Hippocampal inactivation disrupts grid cells (Bonnevie etal 2013). muscimol inactivation. spikes on trajectory: [before, after min [6-20, 20-40, 40-60, 6h]]. rate map (Hz) [18.6, 11.4, 9.5, 6.7, 10.8]. spatial autocorrelogram g=[1.12, 0.05, -0.34, 0.09, 1.27].
  • image p603fig16.38 Role of hippocampal feedback in maintaining grid fields. (a) Data showing the effect of hippocampal inactivation before and during muscimol inhibition of hippocampal cells, as in Figure 16.37. (b) Model simulation with normal grid fields. (c) Model simulation that emulates the effect of hippocampal inhibition on grid fields.
    || (a) Data: hippocampal inactivation [before, after] cart [spikes on trajectory (p: [18.6, 6.7] Hz), spatial autocorrelogram (g= [1.12, 0.09])]. (b) Model: noise-free path integration, [spikes on trajectory (p: 14.56 Hz), rate map, spatial autocorrelogram (g= 1.41), dynamic autocorrelogram (g=0.6)]. (c) Model: noisy path integration + non-specific tonic inhibition, [spikes on trajectory (p: 11.33 Hz), rate map, spatial autocorrelogram (g= 0.05), dynamic autocorrelogram (g=0.047)].
  • image p617fig16.50 The perirhinal and parahippocampal cortices enable adaptively timed reinforcement learning and spatial navigational processes that are modeled by Spectral Spacing models in the What and Where cortical streams, respectively, to be fused in the hippocampus.
    || What and Where inputs to the hippocampus (Diana, Yonelinas, Ranganath 2007). Adaptively timed conditioning and spatial navigation. Hippocampus <-> Entorhinal Cortex <-> [Perirhinal Cortex <-> what, Parahippocampal Cortex <-> where].
  • p190 Howell: [neural microcircuits, modal architectures] used in ART -
    bottom-up filters | top-down expectations | purpose
    instar learning | outstar learning | p200fig05.13 Expectations focus attention: [in, out]star often used to learn the adaptive weights. top-down expectations to select, amplify, and synchronize expected patterns of critical features, while suppressing unexpected features (a minimal sketch of the instar/outstar pair follows this table)
    LGN Lateral Geniculate Nucleus | V1 cortical area | p192fig05.06 focus attention on expected critical features
    EC-III entorhinal stripe cells | CA1 hippocampal place cells | p600fig16.36 entorhinal-hippocampal system has properties of an ART spatial category learning system, with hippocampal place cells as the spatial categories
    auditory orienting arousal | auditory category | p215fig05.28 How a mismatch between bottom-up and top-down patterns can trigger activation of the orienting system A and, with it, a burst of nonspecific arousal to the category level. ART Matching Rule: TD mismatch can suppress a part of F1 STM pattern, F2 is reset if degree of match < vigilance
    auditory stream with/without [silence, noise] gaps | perceived sound continues? | p419fig12.17 The auditory continuity illusion: Backwards in time - How does a future sound let past sound continue through noise? Resonance!
    visual perception, learning and recognition of visual object categories | motion perception, spatial representation and target tracking | p520fig14.02 pART predictive Adaptive Resonance Theory. Many adaptive synapses are bidirectional, thereby supporting synchronous resonant dynamics among multiple cortical regions. The output signals from the basal ganglia that regulate reinforcement learning and gating of multiple cortical areas are not shown.
    red - cognitive-emotional dynamics
    green - working memory dynamics
    black - see [bottom-up, top-down] lists
    EEG with mismatch, arousal, and STM reset events | expected [P120, N200, P300] event-related potentials (ERPs) | p211fig05.21 Sequences of P120, N200, and P300 event-related potentials (ERPs) occur during oddball learning EEG experiments under conditions that ART predicted should occur during sequences of mismatch, arousal, and STM reset events, respectively.
    Cognitive | Emotional | p541fig15.02 nSTART neurotrophic Spectrally Timed Adaptive Resonance Theory: Hippocampus can sustain a Cognitive-Emotional resonance that can support "the feeling of what happens" and knowing what event caused that feeling. Hippocampus enables adaptively timed learning that can bridge a trace conditioning gap, or other temporal gap between Conditioned Stimulus (CS) and Unconditioned Stimulus (US).

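    Howell: the sketch promised in the first table row. Instar and outstar are dual gated steepest-descent laws: an instar's incoming weights track the input pattern while its category is active (bottom-up filter), and an outstar's outgoing weights track the pattern it samples (top-down expectation). My minimal Python illustration:

      import numpy as np

      def instar(w, x_pre, y_post, lr=0.5):
          # bottom-up: weights into an active category track its input pattern
          return w + lr * y_post * (x_pre - w)

      def outstar(w, y_pre, x_post, lr=0.5):
          # top-down: weights from an active category track the sampled pattern
          return w + lr * y_pre * (x_post - w)

      x = np.array([0.9, 0.1, 0.5])   # feature pattern at F1
      w_bu, w_td = np.zeros(3), np.zeros(3)
      for _ in range(20):             # category node active (y = 1) while x is present
          w_bu = instar(w_bu, x, 1.0)
          w_td = outstar(w_td, 1.0, x)
      print(w_bu.round(3), w_td.round(3))  # both converge to x: prototype = expectation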
    background colours in the table signify :
  • image p419fig12.17 The auditory continuity illusion illustrates the ART Matching Rule at the level of auditory streaming. Its "backwards in time" effect of future context on past conscious perception is a signature of resonance.
    || Auditory continuity illusion. input, percept. Backwards in time - How does a future sound let past sound continue through noise? Resonance! - It takes a while to kick in. After it starts, a future tone can maintain it much more quickly. Why does this not happen if there is no noise? - ART Matching Rule! TD harmonic filter is modulatory without BU input. It cannot create something out of nothing.
  • image p420fig12.18 The ARTSTREAM model explains and simulates the auditory continuity illusion as an example of a spectral-pitch resonance. Interactions of ART Matching Rule and asymmetric competition mechanisms in cortical strip maps explain how the tone selects the consistent frequency from the noise in its own stream while separating the rest of the noise into another stream.
    || ARTSTREAM model (Grossberg 1999; Grossberg, Govindarajan, Wyse, Cohen 2004). SPINET. Frequency and pitch strips. Bottom Up (BU) harmonic sieve. Top Down (TD) harmonic ART matching. Exclusive allocation. Learn pitch categories based on early harmonic processing. A stream is a Spectral-Pitch Resonance!
  • image p425fig12.23 ARTSTREAM simulations of the auditory continuity illusion and other streaming properties (left column, top row). When two tones are separated by silence (Input), a percept of silence also separates them in a spectral-pitch resonance. (left column, bottom row). When two tones are separated by broadband noise, the percept of tone continues through the noise in one stream (stream 1) while the remainder of the noise occurs in a different stream (stream 2). (right column) Some of the other streaming properties that have been simulated by the ARTSTREAM model.
    || Auditory continuity does not occur without noise. Auditory continuity in noise. Other simulated streaming data.

  • p404 Chapter 12 From seeing and reaching to hearing and speaking - Circular reaction, streaming, working memory, chunking, and number
  • image p030tbl01.02 The What and Where cortical processing streams obey complementary laws. These laws enable the What stream to rapidly and stably learn invariant object categories without experiencing catastrophic forgetting, while the Where stream learns labile spatial and action representations to control actions that are aimed towards these objects.
    ||
  • WHAT | WHERE
    spatially-invariant object learning and recognition | spatially-variant reaching and movement
    fast learning without catastrophic forgetting | continually update sensory-motor maps and gains
    IT InferoTemporal Cortex | PPC Posterior Parietal Cortex
    | What | Where
    matching | excitatory | inhibitory
    learning | match | mismatch
  • image p032fig01.21 At least three parallel visual cortical streams respond to visual inputs that reach the retina. Two parvocellular streams process visual surfaces (blob stream) and visual boundaries (interblob stream). The magnocellular stream processes visual motion.
    || [Retina, LGNs, V[1,2,3,4], MT] to [What- inferotemporal areas, Where- parietal areas]: visual parallel streams [2x blob, 1x bound]
  • image p039tbl01.03 The link between consciousness and movement
    ||
    VISUAL | seeing, knowing, and reaching
    AUDITORY | hearing, knowing, and speaking
    EMOTIONAL | feeling, knowing, and acting
  • image p092fig03.05 A cross-section of the retinal layer. Note that light stimuli need to go through all retinal layers before they reach the photoreceptor layer at which the light signals are registered.
    || light stimuli ->
    retinal layers | cellular composition
    inner limiting membrane |
    retinal nerve fibre | ganglion nerve fibres
    ganglion cell | ganglion
    inner plexiform | amacrine
    inner nuclear | horizontal
    outer plexiform |
    outer limiting membrane |
    photoreceptor | rod
    photoreceptor | cone
    retinal pigment epithelium |
    <- signal transduction. http://brain.oxfordjournals.org/content/early/2011/01/20/brain.awq346
  • image p232fig05.41 (a)-(c). The sequence of interlaminar events that SMART predicts during a mismatch reset. (d) Some of the compatible neurophysiological data.
    || Mismatch causes layer 5 dendritic spikes that trigger reset. (a) Arousal causes increase in nonspecific thalamic nuclei firing rate and layer 5 dendritic and later somatic spikes (Larkum and Zhu 2002, Williams and Stuart 1999) (b) Layer 5 spikes reach layer 4 via layer 6i and inhibitory neurons (Lund and Boothe 1975, Gilbert and Wiesel 1979) (c) habituative neurotransmitters in layer 6i shift the balance of active cells in layer 4 (Grossberg 1972, 1976) (d) Dendritic stimulation fires layer 5 (Larkum and Zhu 2002): stimulation of apical dendrites by the nonspecific thalamus
  • image p278fig06.28 If the feature-category resonances cannot form, say due to a lesion in IT, then a surface-shroud resonance can still support conscious seeing of an attended object, and looking at or reaching for it, even if the individual doing so knows nothing about the object, as occurs during visual agnosia. The surface-shroud resonance supports both spatial attention and releases commands that embody the intention to move towards the attended object.
    || What kinds of resonances support knowing vs seeing? visual agnosia: reaching without knowing Patient DF (Goodale etal 1991). Attention and intention both parietal cortical functions (Andersen, Essick, Siegel 1985; Gnadt, Andersen 1988; Snyder, Batista, Andersen 1997, 1998)
  • image p303fig08.20 The G-wave speeds up with the distance between flashes at a fixed delay, and has a consistent motion across multiple spatial scales.
    || G-wave properties (Grossberg 1977). Theorem 2 (Equal half-time property): the half-time is the time at which the motion signal reaches position w=L/2. Apparent motion speed-up with distance: this half-time is independent of the distance L between the two flashes. Consistent motion across scales: half-time is independent of the scale size K. Method of proof: elementary algebra and calculus (Grossberg, Rudd 1989 appendix)
  • image p304fig08.21 A computer simulation of the equal half-time property whereby the apparent motions within different scales that respond to the same flashes all reach the half-way point in the motion trajectory at the same time.
    || Equal half-time property: how multiple scales cooperate to generate motion percept. Travelling waves from Gaussian filters of different sizes bridge the same distance in comparable time. The time needed to bridge half the distance between flashes is the same.
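    Howell: a sketch of the equal half-time property (my illustration, following the flavor of the Grossberg-Rudd account, with made-up decay rates). A waning Gaussian trace at flash position 0 and a waxing one at position L produce a single moving activity peak; the peak crosses the half-way point w = L/2 when the two trace amplitudes are equal, a time independent of both the flash separation L and the Gaussian scale K:

      import numpy as np

      def half_time(L, K):
          w = np.linspace(-L, 2*L, 2001)
          for t in np.linspace(0.01, 1.0, 200):
              a, b = np.exp(-3*t), 1 - np.exp(-3*t)   # waning flash-1, waxing flash-2 traces
              f = a*np.exp(-w**2/(2*K**2)) + b*np.exp(-(w - L)**2/(2*K**2))
              if w[f.argmax()] >= L/2:
                  return round(t, 3)

      for L in (2.0, 4.0, 8.0):
          for K in (1.0, 2.0):
              print(L, K, half_time(L, K))   # same half-time for every L and K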
  • image p335fig08.61 Behavioral data (left image) and simulation (right image) about speed in correct and error trials of the RT task. See text for details.
    || Behavioral data: speed, correct and error trials (RT task) (Roitman, Shadlen 2002). More coherence in the motion causes faster reaction time.
  • image p350fig09.22 How the negative Gaussian of an obstacle causes a peak shift to avoid the obstacle without losing sight of how to reach the goal.
    || Steering dynamics: obstacle avoidance. body-centered coordinates [obstacle, goal, heading] -> steering
  • image p351fig09.25 By the time MT+ is reached, directional transient cells and directional filters have begun to extract more global directional information from the image.
    || MT+ computes global motion estimate. Estimate global motion from noisy local motion estimates.
  • image p414fig12.11 Neurophysiological data from cortical areas 4 and 5 (every other column) and simulations thereof (other columns) during a reach.
    || activation vs time. (a) area 4 phasic RT (IFV) (b) area 4 tonic (OPV) (c) area 4 phasic-tonic (OFPV) (d) area 4 phasic MT (DVV) (e) area 5 phasic (DV) (f) area 5 tonic (PPV)
  • image p416fig12.13 The DIRECT model learns, using a circular reaction that is energized by an Endogenous Random Generator, or ERG, to make motor-equivalent volitionally-activated reaches. This circular reaction learns a spatial representation of a target in space. It can hereby make accurate reaches with clamped joints and on its first try using a tool under visual guidance; see Figure 12.16.
    || DIRECT model (Bullock, Grossberg, Guenther 1993). learns by circular reaction. learns spatial representation to mediate between vision and action. motor-equivalent reaching. can reach target with clamped joints. can reach target with a TOOL on the first try under visual guidance. How did tool use arise?!
  • image p416fig12.14 Computer simulations of DIRECT reaches with (b) a tool, (c) a clamped elbow, and (d) with a blindfold, among other constraints.
    || Computer simulations of DIRECT reaches [unconstrained, with TOOL, elbow clamped at 140°, blindfolded]
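    Howell: a hedged sketch of the motor-equivalence idea in DIRECT (my illustration, not the model's circuit: DIRECT learns its direction-to-rotation transform through a babbling circular reaction, whereas I approximate that transform with the arm's Jacobian transpose). A spatial difference vector (target minus hand) is mapped into joint rotations, so the same spatial command still reaches the target when a joint is clamped:

      import numpy as np

      lengths = np.array([1.0, 0.8, 0.6])      # redundant 3-joint planar arm

      def fk(q):                               # forward kinematics: hand position
          s = np.cumsum(q)
          return np.array([np.sum(lengths*np.cos(s)), np.sum(lengths*np.sin(s))])

      def jacobian(q):
          s = np.cumsum(q)
          J = np.zeros((2, 3))
          for k in range(3):
              J[0, k] = -np.sum(lengths[k:]*np.sin(s[k:]))
              J[1, k] =  np.sum(lengths[k:]*np.cos(s[k:]))
          return J

      def reach(target, clamped=None, steps=4000, gain=0.1):
          q = np.array([0.4, 0.4, 0.4])
          for _ in range(steps):
              dv = target - fk(q)              # spatial difference vector (vision)
              dq = gain * jacobian(q).T @ dv   # direction-to-rotation transform
              if clamped is not None:
                  dq[clamped] = 0.0            # clamped joint cannot move
              q += dq
          return fk(q).round(3)

      target = np.array([1.2, 1.2])
      print(reach(target))              # unconstrained reach -> ~[1.2, 1.2]
      print(reach(target, clamped=1))   # middle joint clamped: other joints compensate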
  • image p417fig12.15 The DIRECT and DIVA models have homologous circuits to learn and control motor-equivalent reaching and speaking, with tool use and coarticulation as resulting properties. See the text for why.
    || From Seeing and Reaching to Hearing and Speaking, Circular reactions (Piaget 1945, 1951, 1952). Homologous circuits for development and learning of motor-equivalent REACHING and SPEAKING. DIRECT TOOL use (Bullock, Grossberg, Guenther 1993), DIVA Coarticulation (Guenther 1995)
  • image p428fig12.25 (left architecture) Auditory-articulatory feedback loop whereby babbled sounds activate learning in an imitative map that is later used to learn to reproduce the sounds of other speakers. An articulatory-to-auditory expectation renders learning possible by making the auditory and motor data dimensionally consistent, as in the motor theory of speech. (right architecture) Parallel streams in the ARTSPEECH model for learning speaker-independent speech and language meaning, including a mechanism for speaker normalization (right cortical stream) and for learning speaker-dependent vocalic qualities (left cortical stream).
    || left: Speaker-dependent vocalic qualities; right: Speaker-independent speech and language meaning
  • image p430fig12.26 The NormNet model shows how speaker normalization can be achieved using specializations of the same mechanisms that create auditory streams. See the text for how.
    || [Anchor vs Stream] log frequency map. -> diagonals-> Speaker-independent acoustic item information-> [BU adaptive filter, TD learned expectation]-> learned item recognition categories
  • image p446fig12.44 (left column, top row) LIST PARSE can model linguistic data from human subjects. In this figure, model parameters are fixed to enable a close fit to data about error-type distributions in immediate free recall experiments, notably transposition errors. (right column, top row) Simulation and data showing bowing of the serial position curve, including an extended primacy gradient. (left column, bottom row) The simulation curve overlays data about list length effects, notably the increasing recall difficulty of longer lists during immediate serial recall (ISR). (right column, bottom row) Simulation (bottom image) and data (top image) of the limited temporal extent for recall.
    || (1. TL) Error-type distributions in immediate serial recall (Henson etal 1996). % occurrence vs serial position. Graph convention: Data- dashed lines; Simulations- solid lines. Six letter visual ISR. Order errors- transpositions of neighboring items are the most common. Model explanation: Noisy activation levels change relative order in primacy gradient. Similar activation of neighboring items most susceptible to noise. Model parameters fitted on these data. (2. TR) Bowing of serial position curve (Cowan etal 1999). % correct vs serial position. Auditory ISR with various list lengths (graphs shifted rightward): For [, sub-]span lists- extended primacy, with one (or two) item recency; Auditory presentation- enhanced performance for last items. LIST PARSE: End effects- first and last items half as many members; Echoic memory- last presented item retained in separate store. (3. BL) List length effects, circles (Crannell, Parrish 1968), squares (Baddeley, Hitch 1975), solid line- simulation. % list correct vs list length. Variable list length ISR: longer lists are more difficult to recall. LIST PARSE: More items- closer activation levels and lower absolute activity level with enough inputs; Noise is more likely to produce order errors, Activity levels more likely to drop below threshold;. (4. BR) Limited temporal extent for recall (Murdock 1961). % recalled vs retention interval (s). ISR task with distractor-filled retention intervals (to prevent rehearsal): Increasing retention interval - decreases probability of recalling list correctly; Load dependence- longer list more affected by delays; Performance plateau- subjects reach apparent asymptote. LIST PARSE: Increase convergence of activities with time; loss of order information;.
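    Howell: a minimal sketch of the primacy-gradient story behind these fits (my illustration of the principle, not the LIST PARSE equations). Items are stored with decreasing activation; recall repeatedly picks the most active item and suppresses it; activation noise then yields mostly neighboring transpositions, and longer lists pack activations closer together, so order errors grow with list length:

      import numpy as np
      rng = np.random.default_rng(1)

      def p_list_correct(n, noise=0.06, trials=2000):
          correct = 0
          for _ in range(trials):
              acts = np.linspace(1.0, 0.5, n) + rng.normal(0, noise, n)  # noisy primacy gradient
              order = []
              for _ in range(n):             # recall = iterated winner-take-all
                  i = int(np.argmax(acts))
                  order.append(i)
                  acts[i] = -np.inf          # suppression after rehearsal
              correct += (order == list(range(n)))
          return correct / trials

      for n in (4, 6, 8, 10):
          print(n, p_list_correct(n))   # longer lists -> closer activations -> more errors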
  • image p231fig05.39 The SMART hypothesis testing and learning cycle predicts that vigilance increases when a mismatch in subcortical regions like the nonspecific thalamus activates the nucleus basalis of Meynert which, in turn, broadcasts a burst of the neurotransmitter acetylcholine, or ACh, to deeper cortical layers. Due to the way in which LAMINART proposes that cortical matching and mismatching occurs, this ACh burst can increase vigilance and thereby trigger a memory search. See the text for details.
    || [BU input, [, non]specific thalamic nucleus, thalamic reticular nucleus, neocortical laminar circuit] cart [Arousal, Reset, Search, Vigilance]
  • image p232fig05.41 (a)-(c). The sequence of interlaminar events that SMART predicts during a mismatch reset. (d) Some of the compatible neurophysiological data.
    || Mismatch causes layer 5 dendritic spikes that trigger reset. (a) Arousal causes increase in nonspecific thalamic nuclei firing rate and layer 5 dendritic and later somatic spikes (Larkum and Zhu 2002, Williams and Stuart 1999) (b) Layer 5 spikes reach layer 4 via layer 6i and inhibitory neurons (Lund and Boothe 1975, Gilbert and Wiesel 1979) (c) habituative neurotransmitters in layer 6i shift the balance of active cells in layer 4 (Grossberg 1972, 1976) (d) Dendritic stimulation fires layer 5 (Larkum and Zhu 2002): stimulation of apical dendrites by the nonspecific thalamus
  • image p461fig12.58 The lisTELOS model built upon key processes that were earlier modeled by the TELOS model. See the text for details.
    || TELOS model (Brown, Bullock, Grossberg 1999, 2004). shows [BG nigro-[thalamic, collicular], FEF, ITa, PFC, PNR-THAL, PPC, SEF, SC, V1, V4/ITp, Visual Cortex input] and [GABA].
  • image p523fig14.03 (a) The MOTIVATOR neural model generalizes CogEM by also including the basal ganglia. It can hereby explain and simulate complementary functions of the amygdala and basal ganglia (SNc) during conditioning and learned performance. The basal ganglia generate Now Print signals in response to unexpected rewards. These signals modulate learning of new associations in many brain regions. The amygdala supports motivated attention to trigger actions that are expected to occur in response to conditioned or unconditioned stimuli. Object Categories represent visual or gustatory inputs in anterior inferotemporal (ITA) and rhinal (RHIN) cortices, respectively. Value Categories represent the value of anticipated outcomes on the basis of hunger and satiety inputs, in amygdala (AMYG) and lateral hypothalamus (LH). Object-Value Categories resolve the value of competing perceptual stimuli in medial (MORB) and lateral (ORB) orbitofrontal cortex. The Reward Expectation Filter detects the omission or delivery of rewards using a circuit that spans ventral striatum (VS), ventral pallidum (VP), striosomal delay (SD) cells in the ventral striatum, the pedunculopontine nucleus (PPTN) and midbrain dopaminergic neurons of the substantia nigra pars compacta/ventral tegmental area (SNc/VTA). The circuit that processes CS-related visual information (ITA, AMYG, ORB) operates in parallel with a circuit that processes US-related visual and gustatory information (RHIN, AMYG, MORB). (b) Reciprocal adaptive connections between hypothalamus and amygdala enable amygdala cells to become learned value categories. The bottom region represents hypothalamic cells, which receive converging taste and metabolite inputs whereby they become taste-drive cells. Bottom-up signals from activity patterns across these cells activate competing value category, or US Value Representations, in the amygdala. A winning value category learns to respond selectively to specific combinations of taste-drive activity patterns and sends adaptive top-down priming signals back to the taste-drive cells that activated it. CS-activated conditioned reinforcer signals are also associatively linked to value categories. Adaptive connections end in (approximately) hemidiscs. See the text for details.
    ||
  • image p524fig14.04 (a) Model basal ganglia circuit for the control of dopaminergic Now Print signals from the substantia nigra pars compacta, or SNc, in response to unexpected rewards. Cortical inputs (Ii), activated by conditioned stimuli, learn to excite the SNc via a multi-stage pathway from the ventral striatum (S) to the ventral pallidum and then on to the PPTN (P) and the SNc (D). The inputs Ii excite the ventral striatum via adaptive weights W_IS, and the ventral striatum excites the SNc with strength W_PD. The striosomes, which contain an adaptive spectral timing mechanism [xij, Gij, Yij, Zij], learn to generate adaptively timed signals that inhibit reward-related activation of the SNc. Primary reward signals (I_R) from the lateral hypothalamus both excite the PPTN directly (with strength W_RP) and act as training signals to the ventral striatum S (with strength W_RS) that trains the weights W_IS. Arrowheads denote excitatory pathways, circles denote inhibitory pathways, and hemidiscs denote synapses at which learning occurs. Thick pathways denote dopaminergic signals.
    ||
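  • Python sketch: the striosomal spectral timing mechanism [xij, Gij, Yij, Zij] in Figure 14.04 lends itself to a compact numerical illustration. This is a hedged toy version, not the published equations: the alpha-function activities, delay spectrum, learning rate, and reward time below are all assumptions. A spectrum of units peaks at different delays after CS onset, and reward-gated learning strengthens only those units whose peaks coincide with the reward, so the net inhibitory signal learns to peak at the expected reward delay.
    import numpy as np

    dt = 0.01
    t = np.arange(dt, 2.0, dt)                 # time after CS onset (s)
    peaks = np.linspace(0.1, 1.8, 40)          # assumed spectrum of peak delays
    # Alpha-function activities, one unit per row, each normalized to peak at 1.
    x = (t[None, :] / peaks[:, None]) * np.exp(1.0 - t[None, :] / peaks[:, None])

    z = np.zeros(len(peaks))                   # adaptive striosomal weights
    k_reward = int(0.9 / dt)                   # reward arrives 0.9 s after CS (assumed)
    for trial in range(25):
        z += 0.3 * x[:, k_reward] * (1.0 - z)  # reward-gated, bounded learning

    inhibition = z @ x                         # net adaptively timed inhibition of SNc
    print("inhibition peaks at t = %.2f s" % t[np.argmax(inhibition)])  # near 0.9 s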
  • image p559fig15.27 Brain regions and processes that contribute to the release of dopaminergic Now Print signals by the substantia nigra pars compacta, or SNc, in response to unexpected reinforcing events. See the text for details.
    || Model of spectrally timed SNc learning (Brown, Bullock, Grossberg 1999). Delayed inhibitory expectations of reward. Dopamine cells signal an error in reward prediction timing or magnitude. Immediate excitatory predictions of reward. Lateral hypothalamus (Primary Reward Input)-> [(+)ventral striatum <-> ventral pallidum (+)-> PPTN(+)-> SNc]. SNc-> [dopamine signal -> ventral striatum, Striosomal cells]. Conditioned Stimuli (CS)(+)-> [ventral striatum, striosomal cells]. Striosomal cells(-)-> SNc.
  • image p560fig15.29 Excitatory pathways that support activation of the SNc by a US and the conditioning of a CS to the US.
    || Excitatory pathway. Primary reward (apple juice) briefly excites lateral hypothalamus. Hypothalamic-PPTN excitation causes SNc dopamine burst. Hypothalamic activity excites ventral striatum for training. Active CS working memory signals learn to excite ventral striatum. Lateral hypothalamus (Primary Reward Input)-> [(+)ventral striatum <-> ventral pallidum(+)-> PPTN(+)-> SNc]. SNc-> [dopamine signal -> ventral striatum. Conditioned Stimuli working memory trace (CS)(+)-> ventral striatum.
  • image p560fig15.30 The inhibitory pathway from striosomal cells to the SNc is able to inhibit the SNc when a reward occurs with expected timing and magnitude.
    || Inhibitory pathway. Learning: CS-striosomal LTP occurs due to a three-way coincidence [An active CS working memory input, a Ca2+ spike, a dopamine burst]; Signaling: The delayed Ca2+ spike facilitates striosomal-SNc inhibition;. Striosomal cells learn to predict both timing and magnitude of reward signal to cancel it: reward expectation;. Conditioned stimuli (CS) LTP-> Striosomal cells <- dopamine | (-)-> SNc->.
  • image p561fig15.32 The SNc can generate both dopamine bursts and dips in response to rewards whose amplitude is unexpectedly large or small.
    || Inhibitory pathway: expectation magnitude. 1. If reward is greater than expected, a dopamine burst causes striosomal expectation to increase. 2. If reward is less than expected, a dopamine dip causes striosomal expectation to decrease. 3. This is a negative feedback control system for learning. Conditioned stimuli (CS)-> Striosomal cells <- dopamine | (-)-> SNc->.
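  • Python sketch: point 3 above describes a negative feedback control loop that can be written in two lines. A toy version (the learning rate and reward schedule are assumptions): dopamine signals the difference between delivered and expected reward, and the striosomal expectation is trained by that same dopamine signal until it cancels the reward.
    expectation = 0.0                          # striosomal reward expectation
    for trial in range(20):
        reward = 1.0 if trial < 12 else 0.5    # reward shrinks on trial 12
        dopamine = reward - expectation        # burst if positive, dip if negative
        expectation += 0.3 * dopamine          # dopamine-gated striosomal learning
        print(f"trial {trial:2d}: dopamine = {dopamine:+.3f}")
    # Bursts on early trials fade as the expectation converges; the smaller
    # reward on trial 12 produces a dip that retrains the expectation downward.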
  • image p569fig15.40 The direct and indirect basal ganglia circuits that control GO and STOP movement signals. See the text for details.
    || [Direct path GO(+), Indirect path STOP(+), dopamine from SNc(+-)]-> striatum. GO-> GPi/SNr-> Thalamus (VA/Vlo) <-> frontal cortex. STOP-> GPe <-> STN-> GPi/SNr. NAc-> GPi/SNr.
  • image p375fig11.06 The contrast constraint on binocular fusion is realized by obligate cells in layer 3B of cortical area V1.
    || Model implements contrast constraint on binocular fusion (cf. "obligate" cells Poggio 1991). An ecological constraint on cortical development. [left, right] eye cart V1-[4 monocular simple, 3B binocular simple, complex2/3A] cells. Inhibitory cells (red) ensure that fusion occurs when contrasts in left and right eye are approximately equal.
  • image p376fig11.09 The disparity filter in V2 helps to solve the correspondence problem by eliminating spurious contrasts using line-of-sight inhibition.
    || Model V2 disparity filter solves the correspondence problem. An ecological constraint on cortical development. [left, right] eye view: False matches (black) suppressed by line-of-sight inhibition (green lines). "Cells that fire together wire together".
  • image p581fig16.06 The learning of hexagonal grid cell receptive fields as an animal navigates an open field is a natural consequence of simple trigonometric properties of the positions at which the firing of stripe cells that are tuned to different directions will co-occur.
    || The Trigonometry of spatial navigation. Coactivation of stripe cells.
  • image p583fig16.09 The GRIDSmap model used algorithmically defined stripe cells to process realistic rat trajectories. The stripe cell outputs then formed inputs to the adaptive filter of a self-organizing map which learned hexagonal grid cell receptive fields.
    || GRIDSmap. Self-organizing map receives inputs from stripe cells and learns to respond to most frequent co-activation patterns. Stripe cells combine speed and head direction to create a periodic 1D position code. Virtual rat navigated using live rat trajectories from Moser Lab. Speed and head direction drives stripe cells.
  • image p584fig16.11 GRIDSmap simulation of the learning of hexagonal grid fields. See the text for details.
    || Simulation results. Multiple phases per scale. response vs length scale (0.5m+).
  • image p585fig16.13 Hexagonal grid cell receptive fields develop if their stripe cell directional preferences are separated by 7, 10, 15, 20, or random numbers of degrees. The number and directional selectivities of stripe cells can thus be chosen within broad limits without undermining grid cell development.
    ||
  • image p585fig16.14 Superimposing firing of stripe cells whose directional preferences differ by 60 degrees supports learning hexagonal grid cell receptive fields in GRIDSmap.
    || GRIDSmap: from stripe cells to grid cells. Grid-cell Regularity from Integrated Distance through Self-organizing map. Superimposing firing of stripe cells oriented at intervals of 60 degrees. Hexagonal grid!
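  • Python sketch: the hexagonal coactivation claim is easy to check numerically. In this hedged toy version (cosine firing fields and a common stripe spacing are assumptions), summing periodic stripe fields whose preferred directions differ by 60 degrees yields a hexagonal lattice of coactivation peaks, while 90-degree separations yield a rectangular lattice.
    import numpy as np

    spacing = 0.35                             # stripe field period (m), assumed

    def coactivation(angles_deg, n=200, extent=1.0):
        xs = np.linspace(-extent, extent, n)
        X, Y = np.meshgrid(xs, xs)
        total = np.zeros_like(X)
        for a in np.deg2rad(angles_deg):
            proj = X * np.cos(a) + Y * np.sin(a)     # position along preferred direction
            total += np.cos(2.0 * np.pi * proj / spacing)
        return total

    hexagonal = coactivation([0, 60, 120])     # peaks form a hexagonal lattice
    rectangular = coactivation([0, 90])        # peaks form a rectangular lattice
    # Visualize, e.g.: import matplotlib.pyplot as plt; plt.imshow(hexagonal); plt.show()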
  • image p586fig16.15 Superimposing stripe cells oriented by 45 degrees does not lead to learning of rectangular grids in GRIDSmap, but it does in an oscillatory inference model.
    || Why is a hexagonal grid favored? Superimposing firing of stripe cells oriented at intervals of 45 degrees. Rectangular grid. This and many other possibilities do not happen in vivo. They do happen in the oscillatory inference model. How are they prevented in GRIDSmap?
  • image p587fig16.17 A finer analysis of the 2D trigonometry of spatial navigation showed that both the frequency and amplitude of coactivations by stripe cells determine the learning of hexagonal grid fields.
    || A refined analysis: SOM amplifies most frequent and energetic coactivations (Pilly, Grossberg 2012). [linear track, 2D environment]. (left) Stripe fields separated by 90°. 25 coactivations by 2 inputs. (right) Stripe fields separated by 60°. 23 coactivations by 3 inputs.
  • image p588fig16.18 Simulations of coordinated learning of grid cell receptive fields (second row) and unimodal place cell receptive fields (third row) by the hierarchy of SOMs in the GridPlaceMap model. Note the exquisite regularity of the hexagonal grid cell firing fields.
    || [stripe, grid, place] cells vs [spikes on trajectory, unsmoothed rate map, smoothed rate map].
  • image p605fig16.39 Data showing effects of medial septum (MS) inactivation on grid cells and network theta oscillations in medial entorhinal cortex (MEC). (A) Examples of disruption in the spatial expression of the hexagonal grid structure for two grid cells (Brandon etal 2011). (B) Temporal reduction in the power and frequency of network theta oscillations (Koenig etal 2011). (C) Temporary reduction in the gridness score, mean firing rate, and spatial stability of grid cells (Koenig etal 2011).
    || Disruptive effects of Medial Septum inactivation in Medial Entorhinal Cortex (Brandon etal 2011; Koenig etal 2011). (A) Rate map [rate map, spatial autocorrelations, trajectory] vs [baseline, sub-sampled, medial septum inactivation, 3-6 hour recovery, 24 hour recovery], [rate map (Hz- m, p), spatial autocorrelations (gridness)][ 1.2, 7.2, 1.1; 0.25, 1.7, 0.6; 0.25, 2.5, -0.53; 0.7, 5.1, 0.55; 1.0, 5.3, 1.3; 2.1, 15, 0.19; 1.7, 12, 0.71; 1.7, 3.2, -0.22; 1.8, 9.1, 0.68; 2.5, 13, 0.46]. (B) [normalized power at 7-9 Hz, frequency (Hz)] vs 5-minute periods. (C) [mean gridness score (+-SEM), mean firing rate (% of baseline), mean correlation coeff (+-SEM)] vs 10-minute periods.
  • p353 Chapter 10 Laminar computing by cerebral cortex - Towards a unified theory of biological and artificial intelligence
  • image p011fig01.07 The choice of signal function f determines how an initial activity pattern will be transformed and stored in short-term memory (STM). Among [same, slower, faster]-than-linear signal functions, only the last one can suppress noise. It does so as it chooses the population that receives the largest input for storage, while suppressing the activities of all other populations, thereby giving rise to a winner-take-all choice.
    || initial pattern (xi(0) vs i). Stored pattern: Xi(∞) = xi(∞) / sum[j: xj(∞)]. (See the sketch below.)
    linear: perfect storage of any pattern; amplifies noise (or no storage)
    slower-than-linear: saturates; amplifies noise
    faster-than-linear: chooses max [winner-take-all, Bayesian], categorical perception; suppresses noise, [normalizes, quantizes] total activity, finite state machine
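  • Python sketch: the stored-pattern properties in the table above can be reproduced by integrating Grossberg's recurrent shunting on-center off-surround equation with a faster-than-linear signal function. The parameter values below are assumptions chosen to show the winner-take-all property; the qualitative outcome is the theorem's, not the particular numbers'.
    import numpy as np

    A, B, dt = 0.1, 1.0, 0.01
    x = np.array([0.20, 0.25, 0.30, 0.35])     # activities after inputs shut off

    def f(w):
        return w ** 2                          # faster-than-linear signal function

    for _ in range(20000):
        s = f(x)
        x += dt * (-A * x + (B - x) * s - x * (s.sum() - s))

    print(np.round(x, 3))  # only the largest initial activity survives (WTA);
                           # its stored value is set by A and B, not by the input size.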
  • image p357fig10.04 Laminar Computing achieves its properties by computing in a new way that synthesizes the best properties of feedforward and feedback interactions, analog and digital computations, and preattentive and attentive learning. The property of analog coherence enables coherent groupings and decisions to form without losing sensitivity to the amount of evidence that supports them.
    || Laminar Computing: a new way to compute. 1. Feedforward and feedback: a) Fast feedforward processing when data are unambiguous (eg Thorpe etal), b) slower feedback chooses among ambiguous alternatives [self-normalizing property, real-time probability theory], c) A self-organizing system that trades certainty against speed: Goes beyond Bayesian models! 2. Analog and Digital: Analog Coherence combines the stability of digital with the sensitivity of analog. 3. Preattentive and Attentive Learning: Reconciles the differences of (eg) Helmholtz and Kanizsa, "A preattentive grouping is its own attentional prime" ...
  • image p230fig05.38 The Synchronous Matching ART, or SMART, model includes spiking neurons in a laminar cortical hierarchy. I developed SMART with my PhD student Massimiliano Versace. By unlumping LAMINART to include spiking neurons, finer details of neurodynamics, such as the existence of faster gamma oscillations during good enough matches, and slower beta oscillations during bad enough mismatches, could be shown as emergent properties of network interactions.
    || Second order thalamus -> specific thalamic nucleus -> thalamic reticular nucleus -> neocortical laminar circuit [6II, 6I, 5, 2/3, 1] -> Higher order cortex. Similar for First order thalamus -> First order cortex, with interconnection to Second order, nonspecific thalamic nucleus
  • image p232fig05.40 Computer simulation of how the SMART model generates (a) gamma oscillations if a good enough match occurs, or (c) beta oscillations if a bad enough match occurs. See the text for details.
    || Brain oscillations during match/mismatch, data, simulation. (a) TD corticothalamic feedback increases synchrony (Sillito etal 1994) (b) Match increases γ oscillations (c) Mismatch increases θ,β oscillations
  • image p233fig05.42 Mismatch-induced beta oscillations have been reported in at least three parts of the brain: V1, V4, and hippocampus. Although there may be other reasons for beta oscillations in the brain, those that are caused by a mismatch should be studied in concert with the gamma oscillations that occur during a good enough match. See the text for details.
    || Is there evidence for the [gamma, beta] prediction? Yes, in at least three parts of the brain, (Buffalo EA, Fries P, Landman R, Buschman TJ, Desimone R 2011, PNAS 108, 11262-11267) Does this difference in average oscillation frequencies in the superficial and deep layers reflect layer 4 reset? Superficial recording γ (gamma), Deep recording β (beta) (Berke etal 2008, hippocampus; Buschman and Miller 2009, FEF)
  • image p296fig08.07 When two flashes turn on and off out of phase with the correct range of interstimulus intervals, and not too far from one another, then either beta motion or phi motion is perceived.
    || Beta and Phi motion percepts. Beta motion: percepts of continuous motion of a well-defined object across empty intervening space. Phi motion: sense of "pure" motion without a concurrent percept of moving object. (Exner 1875) http://www.yorku.ca/eye/balls.htm
  • image p297fig08.08 When a second flash is more intense than the first flash, then apparent motion may occur from the second to the first flash.
    || Delta motion: motion from the second to the first flash. Data: (Kolers 1972; Korte 1915). Simulation: (Grossberg, Rudd 1992). This occurs when the luminance or contrast of the second flash is large compared to that of the first flash. Sustained and transient cells obey shunting dynamics whose averaging rates speed up with output intensity. The first flash to wane is the one that will be the source of the G-wave.
  • image p340fig09.07 Log polar remapping from the retina to cortical area V1 and beyond converts expansion, translation, and spiral flows on the retina into parallel flows, with different orientations, on the cortical map.
    || Log polar remapping of optic flow. retina -> cortex. Any combination of expansion and circular motion centered on the fovea maps to cortex as a single direction. Retinal Cartesian coordinates (x,y) map to cortical polar coordinates (r,theta). This makes it easy to compute directional receptive fields in the cortex!
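  • Python sketch: the log polar remapping is the complex logarithm z = log(x + iy), i.e. (x, y) -> (log r, theta). A small check (the sampled flow field is an assumption): an expansion flow centered on the fovea moves every cortical point in the same direction at the same rate, i.e. it becomes a single parallel flow.
    import numpy as np

    # Sample retinal points away from the fovea (log r diverges at r = 0).
    r = np.linspace(0.1, 1.0, 5)
    theta = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
    R, TH = np.meshgrid(r, theta)
    x, y = R * np.cos(TH), R * np.sin(TH)

    vx, vy = x, y                                        # expansion flow: radially outward
    u, v = np.log(np.hypot(x, y)), np.arctan2(y, x)      # cortical coordinates

    eps = 1e-3                                           # advance the flow one small step
    u2 = np.log(np.hypot(x + eps * vx, y + eps * vy))
    v2 = np.arctan2(y + eps * vy, x + eps * vx)

    du, dv = u2 - u, v2 - v
    print(np.allclose(dv, 0.0), np.allclose(du, du.flat[0]))  # True True:
    # all cortical points translate uniformly along +u (a parallel flow).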
  • image p423fig12.20 The Spatial Pitch Network, or SPINET, model shows how a log polar spatial representation of the sound frequency spectrum can be derived from auditory signals occurring in time. The spatial representation allows the ARTSTREAM model to compute spatially distinct auditory streams.
    || SPINET model (Spatial Pitch Network) (Cohen, Grossberg, Wyse 1995). 1. input sound 2. Gamma-tone filter bank 3. Short-term average energy spectrum 4. MAP transfer function 5. On-center off-surround and rectification 6. Harmonic weighting 7. Harmonic summation and competition -> PITCH
  • image p598fig16.34 The spiking GridPlaceMap model generates theta-modulated place and grid cell firing, unlike the rate-based model.
    || Theta-modulated cells in spiking model. [place, grid] cell vs [membrane potential (mV vs time), frequency vs inter-spike intervals (s), power spectra (normalized power vs frequency (Hz))].
  • image p605fig16.39 Data showing effects of medial septum (MS) inactivation on grid cells and network theta oscillations in medial entorhinal cortex (MEC). (A) Examples of disruption in the spatial expression of the hexagonal grid structure for two grid cells (Brandon etal 2011). (B) Temporal reduction in the power and frequency of network theta oscillations (Koenig etal 2011). (C) Temporary reduction in the gridness score, mean firing rate, and spatial stability of grid cells (Koenig etal 2011).
    || Disruptive effects of Medial Septum inactivation in Medial Entorhinal Cortex (Brandon etal 2011; Koenig etal 2011). (A) Rate map [rate map, spatial autocorrelations, trajectory] vs [baseline, sub-sampled, medial septum inactivation, 3-6 hour recovery, 24 hour recovery], [rate map (Hz- m, p), spatial autocorrelations (gridness)][ 1.2, 7.2, 1.1; 0.25, 1.7, 0.6; 0.25, 2.5, -0.53; 0.7, 5.1, 0.55; 1.0, 5.3, 1.3; 2.1, 15, 0.19; 1.7, 12, 0.71; 1.7, 3.2, -0.22; 1.8, 9.1, 0.68; 2.5, 13, 0.46]. (B) [normalized power at 7-9 Hz, frequency (Hz)] vs 5-minute periods. (C) [mean gridness score (+-SEM), mean firing rate (% of baseline), mean correlation coeff (+-SEM)] vs 10-minute periods.
  • Grossberg 2021 p229c2h0.60 SMART computer simulations demonstrate that a good enough match of a top-down expectation with a bottom-up feature pattern generates an attentive resonance during which the spikes of active cells synchronize in the gamma frequency range of 20-70 Hz (Figure 5.40). Many labs have reported a link between attention and gamma oscillations in the brain, including two articles published in 2001, one from the laboratory of Robert Desimone when he was at the National Institute of Mental Health in Bethesda (Fries, Reynolds, Rorie, Desimone 2001), and the other from the laboratory of Wolf Singer in Frankfurt (Engel, Fries, Singer 2001). ...
  • p618 Chapter 17 A universal developmental code - Mental measurements embody universal laws of cell biology and physics
  • image p025fig01.16 (left panel) The main processing stages of the Cognitive-Emotional-Motor (CogEM) model have anatomical interpretations in terms of sensory cortex, amygdala, and prefrontal cortex. Chapter 13 will describe in greater detail how CS cues activate invariant object categories in the sensory cortex, value categories in the amygdala, and object-value categories in the prefrontal cortex, notably the orbitofrontal cortex. The amygdala is also modulated by internal drive inputs like hunger and satiety. (right panel) Anatomical data support this circuit, as do many neurophysiological data.
    || drive -> amygdala -> prefrontal cortex <-> sensory cortex -> amygdala. [visual, somatosensory, auditory, gustatory, olfactory] cortex -> [amygdala, Orbital Prefrontal Cortex]. amygdala -> Lateral Prefrontal Cortex
  • image p058fig02.04 Serial learning paradigm: Learning the temporal order of events by practicing them in the order that they occur in time.
    || Learning a global arrow in time. How do we learn to encode the temporal order of events in LTM? serial learning. [w=intra, W=inter]trial interval. "... data about serial verbal learning (Figure 2.4) seemed to suggest that events can go "backwards in time". ..."
  • image p059fig02.05 Bowed serial position curve. This kind of data emphasizes the importance of modelling how our brains give rise to our minds using nonlinear systems of differential equations.
    || Effects of [inter, intra]trial intervals (Hovland 1938). # of errors vs list position. [w (sec), W (sec)] = (2 6) (4 6) (2 126) (4 126). Nonoccurrence of future items reduces the number of errors in response to past items. These data require a real-time theory for their explanation! that is, DIFFERENTIAL equations.
  • image p059fig02.06 The bowed serial position curve illustrates the sense in which "events can go backwards in time" during serial learning.
    || Bow due to backward effect in time. If the past influenced the future, but not conversely: # of errors vs list position; Data (Hovland, Hull, Underwood, etc).
  • image p071fig02.16 To solve the noise-saturation dilemma, individual neurons in a network that is receiving a distributed spatial pattern of inputs need to remain sensitive to the ratio of the input to them divided by the sum of all the inputs in that spatial pattern. Although the inputs are delivered to a finite number of neurons, the input and activity patterns are drawn continuously across the cells for simplicity.
    || Noise-Saturation Dilemma. [Ii, xi] vs t. [Input, Activity] pattern [small -> noise, large -> saturation]. Problem: remain sensitive to input RATIOS θi = Ii / sum[j: Ij] as total input I = sum[j: Ij] -> ∞. Many kinds of data exhibit sensitivity to ratios of inputs.
  • image p073fig02.19 Computing with cells: infinity does not exist in biology!
    || Computing in a bounded activity domain, Gedanken experiment (Grossberg 1970). Vm sub-areas [xm, B - xm], I (all m), m = [1, i, B].
    B: excitable sites
    xi(t): excited sites (activity, potential)
    B - xi(t): unexcited sites
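  • Python sketch: the resolution of the noise-saturation dilemma follows from the equilibrium of the shunting on-center off-surround equation dxi/dt = -A*xi + (B - xi)*Ii - xi*sum[j != i: Ij], namely xi = B*Ii/(A + I) with I = sum[j: Ij]. Activities track the ratios θi without saturating as the total input grows. (A, B, and the test inputs below are assumed values.)
    import numpy as np

    A, B = 1.0, 1.0

    def equilibrium(I):
        return B * I / (A + I.sum())           # xi = B*Ii / (A + total input)

    pattern = np.array([0.1, 0.3, 0.6])        # fixed ratios theta_i
    for scale in (1.0, 10.0, 1000.0):          # total input grows 1000-fold
        x = equilibrium(scale * pattern)
        print(scale, np.round(x / x.sum(), 3)) # normalized pattern stays [0.1 0.3 0.6]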
  • image p082fig02.37 My models begin with behavioral data, since brains are designed to achieve behavioral success. The text explains how models evolve in stages, through a process of successive refinements, or unlumpings. These unlumpings together carry out a kind of conceptual evolution, leading to models that can explain and predict ever larger psychological and neurobiological databases.
    || Modelling method and cycle.
    Behavioral data -(art of modeling)-> Design principles <- Neural data <-(brain predictions)- Mathematical model and analysis -(behavioral predictions)-> Behavioural data
    Operationalizes "proper level of abstraction"
    Operationalizes that you cannot "derive a brain" in one step.
  • image p085fig02.38 Our models have been used in many large-scale applications to engineering and technology. Linking brain to behavior explains how brain mechanisms give rise to psychological functions, and do so autonomously. The combination of mechanism, function, and autonomy helps to explain their value in helping to solve outstanding problems in technology.
    || Modeling method and cycle.
    Behavioral data -(art of modeling)-> Design principles <- Neural data <-(brain predictions)- Mathematical model and analysis -(behavioral predictions)-> Behavioural data
    Technology: Mathematical model and analysis <-> Technological applications
    At every stage, spin off new model designs and mechanisms to technologists who need autonomous intelligent applications.
  • image p134fig04.14 The kinds of displays that Michael Paradiso and Ken Nakayama used to catch filling-in "in the act" and which Karl Arrington then simulated using the Grossberg and Todorovic 1988 model.
    || Experiments on filling-in. Catching "filling-in" in the act (Paradiso, Nakayama 1991). (Arrington 1994 Vision Research 34, 3371-3387) simulated these data using the model of Grossberg and Todorovic 1988.
  • image p145fig04.23 If end gaps were not closed by end cuts, then color would flow out of every line end!
    || A perceptual disaster in the feature contour system. feature contour, line boundary. input -> [boundary, surface]. boundary -> surface. Color would flow out of every line end! as it does during neon color spreading.
  • image p151fig04.29 Experimental evidence of bipole cells in cortical area V2 was reported by Von der Heydt, Peterhans, and Baumgartner (1984).
    || Bipoles: first neurophysiological evidence (V2) (von der Heydt, Peterhans, Baumgartner 1984, Peterhans, von der Heydt 1988). (Grossberg 1984) prediction.
    Ordering: stimulus (S), probe location *, response of cells in V2?
    ...(S)*...                   YES
    ...*...(S)                   NO
    (S)...*...                   NO
    (S)...*...(S)                YES
    (S)...*... (more contrast)   NO
    (S)...*.....(S)              YES
    Evidence for receptive field.
  • image p151fig04.30 Anatomical evidence for long-range horizontal connections has also been reported, as illustrated by the example above from (Bosking etal 1997).
    || Anatomy: horizontal connections (V1) (Bosking etal 1997). tree shrew. [10, 20]*[20, 10, 0, -10, -20] (degrees).
  • image p152fig04.31 The predicted bipole cell receptive field (upper left corner) has been supported by both neurophysiological data and psychophysical data, and used in various forms by many modelers. See the text for details.
    || Bipoles through the ages. (Grossberg 1984; Grossberg, Mingolla 1985). (Field, Hayes, Hess 1993) "association field". (Heitger, von der Heydt 1993). (Williams, Jacobs 1997). cf. "relatability" geometric constraints on which contours get to group (Kellman & Shipley 1991). Also "tensor voting" (Ullman, Zucker, Mumford, Guy, Medioni, ...).
  • image p159fig04.36 Graffiti art by Banksy exploits properties of amodal boundary completion and spatial impenetrability.
    || p159c1h0.75 perceptual psychologist Nava Rubin "... When the wall is smooth, Banksy leaves the regions previously covered by stencil unpainted, relying on observers ..."
  • image p162fig04.38 How long-range cooperation among bipole cells and short-range competition by hypercomplex cells work together to generate the inverted-U in boundary strength that is found in the data of Figure 4.37 (right panel).
    || Cooperation and competition during grouping.
    few lines: wide spacing; inputs outside spatial range of competition; more inputs cause higher bipole activity
    more lines: narrower spacing; slightly weakens net input to bipoles from each inducer
    increasing line density: causes inhibition to reduce net total input to bipoles
  • image p163fig04.39 A schematic of the LAMINART model that explains key aspects of laminar visual cortical anatomy and dynamics. LGN -> V1 [6, 4, 2/3] -> V2 [6, 4, 2/3]
    || p163c1h0.6 "... The first article about laminar computing ... proposed how the laminar cortical model could process 2D pictures using bottom-up filtering and horizontal bipole grouping interactions (Grossberg, Mingolla, Ross 1997). In 1999, I was able to extend the model to also include top-down circuits for expectation and attention (Grossberg 1999)(right panel). Such a synthesis of laminar bottom-up, horizontal, and top-down circuits is characteristic of the cerebral cortex (left panel). I called it LAMINART because it began to show how properties of Adaptive Resonance Theory, or ART, notably the ART prediction about how top-down expectations and attention work, are realized by identical cortical cells and circuits. You can immediately see from the schematic laminar circuit diagram ... (right panel) that circuits in V2 seem to repeat circuits in V1, albeit with a larger spatial scale, despite the fact that V1 and V2 carry out different functions. How this anatomical similarity can coexist with functional diversity will be clarified in subsequent sections and chapters. It enables different kinds of biological intelligence to communicate seamlessly while carrying out their different psychological functions. ..."
  • image p165fig04.41 The Kanizsa-Minguzzi ring. See the text for details.
    || p165c1h0.6 "... (left panel), the annulus is divided by two line segments into annular sectors of unequal area. Careful viewing shows that the smaller sector looks a little brighter than the larger one. (Kanizsa, Minguzzi 1986) noted that "this unexpected effect is not easily explained. In fact, it cannot be accounted for by any simple psychological mechanism such as lateral inhibition or frequency filtering. Furthermore, it does not seem obvious to invoke organizational factors, like figural belongingness or figure-ground articulation." ...". p165c2h0.35 "... (Grossberg, Todorovic 1988). Our main claim is that the two radial lines play two roles, one in the formation of boundaries with which to contain the filling-in process, and the other as a source of feature contour signals that are filled-in within the annular regions to create a surface brightness percept. ..."
  • image p232fig05.40 Computer simulation of how the SMART model generates (a) gamma oscillations if a good enough match occurs, or (c) beta oscillations if a bad enough match occurs. See the text for details.
    || Brain oscillations during match/mismatch, data, simulation. (a) TD corticothalamic feedback increases synchrony (Sillito etal 1994) (b) Match increases γ oscillations (c) Mismatch increases θ,β oscillations
  • image p232fig05.41 (a)-(c). The sequence of interlaminar events that SMART predicts during a mismatch reset. (d) Some of the compatible neurophysiological data.
    || Mismatch causes layer 5 dendritic spikes that trigger reset. (a) Arousal causes increase in nonspecific thalamic nuclei firing rate and layer 5 dendritic and later somatic spikes (Larkum and Zhu 2002, Williams and Stuart 1999) (b) Layer 5 spikes reach layer 4 via layer 6i and inhibitory neurons (Lund and Boothe 1975, Gilbert and Wiesel 1979) (c) habituative neurotransmitters in layer 6i shift the balance of active cells in layer 4 (Grossberg 1972, 1976) (d) Dendritic stimulation fires layer 5 (Larkum and Zhu 2002); stimulation of apical dendrites by the nonspecific thalamus
  • image p252fig06.01 A surface-shroud resonance begins to form when the surface representations of objects bid for spatial attention. In addition to these topographic excitatory inputs, there is long-range inhibition of the spatial attention cells that determines which inputs will attract spatial attention.
    || Bottom-up spatial attention competition. [more, less] luminous perceptual surfaces -> competition -> spatial attention
  • image p253fig06.02 After bottom-up surface inputs activate spatial attentional cells, they send top-down topographic excitatory signals back to the surface representations. This recurrent shunting on-center off-surround network contrast enhances larger attentional activities while approximately normalizing the total spatial attentional activity. A surface-shroud resonance hereby forms that selects an attentional shroud, enhances the perceived contrast of the attended surface (light blue region), and maintains spatial attention on it.
    || Surface-shroud resonance. perceptual surfaces -> competition -> spatial attention. (Carrasco, Penpeci-Talgar, and Eckstein 2000, Reynolds and Desimone 2003)
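  • Python sketch: a hedged toy version of the surface-shroud loop (all gains and inputs below are assumptions, and the model's sigmoid signal functions, which sharpen the competition, are omitted). Surface activities drive spatial attention cells that compete via long-range inhibition; top-down feedback multiplies the attended surface's drive, so the most luminous surface gains attentional share and perceived contrast.
    import numpy as np

    surface = np.array([1.0, 0.8, 0.6])        # three surfaces, most luminous first
    attn = np.zeros(3)
    dt, gain = 0.1, 0.5                        # step size and feedback gain (assumed)

    for _ in range(400):
        drive = surface * (1.0 + gain * attn)  # top-down enhancement of attended surface
        # Shunting competition: on-center from own surface, off-surround from the rest.
        attn += dt * (-attn + (1.0 - attn) * drive - attn * (drive.sum() - drive))

    print(np.round(attn, 3), round(attn.sum(), 3))
    # The winner's attentional share exceeds its input share, while the
    # total attentional activity remains approximately normalized.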
  • image p257fig06.05 A curve tracing task with monkeys was used by Roelfsema, Lamme, and Spekreijse in 1998 to demonstrate how spatial attention can flow along object boundaries. See the text for details.
    || Attention flows along curves: Roelfsema etal 1998: Macaque V1. fixation (300ms) -> stimulus (600ms RF - target curve, distractor) -> saccade. Crossed-curve condition: attention flows across junction between smoothly connected curve segments, Gestalt good continuation
  • image p258fig06.06 Neurophysiological data and simulation of how attention can flow along a curve. See the text for details.
    || Simulation of Roelfsema etal 1998, data & simulation. Attention directed only to far end of curve. Propagates along active layer 2/3 grouping to distal neurons.
  • image p265fig06.13 The basal ganglia gate perceptual, cognitive, emotional, and more processes through parallel loops.
    || [motor, oculomotor, dorsolateral, ventral-orbital, anterior cingulate] vs. [Thalamus, pallidum-substantia nigra, Striatum, Cortex]
  • image p267fig06.14 Feedback from object surfaces to object boundaries uses surface contours. This feedback assures complementary consistency and enables figure-ground separation. A corollary discharge of the surface contours can be used to compute salient object feature positions.
    || Perceptual consistency and figure-ground separation.
  • image p271fig06.17 Persistent activity in IT cells is just what is needed to enable view-invariant object category learning by ARTSCAN to be generalized to [view, position, size]-invariant category learning by positional ARTSCAN, or pARTSCAN. See the text for details.
    || Persistent activity in IT. Physiological data show that persistent activity exists in IT (Fuster and Jervey 1981, Miyashita and Chang 1988, Tomita etal 1999). Adapted from (Tomita etal 1999 Nature)
  • image p273fig06.20 pARTSCAN can simulate the IT cell recoding that Li and DiCarlo reported in their swapping experiments because the swapping procedure happens without causing a parietal reset burst to occur. Thus the originally activated invariant category remains activated and can get associated with the swapped object features.
    || Simulation of Li and DiCarlo swapping data. data (Li and DiCarlo 2008), model (Cao, Grossberg, Markowitz 2011). normalized response vs. exposure (swaps and/or hours)
  • image p274fig06.21 pARTSCAN can also simulate the trade-off in IT cell responses between position invariance and selectivity that was reported by Zoccolan etal 2007. This trade-off limits the amount of position invariance that can be learned by a cortical area like V1 that is constrained by the cortical magnification factor.
    || Trade-off in IT cell response properties. Inferotemporal cortex cells with greater position invariance respond less selectively to natural objects. invariance-tolerance, selectivity-sparseness. data (Zoccolan etal 2007) model (Grossberg, Markowitz, Cao 2011). position tolerance (PT, degrees) vs sparseness (S)
  • image p275fig06.23 Data from (Akrami etal 2009) and our simulation of it. See the text for details.
    || IT responses to image morphs. data vs model
  • image p284fig07.02 Psychophysical data (top row) and simulation (bottom row) of how persistence decreases with flash illuminance and duration.
    || Persistence data and simulations. (Francis, Grossberg, Mingolla 1994 Vision Research, 34, 1089-1104). Persistence decreases with flash illuminance and duration (Bowen, Pola, Matin 1974; Breitmeyer 1984; Coltheart 1980). Higher luminance or longer duration habituates the gated dipole ON channel more. Causes larger and faster rebound in the OFF channel to shut persisting ON activity off.
  • image p285fig07.03 Persistence decreases with flash illuminance and duration due to the way in which habituative transmitters regulate the strength of the rebound in response to offset of a stimulating input, and how this rebound inhibits previously activated bipole cells.
    || Persistence data and simulations (Francis, Grossberg, Mingolla 1994 Vision Research, 34, 1089-1104). Persistence decreases with flash illuminance and duration. Horizontal input excites a horizontal bipole cell, which supports persistence. Offset of the horizontal input causes a rebound of activity in the vertical pathway, which inhibits the horizontal bipole cell, thereby terminating persistence.
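  • Python sketch: a toy habituative gated dipole (rate constants are assumptions). Both opponent channels receive tonic arousal; the ON channel also receives the phasic input. Each channel's transmitter gate habituates as z' = eps*(1 - z) - mu*s*z, so at input offset the less-depleted OFF gate transiently wins: the antagonistic rebound that terminates persistence.
    dt, eps, mu, arousal = 0.01, 0.05, 2.0, 0.2
    z_on = z_off = 1.0
    for step in range(3000):
        t = step * dt
        J = 1.0 if t < 15.0 else 0.0                     # phasic ON input, offset at t = 15
        s_on, s_off = arousal + J, arousal
        z_on += dt * (eps * (1.0 - z_on) - mu * s_on * z_on)
        z_off += dt * (eps * (1.0 - z_off) - mu * s_off * z_off)
        on_out = max(s_on * z_on - s_off * z_off, 0.0)   # opponent (rectified) outputs
        off_out = max(s_off * z_off - s_on * z_on, 0.0)
        if step % 500 == 0:
            print(f"t={t:5.1f}  ON={on_out:.3f}  OFF={off_out:.3f}")
    # After t = 15 the OFF channel rebounds, then decays as both gates re-equilibrate.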
  • image p286fig07.04 Illusory contours persist longer than real contours because real contours have more inducers whose rebound at contour offset can cause faster boundary reset. Illusory contours also take longer to form than real contours, which explains the increasing portion of the curve.
    || Persistence data and simulations (Meyer, Ming 1988; Reynolds 1981). Increasing portion of curve is due to formation time of the illusory contour. Longer persistence is due to fewer bottom-up inducers of an illusory contour that has the same length as a real contour: only illuminance-derived edges generate reset signals. When bottom-up inducers are inhibited by OFF cell rebounds, their offset gradually propagates to the center of the illusory contour.
  • image p286fig07.05 This figure shows the propagation through time of illusory contour offset from the rebounded cells that got direct inputs to the center of the contour.
    || Persistence data and simulations. Illusory contours persist longer than real contours (Meyer, Ming 1988; Reynolds 1981). When bottom-up inducers are inhibited by OFF cell rebounds, their offset gradually propagates to the center of the illusory contour.
  • image p287fig07.06 The relative durations of persistence that occur due to an adaptation stimulus of the same or orthogonal orientation follow from the properties of the habituative gated dipoles that are embedded in the boundary completion system.
    || Persistence data and simulations. Change in persistence depends on whether adaptation stimulus has same or orthogonal orientation as test grating (Meyer, Lawson, Cohen 1975). If adaptation stimulus and test stimulus have the same orientation, they cause cumulative habituation, which causes a stronger reset signal, hence less persistence. When they are orthogonal, the competition on the ON channel is less, hence more persistence.
  • image p287fig07.07 Persistence increases with distance between a target and a masking stimulus due to weakening of the spatial competition in the first competitive stage of hypercomplex cells.
    || Persistence data and simulations. Persistence increases with distance between a target and a masking stimulus (Farrell, Pavel, Sperling 1990). There is less spatial competition from the masker to the target when they are more distant, hence the target is more persistent.
  • image p297fig08.08 When a second flash is more intense than the first flash, then apparent motion may occur from the second to the first flash.
    || Delta motion: motion from the second to the first flash. Data: (Kolers 1972; Korte 1915). Simulation: (Grossberg, Rudd 1992). This occurs when the luminance or contrast of the second flash is large compared to that of the first flash. Sustained and transient cells obey shunting dynamics whose averaging rates speed up with output intensity. The first flash to wane is the one that will be the source of the G-wave.
  • image p297fig08.09 Simulation of motion in opposite directions that is perceived when two later flashes occur on either side of the first flash.
    || Split motion. Data: (H.R. Silva 1926), Simulation: (Grossberg, Rudd 1992)
  • image p298fig08.10 Simulation of the motion speed-up that is perceived when flash duration decreases.
    || "The less you see it, the faster it moves". Data: (Giaschi, Anstis 1989), Simulation: (Grossberg, Rudd 1992). ISI = 0, flash duration decreases; SOA = constant, flash duration decreases
  • image p304fig08.22 Data (top image) and simulation (bottom image) of Korte's laws.
  • image p311fig08.30 The data of (Castet etal 1993) in the left image was simulated in the right image by the 3D FORMOTION model that I developed with my PhD student Jonathan Chey. These data provide insight into how feature tracking signals propagate from the ends of a line to its interior, where they capture consistent motion directional signals and inhibit inconsistent ones.
    || Solving the aperture problem. A key design problem: How do amplified feature tracking signals propagate within depth to select the correct motion directions at ambiguous positions? This propagation from feature tracking signals to the line interior determines perceived speed in Castet etal data, which is why speed depends on line tilt and length. Data: (Castet etal 1993), Simulation: (Chey etal 1997)
  • image p319fig08.38 The neurophysiological data from MT (left image) confirms the prediction embodied in the simulation of MT (right image) concerning the fact that it takes a long time for MT to compute an object's real direction of motion.
  • image p333fig08.58 Neurophysiological data (left image) and simulation (right image) of LIP data during correct trials on the RT task. See the text for details.
    || LIP responses during RT task correct trials (Roitman, Shadlen 2002). More coherence in favored direction causes faster cell activation. More coherence in opposite direction causes faster cell inhibition. Coherence stops playing a role in the final stages of LIP firing.
  • image p334fig08.59 Neurophysiological data (left column) and simulations (right column) of LIP responses for the FD task during both [correct, error] trials. See the text for details.
    || LIP responses for the FD task during both [correct, error] trials (Shadlen, Newsome 2001). LIP encodes the perceptual decision regardless of the true direction of the dots. Predictiveness of LIP responses on error trials decreases with increasing coherence.
  • image p334fig08.60 Behavioral data (left image) and simulation (right image) about accuracy in both the RT and FD tasks. See text for details
    || Behavioral data: % correct vs % coherence (Mazurek etal 2003; Roitman, Shadlen 2002). More coherence in the motion causes more accurate decisions. RT task accuracy at weaker coherence levels is slightly better than FD task accuracy.
  • image p335fig08.61 Behavioral data (left image) and simulation (right image) about speed in correct and error trials of the RT task. See text for details.
    || Behavioral data: speed, correct and error trials (RT task) (Roitman, Shadlen 2002). More coherence in the motion causes faster reaction time.
  • image p335fig08.62 More remarkable simulation fits (right column) to LIP neurophysiology data (left column) about where and when to move the eyes.
    || LIP encodes not only where, but also when, to move the eyes. ... No Bayes (Roitman, Shadlen 2002). Firing rate (sp/s) vs time (ms). Slope of firing rate (sp/s^2) vs % correct.
  • image p342fig09.11 Psychophysical data (left panel) and computer simulation (right column) of the importance of efference copy in real movements. See the text for details.
    || Heading: move to wall and fixate stationary object (adapted from Warren, Hannon 1990). Inaccurate for simulated eye rotation, accurate for real eye rotation, need confirmation by efference copy!
  • image p343fig09.13 When one scans the three different types of pears in the left image, as illustrated by the jagged blue curve with red movement end positions, and transforms the resulting retinal images via the cortical magnification factor, or log polar mapping, the result is the series of images in the right column. How do our brains figure out from such confusing data which views belong to which pear?
    || View-invariant object learning and recognition. Three pears: Anjou, Bartlett, Comice. Which is the Bartlett pear? During unsupervised scanning and learning about the world, no one tells the brain what views belong to which objects while it learns view-invariant object categories. Cortical magnification in V1.
  • image p349fig09.20 Using virtual reality displays (left image), (Fajen, Warren 2003) collected data (right two images) about how observers avoid obstacles (open circular disks) as a function of their distance and angular position as they navigate towards a fixed goal (x). These data illustrate how goals act as attractors while obstacles act as repellers.
    || Steering from optic flow (Fajen, Warren 2003). goals are attractors, obstacles are repellers. Damped spring model explains human steering data.
  • image p352fig09.26 The final stage of the model computes a beautiful expansion optic flow that permits an easy estimate of the heading direction, with an accuracy that matches that of human navigators.
    || The model generates accurate heading (Warren, Hannon 1990; Royden, Crowell, Banks 1994). Maximally active MSTd cell = heading estimate. Accuracy matches human data. Random dots [mean +-1.5°, worst +-3.8°], Random dots with rotation [accurate with rotations <1°/s, rotation increases, error decreases], OpenGL & Yosemite benchmark +-1.5°, Driving video +-3°.
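  • Python sketch: the read-out "maximally active MSTd cell = heading estimate" can be caricatured as template matching over expansion flows (a hedged stand-in for the model's learned log polar stages; the flow field, noise level, and candidate grid are assumptions).
    import numpy as np

    rng = np.random.default_rng(0)
    pts = rng.uniform(-1.0, 1.0, size=(300, 2))       # image points
    true_heading = np.array([0.3, -0.1])              # focus of expansion
    flow = pts - true_heading                         # expansion flow
    flow += 0.05 * rng.standard_normal(flow.shape)    # measurement noise

    candidates = [np.array([hx, hy])
                  for hx in np.linspace(-0.5, 0.5, 21)
                  for hy in np.linspace(-0.5, 0.5, 21)]

    def template_response(h):
        t = pts - h                                   # expansion template centered at h
        num = (t * flow).sum(axis=1)                  # per-point direction match
        den = np.linalg.norm(t, axis=1) * np.linalg.norm(flow, axis=1) + 1e-9
        return (num / den).sum()                      # summed cosine similarity

    best = max(candidates, key=template_response)     # most active "MSTd cell"
    print("estimated heading:", np.round(best, 2))    # close to [0.3, -0.1]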
  • image p356fig10.03 Laminar computing achieves at least three basic properties of visual processing that have analogs in all biologically intelligent behaviors. These properties may be found in all cortical circuits in specialized form.
    || What does Laminar Computing achieve? 1. Self-stabilizing development and learning; 2. Seamless fusion of a) pre-attentive automatic bottom-up processing, b) attentive task-selective top-down processing; 3. Analog coherence: Solution of Binding Problem for perceptual grouping without loss of analog sensitivity. Even the earliest visual cortical stages carry out active adaptive information processing: [learn, group, attention]ing
  • image p357fig10.04 Laminar Computing achieves its properties by computing in a new way that synthesizes the best properties of feedforward and feedback interactions, analog and digital computations, and preattentive and attentive learning. The property of analog coherence enables coherent groupings and decisions to form without losing sensitivity to the amount of evidence that supports them.
    || Laminar Computing: a new way to compute. 1. Feedforward and feedback: a) Fast feedforward processing when data are unambiguous (eg Thorpe etal), b) slower feedback chooses among ambiguous alternatives [self-normalizing property, real-time probability theory], c) A self-organizing system that trades certainty against speed: Goes beyond Bayesian models! 2. Analog and Digital: Analog Coherence combines the stability of digital with the sensitivity of analog. 3. Preattentive and Attentive Learning: Reconciles the differences of (eg) Helmholtz and Kanizsa, "A preattentive grouping is its own attentional prime" ...
  • image p360fig10.09 Perceptual grouping is carried out in layer 2/3 by long-range horizontal excitatory recurrent connections, supplemented by short-range disynaptic inhibitory connections that together realize the bipole grouping properties that are diagrammed in Figure 10.10.
    || Grouping starts in layer 2/3. LGN-> 6-> 4-> 2/3: 1. Long-range horizontal excitation links collinear, coaxial receptive fields (Gilbert, Wiesel 1989; Bosking etal 1997; Schmidt etal 1997) 2. Short-range disynaptic inhibition of target pyramidal via a pool of interneurons (Hirsch, Gilbert 1991) 3. Unambiguous groupings can form and generate feedforward outputs quickly (Thorpe etal 1996).
  • image p361fig10.10 Bipole grouping is achieved by long-range horizontal recurrent connections that also give rise to short-range inhibitory interneurons which inhibit nearby bipole cells as well as each other.
    || Bipole property controls perceptual grouping. Collinear input on both sides. Excitatory inputs summate. Inhibitory inputs normalize, Shunting inhibition! Two-against-one. Cell is excited.
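  • Python sketch: the "two-against-one" bipole property can be written as a single rectified rule (the threshold and the halving of flank input by the normalizing inhibitory interneurons are assumptions): one flank alone cannot fire the cell, but two flanks, or bottom-up input plus one flank, can.
    def bipole(bottom_up: float, left: float, right: float) -> float:
        inhibition = 0.5 * (left + right)      # shunting normalization by interneurons
        net = bottom_up + left + right - inhibition
        return max(net - 0.75, 0.0)            # assumed firing threshold

    print(bipole(0, 1, 0))   # 0.0  one flank only: no outward completion
    print(bipole(0, 1, 1))   # 0.25 both flanks: inward (illusory) completion
    print(bipole(1, 1, 0))   # 0.75 bottom-up plus one flank: fires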
  • image p367fig10.15 Data (left column) and simulation (right column) of how attention prevents a masking stimulus from inhibiting the response to the on-center of the cell from which the recording was made.
    || Attention protects target from masking stimulus (Reynolds etal 1999; Grossberg, Raizada 2000).
  • image p367fig10.16 Neurophysiological data (left image) and simulation (right image) of how a low-contrast target can be facilitated if it is surrounded by a pair of collinear flankers, and suppressed by them if it has high contrast.
    || Flankers can enhance or suppress targets (Polat etal 1998; Grossberg, Raizada 2000). target alone, target + flankers, flankers alone.
  • image p368fig10.17 Neurophysiological data (left image) and simulation (right image) showing that attention has a greater effect on low contrast than high contrast targets.
    || Attention has greater effect on low contrast targets (DeWeerd etal 1999; Raizada, Grossberg 2001). Threshold increase (deg) vs Grating contrast (%), [no, with] attention
  • image p368fig10.18 Neurophysiological data (left image) and simulation (right image) of relative on-cell activities when the input to that cell may also be surrounded by iso-orientation or perpendicular textures.
    || Texture reduces response to a bar: iso-orientation suppression (Knierim, van Essen 1992), perpendicular suppression (Raizada, Grossberg 2001)
  • image p369fig10.19 Data from (Watanabe etal 2001) showing perceptual learning of the coherent motion direction, despite the lack of extra-foveal attention and awareness of the moving stimuli.
    || Unconscious perceptual learning of motion direction, % correct for two tests, compared to chance level results.
  • image p393fig11.31 (Todd, Akerstrom 1987) created a series of 2D images from discrete black patches on a white disk and showed how the perceived depth varies with the factors summarized in the figure. The LIGHTSHAFT model quantitatively simulated their data.
    || Factors determining depth-from-texture percept. Perceived depth varies with texture element width, but only when elements are elongated and sufficiently aligned with one another to form long-range groupings. Data of (Todd, Akerstrom 1987) simulated by the LIGHTSHAFT model of (Grossberg, Kuhlmann 2007). [HP, LP, CCE, CCS, RO]
  • image p399fig11.39 Simulation of the eye rivalry data of (Lee, Blake 1999).
    || [Binocular, [left, right] eye] activity
  • image p402fig11.43 A pair of disparate images of a scene from the University of Tsukuba. Multiview image database.
    || input [left, right]
  • image p407fig12.03 Neurophysiological data showing how motor cortical cells code different vectors that are sensitive to both the direction of the commanded movement and its length.
    || (a) Single primary motor cortex neuron, onset of movement -> on..., radial architecture... (b) Motor cortex neuronal population, radial architecture...
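  • Python sketch: the population code in panel (b) is conventionally read out as a population vector: each cell votes with its preferred direction, weighted by its firing rate (cosine tuning and the cell count are assumptions consistent with the figure, not the recorded data).
    import numpy as np

    rng = np.random.default_rng(1)
    prefs = rng.uniform(0.0, 2.0 * np.pi, 100)        # preferred directions
    movement_dir, amplitude = np.deg2rad(40.0), 2.0   # commanded direction and length

    rates = amplitude * np.maximum(np.cos(prefs - movement_dir), 0.0)  # rectified cosine tuning
    pop = np.array([(rates * np.cos(prefs)).sum(),
                    (rates * np.sin(prefs)).sum()]) / len(prefs)

    print("decoded direction (deg): %.1f" % np.degrees(np.arctan2(pop[1], pop[0])))
    print("vector length (grows with amplitude): %.3f" % np.hypot(pop[0], pop[1]))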
  • image p409fig12.04 (top half) Neurophysiological data of vector cell responses in motor cortex. (bottom half) VITE model simulations of a simple movement in which the model ...
  • image p414fig12.11 Neurophysiological data from cortical areas 4 and 5 (every other column) and simulations thereof (other columns) during a reach.
    || activation vs time. (a) area 4 phasic RT (IFV) (b) area 4 tonic (OPV) (c) area 4 phasic-tonic (OFPV) (d) area 4 phasic MT (DVV) (e) area 5 phasic (DV) (f) area 5 tonic (PPV)
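  • Python sketch: the cell types in this figure map onto the core VITE variables: a target position command TPC, a present position vector PPV (area 5 tonic), a difference vector DV = TPC - PPV (area 5 phasic), and an outflow velocity GO*DV (area 4 phasic MT, DVV). A toy discretization (step size and GO ramp are assumptions) shows the bell-shaped speed profile emerging from GO-gated integration.
    import numpy as np

    dt = 0.01
    TPC = np.array([10.0, 5.0])                # target position command
    PPV = np.array([0.0, 0.0])                 # present position vector
    for step in range(600):
        t = step * dt
        GO = 2.0 * t                           # slowly growing volitional GO signal
        DV = TPC - PPV                         # difference vector
        PPV = PPV + dt * GO * DV               # present position integrates gated DV
        if step % 100 == 0:
            print(f"t={t:.1f}  speed={np.linalg.norm(GO * DV):7.3f}  PPV={np.round(PPV, 2)}")
    # Speed rises then falls (a bell-shaped profile) as DV shrinks while GO grows.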
  • image p424fig12.21 One of the many types of data about pitch processing that are simulated by the SPINET model. See the text for details.
    || Pitch shifts with component shifts (Patterson, Wightman 1976; Schouten 1962). Pitch vs lowest harmonic number.
  • image p425fig12.23 ARTSTREAM simulations of the auditory continuity illusion and other streaming properties. (left column, top row) When two tones are separated by silence (Input), a percept of silence also separates them in a spectral-pitch resonance. (left column, bottom row) When two tones are separated by broadband noise, the percept of the tone continues through the noise in one stream (stream 1) while the remainder of the noise occurs in a different stream (stream 2). (right column) Some of the other streaming properties that have been simulated by the ARTSTREAM model.
    || Auditory continuity does not occur without noise. Auditory continuity in noise. Other simulated streaming data.
  • image p428fig12.25 (left architecture) Auditory-articulatory feedback loop whereby babbled sounds activate learning in an imitative map that is later used to learn to reproduce the sounds of other speakers. An articulatory-to-auditory expectation renders learning possible by making the auditory and motor data dimensionally consistent, as in the motor theory of speech. (right architecture) Parallel streams in the ARTSPEECH model for learning speaker-independent speech and language meaning, including a mechanism for speaker normalization (right cortical stream) and for learning speaker-dependent vocalic qualities (left cortical stream).
    || left: Speaker-dependent vocalic qualities; right: Speaker-independent speech and language meaning
  • image p432fig12.28 (left image) The SpaN model simulates how spatial representations of numerical quantities are generated in the parietal cortex. (right image) Behavior numerosity data and SpaN model simulations of it.
    || (Left) preprocessor-> spatial number map-> Comparison wave. (Right) data axis: number of lever presses; model axis: node position in the spatial number axis
  • image p437fig12.32 Data from a free recall experiment illustrate the bowed serial position curve.
    || Serial position function for free recall Data: (Murdock 1962 JEP 64, 482-488). % correct vs position of word on a 40-word list. Primacy gradient can be a mixture of STM and LTM read-out.
  • image p437fig12.33 Item and Order working memory models explain free recall data, as well as many other psychological and neurobiological data, by simulating how temporal series of events are stored as evolving spatial patterns of activity at content-addressable item categories. The categories with the largest activities are rehearsed first, and self-inhibit their activity as they do so in order to prevent them from being rehearsed perseveratively. The laws whereby items are stored in working memory obey basic design principles that ensure that list categories, or chunks, of sequences of stored items can be stably remembered.
    || Working memory models: item and order, or competitive queuing (Grossberg 1978; Houghton 1990; Page, Norris 1998). Event sequence in time stored as an evolving spatial pattern of activity. Primacy gradient of working memory activation stores correct temporal order at content-addressable cells. The maximally activated cell population is performed next when a rehearsal wave is turned on. Output signal from the chosen cell population inhibits its own activity to prevent perseveration: inhibition of return. Iterate until the entire sequence is performed. (See the sketch below.)
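  • Python sketch: the storage-and-rehearsal cycle in the item above can be stated in a few lines (the gradient values are assumptions for illustration): store a primacy gradient over content-addressable item nodes, then repeatedly select the maximal activity, perform it, and self-inhibit it.
    activities = {"A": 0.9, "B": 0.7, "C": 0.5, "D": 0.3}   # primacy gradient

    recalled = []
    while any(v > 0.0 for v in activities.values()):
        item = max(activities, key=activities.get)   # rehearsal wave: choose max
        recalled.append(item)
        activities[item] = 0.0                       # self-inhibition: no perseveration

    print(recalled)   # ['A', 'B', 'C', 'D'] - the stored temporal order is reproduced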
  • image p443fig12.41 Neurophysiological data from the Averbeck etal sequential copying experiments show the predicted primacy gradient in working memory and the self-inhibition of activity as an item is stored. When only the last item remains stored, it has the highest activity because it has been freed from inhibition by earlier items.
    || Neurophysiology of sequential copying
  • image p444fig12.42 The LIST PARSE laminar cortical model of working memory and list chunking that I published with Lance Pearson in 2008 simulated the Averbeck etal data in Figure 12.41, as in the left column of the figure. It also simulated cognitive data about working memory storage by human subjects. See the text for details.
    || LIST PARSE: Laminar cortical model of working memory and list chunking (Grossberg, Pearson 2008). Simulates data about: [immediate, delayed, continuous] distractor free recall; immediate serial recall; and variable-speed sequential performance of motor acts. [velocity, acceleration] vs time (ms) from recall cue.
  • image p446fig12.44 (left column, top row) LIST PARSE can model linguistic data from human subjects. In this figure, model parameters are fixed to enable a close fit to data about error-type distributions in immediate free recall experiments, notably transposition errors. (right column, top row) Simulation and data showing bowing of the serial position curve, including an extended primacy gradient. (left column, bottom row) The simulation curve overlays data about list length effects, notably the increasing recall difficulty of longer lists during immediate serial recall (ISR). (right column, bottom row) Simulation (bottom image) and data (top image) of the limited temporal extent for recall.
    || Graph convention: Data - dashed lines; Simulations - solid lines.
    (1. TL) Error-type distributions in immediate serial recall (Henson etal 1996). % occurrence vs serial position. Six-letter visual ISR. Order errors: transpositions of neighboring items are the most common. Model explanation: noisy activation levels change relative order in the primacy gradient; similar activation levels of neighboring items are most susceptible to noise. Model parameters were fitted on these data.
    (2. TR) Bowing of the serial position curve (Cowan etal 1999). % correct vs serial position. Auditory ISR with various list lengths (graphs shifted rightward): for [span, sub-span] lists - extended primacy, with one (or two) item recency; auditory presentation - enhanced performance for last items. LIST PARSE: end effects - first and last items have half as many neighbors; echoic memory - last presented item retained in a separate store.
    (3. BL) List length effects, circles (Crannell, Parrish 1968), squares (Baddeley, Hitch 1975), solid line - simulation. % list correct vs list length. Variable list length ISR: longer lists are more difficult to recall. LIST PARSE: more items - closer activation levels and lower absolute activity levels with enough inputs; noise is more likely to produce order errors, and activity levels are more likely to drop below threshold.
    (4. BR) Limited temporal extent for recall (Murdock 1961). % recalled vs retention interval (s). ISR task with distractor-filled retention intervals (to prevent rehearsal): increasing retention interval decreases the probability of recalling the list correctly; load dependence - longer lists more affected by delays; performance plateau - subjects reach an apparent asymptote. LIST PARSE: increasing convergence of activities with time; loss of order information.
  • image p447fig12.45 (left column) LIST PARSE simulations of the proportion of order errors as a function of serial position for 6 item lists with (a) an extended pause of 7 time units between the third and fourth items, and (b) pauses of 5 time units (solid curve) and 10 time units (dashed curve) between all items. (right column) Simulations (solid curves) and data (dashed curves) illustrating close model fits in various immediate free recall tasks.
    || (Left) Temporal grouping and presentation variability. Temporal grouping: inserting an extended pause leads to inter-group bowing; significantly different times of integration and activity levels across the pause, fewer interchanges. (Right) Immediate free recall, and [delayed, continuous] distractor free recall. Overt rehearsal IFR task with super-span (ie 20 item) lists: extended recency - even more extended with shorter ISIs; increased probability of recall with diminished time from last rehearsal; early items in list rehearsed most. LIST PARSE (unique) for long lists: incoming items form a recency gradient; rehearsal (re-presentation) based upon level of activity.
  • image p452fig12.48 (left column) In experiments of (Repp etal 1978), the silence duration between the words GRAY and SHIP was varied, as was the duration of the fricative noise in S, with surprising results. (right column) The red arrow directs our attention to surprising perceptual changes as silence and noise durations increase. See the text for details.
    || Perceptual integration of acoustic cues, data (Repp etal 1978). GRAY-> silence duration-> SHIP (noise duration from start of word). Noise duration vs silence duration: GRAY SHIP <-> [GREAT SHIP <-> GRAY CHIP] <-> GREAT CHIP.
  • image p453fig12.49 The ARTWORD model that I published in 2000 with my PhD student Christopher Myers simulates data such as the (Repp etal 1978) data in Figure 12.48. See the text for details.
    || ARTWORD model (Grossberg, Myers 2000). Input phonetic features-> Phonemic item working memory-> Masking Field unitized lists-> Automatic gain control-> Phonemic item working memory. [habituative gate, adaptive filter]s.
  • image p465fig12.63 Neurophysiological data (left image) and lisTELOS simulation (right figure) showing how microstimulation biases saccadic performance order but not the positions to which the saccades will be directed. See the text for details.
    || Saccade trajectories converge to a single location in space. Microstimulation biased selection so saccade trajectories converged toward a single location in space. [Data, model] contra <-> Ipsi (msec)
  • image p468fig12.65 Linguistic properties of the PHONET model and some of the data that it simulates. The upper left image summarizes the asymmetric transient-to-sustained gain control that helps to create invariant intraword ratios during variable-rate speech. The lower left image summarizes the rate-dependent gain control of the ARTPHONE model that creates rate-invariant working memory representations in response to sequences of variable-rate speech. The right image summarizes the kind of paradoxical VC-CV category boundary data of (Repp 1980) that ARTPHONE simulates. See the text for details.
    || (left upper) [transient, sustained] [working memory, filter, category]. (left lower) phone inputs-> [input rate estimate, features], features <- habituative transmitter gates -> categories-> rate invariant phonetic output, input rate estimate-> gain control-> [features, categories] rate-dependent integration of categories and features. (right) % 2-stop vs VC-CV silent interval (msec): [ib-ga, ib-ba, iga, iba].
  • image p469fig12.66 (left column) A schematic of how preserving relative duration, as in the first and third images, of consonant and vowel pairs can preserve a percept, in this case of /ba/, but not doing so, as in the first and second images, can cause a change in percept, as from /ba/ to /wa/, as in the data of (Miller, Liberman 1979) that PHONET simulates. (right column) Changing frequency extent can also cause a /ba/ - /wa/ transition, as shown in data of (Schwab, Sawusch, Nusbaum 1981) that PHONET also simulates.
    || (left image) Maintaining relative duration as speech speeds up preserves percept (Miller, Liberman 1979). frequency vs time- [/ba/, /wa/, /ba/] (right image) Changing frequency extent causes /ba/-/wa/ transition (Schwab, Sawusch, Nusbaum 1981). frequency vs time- [/ba/, /wa/] Dt extent.
  • image p473fig12.69 Error rate and mean reaction time (RT) data from the lexical decision experiments of (Schvaneveldt, McDonald 1981). ART Matching Rule properties explain these data in (Grossberg, Stone 1986).
    || (left) Error rate vs type of prime [R, N, U], [non,] word. (right) Mean RT (msec) vs type of prime [R, N, U], [non,] word.
  • image p474fig12.70 The kind of model macrocircuit that was used in (Grossberg, Stone 1986) to explain lexical decision task data.
    || inputs-> A1 <-> A2 iconic sensory features <-> A3 item and order in sensory STM <-> A4 list parsing in STM (masking field) <-> A5 semantic network (self-feedback). [A4, A5] <-> V* visual object recognition system. M1-> [outputs, A1]. M1 <-> M2 iconic motor features <-> M3 item and order in motor STM. A2-> M2. A3-> M3.
  • image p476fig12.71 Word frequency data of (Underwood, Freund 1970) that were explained in (Grossberg, Stone 1986).
    || percent errors vs frequency of old words [L-H to H-H, L-L to H-L].
  • image p484fig13.04 The top-down feedback from the orbitofrontal cortex closes a feedback loop that supports a cognitive-emotional resonance. If this resonance can be sustained long enough, it enables us to have feelings at the same time that we experience the categories that caused them.
    || Cognitive-Emotional resonance. Basis of "core consciousness" and "the feeling of what happens". (Damasio 1999) derives heuristic version of CogEM model from his clinical data. Drive-> amygdala-> prefrontal cortex-> sensory cortex, resonance around the latter 3. How is this resonance maintained long enough to become conscious?
  • image p485fig13.06 (left column) An inverted-U occurs in conditioned reinforcer strength as a function of the ISI between the CS and the US. Why is learning attenuated at 0 ISI? (right column) Some classical conditioning data that illustrate the inverted-U in conditioning as a function of the ISI.
    || InterStimulus Interval (ISI) effect. Data from (Smith etal 1969; Schneiderman, Gormezano 1964).
  • image p490fig13.15 The CogEM circuit is an ancient design that is found even in mollusks like Aplysia. See the text for details.
    || Aplysia (Buonomano, Baxter, Byrne, Neural Networks 1990; Grossberg, Behavioral and Brain Sciences 1983). Facilitator neuron ~ drive representation.
  • image p504fig13.31 Behavioral contrast can occur during reinforcement learning due to decreases in either positive or negative reinforcers. See Figure 13.32 for illustrative operant conditioning data.
    || Behavioral contrast: rebounds! Shock level vs trials. 1. A sudden decrease in frequency or amount of food can act as a negative reinforcer: Frustration. 2. A sudden decrease in frequency or amount of shock can act as a positive reinforcer: Relief.
  • image p523fig14.03 (a) The MOTIVATOR neural model generalizes CogEM by also including the basal ganglia. It can hereby explain and simulate complementary functions of the amygdala and basal ganglia (SNc) during conditioning and learned performance. The basal ganglia generate Now Print signals in response to unexpected rewards. These signals modulate learning of new associations in many brain regions. The amygdala supports motivated attention to trigger actions that are expected to occur in response to conditioned or unconditioned stimuli. Object Categories represent visual or gustatory inputs in anterior inferotemporal (ITA) and rhinal (RHIN) cortices, respectively. Value Categories represent the value of anticipated outcomes on the basis of hunger and satiety inputs, in amygdala (AMYG) and lateral hypothalamus (LH). Object-Value Categories resolve the value of competing perceptual stimuli in medial (MORB) and lateral (ORB) orbitofrontal cortex. The Reward Expectation Filter detects the omission or delivery of rewards using a circuit that spans ventral striatum (VS), ventral pallidum (VP), striosomal delay (SD) cells in the ventral striatum, the pedunculopontine nucleus (PPTN) and midbrain dopaminergic neurons of the substantia nigra pars compacta/ventral tegmental area (SNc/VTA). The circuit that processes CS-related visual information (ITA, AMYG, ORB) operates in parallel with a circuit that processes US-related visual and gustatory information (RHIN, AMYG, MORB). (b) Reciprocal adaptive connections between hypothalamus and amygdala enable amygdala cells to become learned value categories. The bottom region represents hypothalamic cells, which receive converging taste and metabolite inputs whereby they become taste-drive cells. Bottom-up signals from activity patterns across these cells activate competing value category, or US Value Representations, in the amygdala. A winning value category learns to respond selectively to specific combinations of taste-drive activity patterns and sends adaptive top-down priming signals back to the taste-drive cells that activated it. CS-activated conditioned reinforcer signals are also associatively linked to value categories. Adaptive connections end in (approximately) hemidiscs. See the text for details.
    ||
  • image p533fig14.09 Search data and ARTSCENE Search simulations of them in each pair of images from (A) to (F). See the text for details.
    || 6*[data vs simulation], [Response time (ms) versus epoch].
  • image p542fig15.04 Conditioning data from (Smith 1968; Millenson etal 1977). The former shows the kind of Weber Law and inverted U that were simulated in Figure 15.3. The latter shows that, if there are two ISIs during an experiment, then the animals learn to adaptively time their responses with two properly scaled Weber laws.
    || (left) One ISI (Smith 1968) [mean membrane extension (mm) versus time after CS onset (msec)]. (right) Two ISIs (Millenson etal 1977) [200, 100] msec CS test trials, [mean momentary CS amplitude (mm) vs time after CS onset (msec)]. (bottom) Conditioned eye blinks, made with nictitating membrane and/or eyelid, are adaptively timed: peak closure occurs at expected time(s) of arrival of the US following the CS and obeys a Weber Law.
  • image p543fig15.05 Simulation of conditioning with two ISIs that generate their own Weber Laws, as in the data shown in Figure 15.4.
    || Learning with two ISIs: simulation: R = sum[all: f(xi)*yi*zi] vs msec. Each peak obeys Weber Law! strong evidence for spectral learning.
  • image p556fig15.24 (a) Data showing normally timed responding (solid curve) and short latency responses after lesioning cerebellar cortex (dashed curve). (b) computer simulation of short latency response after ablation of model cerebellar cortex.
    ||
  • image p559fig15.28 Neurophysiological data (left column) and model simulations (right column) of SNc responses. See the text for details.
    || membrane potential vs time
  • image p573fig16.01 The experimental chamber (A) and neurophysiological recordings from a rat hippocampus (B) that led to the discovery of place cells. See the text for details.
    ||
  • image p574fig16.02 Neurophysiological recordings of 18 different place cell receptive fields. See the text for details.
    ||
  • image p575fig16.03 As a rat navigates in its experimental chamber (black curves), neurophysiological recordings disclose the firing patterns (in red) of (a) a hippocampal place cell and (b) an entorhinal grid cell.
    ||
  • image p582fig16.08 Some experimental evidence for stripe-like cell receptive fields has been reported. The band cells posited by Neil Burgess also exhibit the one-dimensional firing symmetry of stripe cells, but are modeled by oscillatory interference. See the text for details.
    || Evidence for stripe-like cells. Entorhinal cortex data (Sargolini, Fyhn, Hafting, McNaughton, Witter, Moser, Moser 2006; Krupic, Burgess, O'Keefe 2012).
  • image p589fig16.19 Neurophysiological data showing the smaller dorsal grid cell scales and the larger ventral grid cell scales.
    || Spatial scale of grid cells increase along the MEC dorsoventral axis (Hafting etal 2005; Sargolini etal 2006; Brun etal 2008). [dorsal (left), ventral (right)] cart [rate map, autocorrelogram]. How does the spatial scale increase along the MEC dorsoventral axis?
  • image p593fig16.26 Data (left column) and simulations (right column) of the gradient of increasing grid cell spacing along the dorsoventral axis of MEC.
    || Gradient of grid spacing along dorsoventral axis of MEC (Brun etal 2008). data-[Distance (m?), Median grid spacing (m?)] simulations-[Grid spacing (cm), Grid spacing (cm)] vs response rate.
  • image p594fig16.27 Data (left column) and simulations (right column) of the gradient of increasing grid cell field width along the dorsoventral axis of MEC.
    || Gradient of field width along dorsoventral axis of MEC (Brun etal 2008). data-[Distance (m?), Width autocorr peak (m?)] simulations-[Grid field width (cm), Width autocorr peak (cm)] vs response rate.
  • image p595fig16.28 Data (left column) and simulations (right column) about peak and mean grid cell response rates along the dorsoventral axis of MEC.
    || Peak and mean rates at different locations along DV axis of MEC (Brun etal 2008). Peak rate (Hz) vs [data- DV quarter, simulations- Response rate].
  • image p596fig16.29 Data (top row) and simulations (bottom row) showing decreasing frequency of subthreshold membrane potential oscillations along the DV axis of MEC.
    || Subthreshold membrane potential oscillations at different locations along DV axis of MEC (Giocomo etal 2020; Yoshida etal 2011). Data [oscillations (Hz) vs distance from dorsal surface (mm) @[-50, -45] mV, Frequency (Hz) vs [-58, -54, -50] mV]. Simulations MPO frequency (Hz) vs [response, habituation] rate.
  • image p596fig16.30 Data (top row) and simulations (bottom row) of spatial phases of learned grid and place cells.
    || Spatial phases of learned grid and place cells (Hafting etal 2005). Data: Cross-correlogram of rate maps of two grid cells; Distribution of phase difference: distance from origin to nearest peak in cross-correlogram. Simulations: Grid cell histogram of spatial correlation coefficients; Place cell histogram of spatial correlation coefficients.
  • image p597fig16.31 Data (a) and simulations (b-d) about multimodal place cell receptive fields in large spaces. The simulations are the result of learned place fields.
    || Multimodal place cell firing in large spaces (Fenton etal 2008; Henriksen etal 2010; Park etal 2011). Number of cells (%) vs Number of place fields. [2, 3] place fields, 100*100 cm space.
  • image p597fig16.32 Data (top row) and simulations (bottom row) about grid cell development in juvenile rats. Grid score increases (a-b and d), whereas grid spacing remains fairly flat (c and e).
    || Model fits data about grid cell development (Wills etal 2010; Langston etal 2010). Data: [Gridness, grid score, inter-field distance (cm)]. Simulations: [Gridness score, Grid spacing (cm)] vs trial.
  • image p598fig16.33 Data (top row) and simulations (bottom row) of changes in place cell properties in juvenile rats, notably about spatial information (a,c) and inter-trial stability (b,d).
    || Model fits data about grid cell development (Wills etal 2010). [Data, Simulation] vs [spatial information, inter-trial stability]. x-axis [age (postnatal day), trial].
  • image p599fig16.35 Data (a) and simulations (b,c) about anatomically overlapping grid cell modules. (a) shows the anatomical distribution of grid cells belonging to different modules in one animal. DV location (mm) vs postrhinal border. (b) shows the simulated distribution of learned grid cell spacings from two stripe cell scales. frequency (%) vs grid spacing (cm). mu = [1, 0.6]. (c) shows what happens when half the cells respond with one rate and half another rate. (d) shows the same with three rates. (e-g) show spatial maps and autocorrelograms of grid cells that arise from the different rates in (d). [rate map, autocorrelogram] vs [score [1.07, 0.5, 0.67], spacing (cm) [23.58, 41, 63.64]].
    ||
  • image p602fig16.37 Data showing the effect of hippocampal inactivation by muscimol on grid cell firing before, during, and six hours after the muscimol, reading from left to right.
    || Hippocampal inactivation disrupts grid cells (Bonnevie etal 2013). muscimol inactivation. spikes on trajectory: [before, after min [6-20, 20-40, 40-60, 6h]]. rate map (Hz) [18.6, 11.4, 9.5, 6.7, 10.8]. spatial autocorrelogram g=[1.12, 0.05, -0.34, 0.09, 1.27].
  • image p603fig16.38 Role of hippocampal feedback in maintaining grid fields. (a) Data showing the effect of hippocampal inactivation before and during muscimol inhibition of hippocampal cells, as in Figure 16.37. (b) Model simulation with normal grid fields. (c) Model simulation that emulates the effect of hippocampal inhibition on grid fields.
    || (a) Data: hippocampal inactivation [before, after] cart [spikes on trajectory (p: [18.6, 6.7] Hz), spatial autocorrelogram (g= [1.12, 0.09])]. (b) Model: noise-free path integration, [spikes on trajectory (p: 14.56 Hz), rate map, spatial autocorrelogram (g= 1.41), dynamic autocorrelogram (g=0.6)]. (c) Model: noisy path integration + non-specific tonic inhibition, [spikes on trajectory (p: 11.33 Hz), rate map, spatial autocorrelogram (g= 0.05), dynamic autocorrelogram (g=0.047)].
  • image p605fig16.39 Data showing effects of medial septum (MS) inactivation on grid cells and network theta oscillations in medial entorhinal cortex (MEC). (A) Examples of disruption in the spatial expression of the hexagonal grid structure for two grid cells (Brandon etal 2011). (B) Temporal reduction in the power and frequency of network theta oscillations (Koenig etal 2011). (C) Temporary reduction in the gridness score, mean firing rate, and spatial stability of grid cells (Koenig etal 2011).
    || Disruptive effects of Medial Septum inactivation in Medial Entorhinal Cortex (Brandon etal 2011; Koenig etal 2011). (A) Rate map [rate map, spatial autocorrelations, trajectory] vs [baseline, sub-sampled, medial septum inactivation, 3-6 hour recovery, 24 hour recovery], [rate map (Hz- m, p), spatial autocorrelations (gridness)][ 1.2, 7.2, 1.1; 0.25, 1.7, 0.6; 0.25, 2.5, -0.53; 0.7, 5.1, 0.55; 1.0, 5.3, 1.3; 2.1, 15, 0.19; 1.7, 12, 0.71; 1.7, 3.2, -0.22; 1.8, 9.1, 0.68; 2.5, 13, 0.46]. (B) [normalized power at 7-9 Hz, frequency (Hz)] vs 5-minute periods. (C) [mean gridness score (+-SEM), mean firing rate (% of baseline), mean correlation coeff (+-SEM)] vs 10-minute periods.
  • image p607fig16.40 Effects of medial septum (MS) inactivation on grid cells. (a) Each row shows data and different data-derived measures of grid cell responsiveness, starting from the left with the baseline response to the middle column with maximal inhibition. (b) Data showing the temporary reduction in the gridness scores during MS inactivation, followed by recovery. (c) Simulation of the collapse in gridness, achieved by reduction in cell response rates to mimic reduced cholinergic transmission. (d,e) Simulations of the reduction in gridness scores in (d) by reduction of cell response rates, in (e) by changing the leak conductance. See the text for details.
    ||
  • Grossberg 2021 p229c2h0.60 SMART computer simulations demonstrate that a good enough match of a top-down expectation with a bottom-up feature pattern generates an attentive resonance during which the spikes of active cells synchronize in the gamma frequency range of 20-70 Hz (Figure 5.40). Many labs have reported a link between attention and gamma oscillations in the brain, including two articles published in 2001, one from the laboratory of Robert Desimone when he was at the National Institute of Mental Health in Bethesda (Fries, Reynolds, Rorie, Desimone 2001), and the other from the laboratory of Wolf Singer in Frankfurt (Engel, Fries, Singer 2001). ...
  • Grossberg 2021 p081c2h0.66 sub-section: How does one evolve a computational brain?
    The above discussion illustrates that no single step of theoretical derivation can derive a whole brain. One needs a method for deriving a brain in stages, or cycles, much as evolution has incrementally discovered ever more complex brains over many thousands of years. The following theoretical method has been successfully applied many times since I first used it in 1957. It embodies a kind of conceptual evolutionary process for deriving a brain.

    Because "brain evolution needs to achieve behavioural success", we need to start with data that embodiey indices of behavioral success. That is why, as illustrated in Figure 2.37 Modelling method and cycle, one starts with Behavioral Data from scores or hundreds of psychological experiments. These data are analyszed as the result of an individual adapting autonomously in real time to a changing world. This is the Arty of Modeling. It requires that one be able to infer from static data curves the dynamical processes that control individual behaviors occuring in real time. One of the hardest things that I teach to my students to do is "how to think in real time" to be able to carry out this speculative leap.

    Properly carried out, this analysis leads to the discovery of new Design Principles that are embodied by these behavioral processes. The Design Principles highlight the functional meaning of the data, and clarify how individual behaviors occurring in real time give rise to these static data curves.

    These principles are then converted into the simplest Mathematical Model using a method of minimal anatomies, which is a form of Occam's Razor.
  • p190 Howell: [neural microcircuits, modal architectures] used in ART -
    bottom-up filters | top-down expectations | purpose
    instar learning | outstar learning | p200fig05.13 Expectations focus attention: [in, out]star often used to learn the adaptive weights. Top-down expectations to select, amplify, and synchronize expected patterns of critical features, while suppressing unexpected features.
    LGN Lateral Geniculate Nucleus | V1 cortical area | p192fig05.06 focus attention on expected critical features
    EC-III entorhinal stripe cells | CA1 hippocampal place cells | p600fig16.36 entorhinal-hippocampal system has properties of an ART spatial category learning system, with hippocampal place cells as the spatial categories
    auditory orienting arousal | auditory category | p215fig05.28 How a mismatch between bottom-up and top-down patterns can trigger activation of the orienting system A and, with it, a burst of nonspecific arousal to the category level. ART Matching Rule: TD mismatch can suppress a part of F1 STM pattern, F2 is reset if degree of match < vigilance
    auditory stream with/without [silence, noise] gaps | perceived sound continues? | p419fig12.17 The auditory continuity illusion: Backwards in time - How does a future sound let a past sound continue through noise? Resonance!
    visual perception, learning and recognition of visual object categories | motion perception, spatial representation and target tracking | p520fig14.02 pART predictive Adaptive Resonance Theory. Many adaptive synapses are bidirectional, thereby supporting synchronous resonant dynamics among multiple cortical regions. The output signals from the basal ganglia that regulate reinforcement learning and gating of multiple cortical areas are not shown. Colour code: red - cognitive-emotional dynamics; green - working memory dynamics; black - see [bottom-up, top-down] lists.
    EEG with mismatch, arousal, and STM reset events | expected [P120, N200, P300] event-related potentials (ERPs) | p211fig05.21 Sequences of P120, N200, and P300 event-related potentials (ERPs) occur during oddball learning EEG experiments under conditions that ART predicted should occur during sequences of mismatch, arousal, and STM reset events, respectively.
    Cognitive | Emotional | p541fig15.02 nSTART neurotrophic Spectrally Timed Adaptive Resonance Theory: Hippocampus can sustain a Cognitive-Emotional resonance that can support "the feeling of what happens" and knowing what event caused that feeling. Hippocampus enables adaptively timed learning that can bridge a trace conditioning gap, or other temporal gap, between Conditioned Stimulus (CS) and Unconditioned Stimulus (US).

    background colours in the table signify :
  • image p085fig02.38 Our models have been used in many large-scale applications to engineering and technology. Linking brain to behavior explains how brain mechanisms give rise to psychological functions, and do so autonomously. The combination of mechanism, function, and autonomy helps to explain their value in helping to solve outstanding problems in technology.
    || Modeling method and cycle.
    Behavioral data -(art of modeling)-> Design principles <- Neural data <-(brain predictions)- Mathematical model and analysis -(behavioral predictions)-> Behavioural data
    Technology: Mathematical model and analysis <-> Technological applications
    At every stage, spin off new model designs and mechanisms to technologists who need autonomous intelligent applications.
  • image p225fig05.33 Some early ARTMAP benchmark studies. These successes led to the use of ARTMAP, and many variants that we and other groups have developed, in many large-scale applications in engineering and technology, a use that has not abated even today.
    || see Early ARTMAP benchmark studies
  • image p490fig13.15 The CogEM circuit is an ancient design that is found even in mollusks like Aplysia. See the text for details.
    || Aplysia (Buonomano, Baxter, Byrne, Neural Networks 1990; Grossberg, Behavioral and Brain Sciences 1983). Facilitator neuron ~ drive representation.
  • image p563fig15.33 The basal ganglia gate neural processing in many parts of the brain. The feedback loop through the lateral orbitofrontal cortex (blue arrow, lateral orbitofrontal) is the one that MOTIVATOR models.
    || MOTIVATOR models one of several thalamocortical loops through basal ganglia (Adapted from Fundamental Neuroscience. 2002 Copyright Elsevier). [cortex-> striatum-> pallidum S. nigra-> thalamus] vs [motor, oculomotor, dorsolateral prefrontal, lateral orbitofrontal, anterior cingulate]. thalamus-> [striatum, cortex].
  • image p563fig15.34 The colored regions are distinct parts of the basal ganglia in the loops depicted in Figure 15.33.
    || Distinct basal ganglia zones for each loop (Adapted from Fundamental Neuroscience. 2002 Copyright Elsevier).
  • p370 Chapter 11 means (Grossberg 2021) page 370, Chapter 11
    p002sec Illusion and reality means (Grossberg 2021) page 2, section Illusion and reality
    p013fig01.09 means (Grossberg 2021) page 13, Figure 1.09 (1.9 as in book)
    p030tbl01.02 means (Grossberg 2021) page 30, Table 1.02 (1.2 as in book)
    p111c2h0.5 means (Grossberg 2021) page 111, column 2, height from top as fraction of page height
    || text... are notes in addition to [figure, table] captions, mostly consisting of text within the image, but also including quotes of text in the book. Rarely, they include comments by Howell preceded by "Howell". The latter are distinct from "readers notes" (see, for example: reader Howell notes).
    p044 Howell: grepStr
  • p00I Preface - Biological intelligence in sickness, health, and technology
  • p001 Chapter 1 Overview - From Complementary Computing and Adaptive Resonance to conscious awareness
  • p050 Chapter 2 How a brain makes a mind - Physics and psychology split as brain theories were born
  • p086 Chapter 3 How a brain sees: Constructing reality - Visual reality as illusions that explain how we see art
  • p122 Chapter 4 How a brain sees: Neural mechanisms - From boundary completion and surface filling-in to figure-ground perception
  • p184 Chapter 5 Learning to attend, recognize, and predict the world -
  • p250 Chapter 6 Conscious seeing and invariant recognition - Complementary cortical streams coordinate attention for seeing and recognition
  • p280 Chapter 7 How do we see a changing world? - How vision regulates object and scene persistence
  • p289 Chapter 8 How we see and recognize object motion - Visual form and motion perception obey complementary laws
  • p337 Chapter 9 Target tracking, navigation, and decision-making - Visual tracking and navigation obey complementary laws
  • p353 Chapter 10 Laminar computing by cerebral cortex - Towards a unified theory of biological and artificial intelligence
  • p370 Chapter 11 How we see the world in depth - From 3D vision to how 2D pictures induce 3D percepts
  • p404 Chapter 12 From seeing and reaching to hearing and speaking - Circular reaction, streaming, working memory, chunking, and number
  • p480 Chapter 13 From knowing to feeling - How emotion regulates motivation, attention, decision, and action
  • p517 Chapter 14 How prefrontal cortex works - Cognitive working memory, planning, and emotion conjointly achieve valued goals
  • p539 Chapter 15 Adaptively timed learning - How timed motivation regulates conscious learning and memory consolidation
  • p572 Chapter 16 Learning maps to navigate space - From grid, place, and time cells to autonomous mobile agents
  • p618 Chapter 17 A universal development code - Mental measurements embody universal laws of cell biology and physics
  • image pxvifig00.01 Macrocircuit of the visual system
  • image p002fig01.01 The difference between seeing and recognizing.
    || (W. Epstein, R. Gregory, H. von Helmholtz, G. Kanizsa, P. Kellman, A. Michotte...) Seeing an object vs Knowing what it is. Seeing Ehrenstein illusion (see, recognize) vs Recognizing offset grating (do not see, recognize). offset grating: some boundaries are invisible or amodal.
  • image p002fig01.02 Dalmation in snow
    || p002c2h0.55 "...This image reminds us that invisible boundaries can sometimes be very useful in helping us to recognize visual objects in the world. ... When we first look at this picture, it may just look like an array of black splotches of different sizes, densities, and orientations across the picture. Gradually, however, we can recognize the Dalmatian in it as new boundaries form in our brain between the black splotches. ..."
  • image p003fig01.03 Amodal completion
    || p003c1h0.75 "... Figure 1.3 illustrates what I mean by the claim that percepts derived from pictures are often illusions. Figure 1.3 (left column) shows three rectangular shapes that abut one another. Our percept of this image irresistibly creates a different interpretation, however. We perceive a horizontal bar lying in front of a partially occluded vertical bar that is amodally completed behind it. ..."
  • image p004fig01.04 (top row) Kanizsa stratification; (bottom row) transparency images
    || [top row images] "... are called stratification percepts... This simple percept can ... be perceived either as a white cross in front of a white outline square, or as a white outline square in front of a white cross. The former percept usually occurs, but the percept can intermittently switch between these two interpretations. ...it is said to be a bistable percept. ..."
  • image p008fig01.05 Noise-saturation dilemma.
    || cell activity vs cell number; [minimum, equilibrium, current, maximal] activity
  • image p009fig01.06 Primacy gradient of activity stored in working memory within a recurrent shunting on-center off-surround network. Rehearsal is controlled by a nonspecific rehearsal wave and self-inhibitory feedback of the item that is currently being rehearsed. Green = excitatory, red = inhibitory.
    || inputs? -> item and order WM storage -> competitive selection-> rehearsal wave -> outputs
  • image p011fig01.07 The choice of signal function f determines how an initial activity pattern will be transformed and stored in short-term memory (STM). Among [same, slower, faster]-than-linear signal functions, only the last one can suppress noise. It does so as it chooses the population that receives the largest input for storage, while suppressing the activities of all other populations, thereby giving rise to a winner-take-all choice.
    || initial pattern (xi(0) vs i):
    f | Xi(∞) = xi(∞)/sum[j: xj(∞)] | x(∞)
    linear | perfect storage of any pattern | amplifies noise (or no storage)
    slower-than-linear | saturates | amplifies noise
    faster-than-linear | chooses max [winner-take-all, Bayesian], categorical perception | suppresses noise, [normalizes, quantizes] total activity, finite state machine
  • image p012fig01.08 A sigmoidal signal function is a hybrid signal that combines the best properties of [faster, same, slower]-than-linear signals. It can suppress noise and store a partially contrast-enhanced activity pattern. slower-than-linear saturates pattern; approximately linear- preserves pattern and normalizes; faster-than-linear- noise suppression and contrast-enhancement.
    || Sigmoidal signal: a hybrid. (upper) saturates pattern- slower-than-linear; (middle) preserves pattern and normalizes- approximately linear. (lower) noise suppression and contrast enhancement- faster-than-linear.
  • image p013fig01.09 A sigmoid signal function generates a quenching threshold below which cell activities are treated like noise and suppressed. Activities that are larger than the quenching threshold are contrast enhanced and stored in short-term memory.
    || Quenching threshold. xi(0) vs i.
    f | Xi(∞) = xi(∞)/sum[j: xj(∞)] | x(∞)
    sigmoid | tunable filter; stores infinitely many contrast-enhanced patterns | suppresses noise
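  • Howell: a minimal Python sketch (mine, not from the book; parameters are illustrative assumptions) of a recurrent shunting on-center off-surround network with a sigmoid signal function, integrated by forward Euler. It illustrates the quenching threshold: initial activities that are too small are treated as noise and suppressed, while larger ones are contrast-enhanced and stored.
```python
# Sketch of x_i' = -A*x_i + (B - x_i)*f(x_i) - x_i*sum_{k!=i} f(x_k)
# with a sigmoid signal f(w) = w^2/(K + w^2); A, B, K, dt are assumptions.
import numpy as np

A, B, K, dt = 1.0, 2.0, 0.25, 0.01

def f(w):
    return w**2 / (K + w**2)  # sigmoid: faster-than-linear at small w,
                              # slower-than-linear at large w

x = np.array([0.05, 0.10, 0.30, 0.50, 0.40])  # initial stored pattern x_i(0)
for _ in range(5000):                          # integrate toward equilibrium
    s = f(x)
    x = np.clip(x + dt * (-A*x + (B - x)*s - x*(s.sum() - s)), 0.0, B)

print(np.round(x, 3))  # small activities quenched; larger ones enhanced
```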
  • image p016fig01.10 The blocking paradigm shows how sensory cues that are conditioned to predict specific consequences can attentionally block other cues that do not change those predictions. On the other hand, if the total cue context is changed by adding a cue that does not change the predicted consequences, then the new cues can be conditioned to the direction of that change. They can hereby learn, for example, to predict fear if the shock level unexpectedly increases, or relief if the shock level unexpectedly decreases.
    || Minimal adaptive prediction. blocking- CS2 is irrelevant, unblocking- CS2 predicts US change. Learn if CS2 predicts a different (novel) outcome than CS1. CS2 is not redundant.
  • image p016fig01.11 A sufficiently big mismatch between a bottom-up input pattern and a top-down expectation can activate the orienting system, which triggers a burst of nonspecific arousal that can reset the recognition category that read out the expectation. In this way, unexpected events can reset short-term memory and initiate a search for a category that better represents the current situation.
    || [category- top-down (TD) expectation; Bottom-up (BU) input pattern] -> Feature pattern -> BU-TD mismatch -> orienting system -> non-specific arousal -> category.
  • image p018fig01.12 Peak shift and behavioural contrast. When a negative generalization gradient (in red) is subtracted from a positive generalization gradient (in green), the net gradient (in purple) is shifted away from the negative gradient and has a width that is narrower than any of its triggering gradients. Because the total activity of the network tends to be normalized, the renormalized peak of the net gradient is higher than that of the rewarded gradient, thereby illustrating that we can prefer experiences that we have never previously experienced over those for which we have previously been rewarded.
    ||
  • image p019fig01.13 Affective circuits are organized into opponent channels, such as fear vs. relief, and hunger vs. frustration. On a larger scale of affective behaviours, exploration and consummation are also opponent types of behaviour. Exploration helps to discover novel sources of reward. Consummation enables expected rewards to be acted upon. Exploration must be inhibited to enable an animal to maintain attention long enough upon a stationary reward in order to consume it.
    || exploration vs consummation
  • image p023fig01.14 A gated dipole opponent process can generate a transient antagonistic rebound from its OFF channel in response to offset of an input J to its ON channel. sustained on-response; transient off-response; opponent process; gates arousal: energy for rebound.
    ||
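  • Howell: a hypothetical gated dipole sketch in Python (my own toy parameters, not the book's). The ON channel receives tonic arousal I plus a phasic input J; both channels are gated by habituative transmitters. At J's offset, the less-habituated OFF channel transiently wins, producing the antagonistic rebound described above.
```python
# ON/OFF signals S1, S2 are gated by transmitters y1, y2 that habituate as
# y' = a*(b - y) - S*y; opponent outputs are the rectified differences.
I, J = 0.5, 1.0                # tonic arousal, phasic ON input
a, b, dt = 0.1, 1.0, 0.01      # transmitter rate, target level, Euler step
y1 = y2 = b
for step in range(60001):
    t = step * dt
    S1 = I + (J if 100 <= t < 300 else 0.0)  # ON channel signal
    S2 = I                                   # OFF channel signal
    y1 += dt * (a*(b - y1) - S1*y1)          # gates habituate while in use
    y2 += dt * (a*(b - y2) - S2*y2)
    on, off = S1*y1, S2*y2
    if step % 10000 == 0:                    # t = 0, 100, ..., 600
        print(f"t={t:5.0f}  ON={max(on-off,0):.3f}  OFF={max(off-on,0):.3f}")
# expected: ON transient at onset that habituates; OFF rebound just after
# t=300 that decays as the gates re-equilibrate.
```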
  • image p024fig01.15 A REcurrent Associative Dipole, or READ, circuit is a recurrent shunting on-center off-surround network with habituative transmitter gates. Sensory cues sample it with LTM traces and thereby become conditioned reinforcers.
    ||
  • image p025fig01.16 (left panel) The main processing stages of the Cognitive-Emotional-Motor (CogEM) model have anatomical interpretations in terms of sensory cortex, amygdala, and prefrontal cortex. Chapter 13 will describe in greater detail how CS cues activate invariant object categories in the sensory cortex, value categories in the amygdala, and object-value categories in the prefrontal cortex, notably the orbitofrontal cortex. The amygdala is also modulated by internal drive inputs like hunger and satiety. (right panel) Anatomical data support this circuit, as do many neurophysiological data.
    || drive -> amygdala -> prefrontal cortex <-> sensory cortex -> amygdala. [visual, somatosensory, auditory, gustatory, olfactory] cortex -> [amygdala, Orbital Prefrontal Cortex]. amygdala -> Lateral Prefrontal Cortex
  • image p025fig01.17 Sensory-drive heterarchy vs. drive hierarchy. How cues and drives interact to choose the drive and motivation that will control behavioral choices.
    || [drive inputs, sensory cue [before, after] cross-over] -> incentive motivation [eat, sex].
  • image p026fig01.18 Inverted U as a function of arousal. A Golden Mean at intermediate levels of arousal generates a combination of behavioral threshold, sensitivity, and activation that can support typical behaviors. Both underarousal and overarousal lead to symptoms that are found in mental disorders.
    || Behavior vs arousal.
    depression | under-aroused | over-aroused
    threshold | elevated | low
    excitable above threshold | Hyper | Hypo
    "UPPER" brings excitability "DOWN".
  • image p027fig01.19 The ventral What stream is devoted to perception and categorization. The dorsal Where stream is devoted to spatial representation and action. The Where stream is also often called the Where/How stream because of its role in the control of action.
    ||
    Spatial representation of action | Perception & categorization
    WHERE dorsal | WHAT ventral
    Parietal pathway "where" | Temporal pathway "what"
    Posterior Parietal Cortex (PPC) | Inferior Temporal Cortex (IT)
    Lateral Prefrontal Cortex (LPFC) | Lateral Prefrontal Cortex (LPFC)
  • image p029tbl01.01 Some pairs of complementary processing streams.
    ||
    visual boundary: interblob stream V1-V2-V4 | visual surface: blob stream V1-V2-V4
    visual boundary: interblob stream V1-V2-V4 | visual motion: magno stream V1-MT-MST
    WHAT stream: perception & recognition: inferotemporal & prefrontal areas | WHERE stream: space & action: parietal & prefrontal areas
    object tracking: MT interbands & MSTv | optic flow navigation: MT+ bands & MSTd
    motor target position: motor & parietal cortex | volitional speed: basal ganglia
  • image p030tbl01.02 The What and Where cortical processing streams obey complementary laws. These laws enable the What stream to rapidly and stably learn invariant object categories without experiencing catastrophic forgetting, while the Where stream learns labile spatial and action representations to control actions that are aimed towards these objects.
    ||
    WHAT | WHERE
    spatially-invariant object learning and recognition | spatially-variant reaching and movement
    fast learning without catastrophic forgetting | continually update sensory-motor maps and gains
    IT InferoTemporal Cortex | PPC Posterior Parietal Cortex
    matching: excitatory (What) | inhibitory (Where)
    learning: match (What) | mismatch (Where)
  • image p030fig01.20 A schematic cross-section of a slice of laminar neocortex whose cells are organized in a characteristic way in six layers, which themselves may be organized into distinct sublaminae. The computational paradigm of Laminar Computing attempts to show how different parts of neocortex can represent and control very different kinds of behavior - including vision, speech, and cognition - using specializations of the same canonical laminar cortical design.
    || Projection fibres: Cortico[spinal, bulbar, pontine, striate, reticular, etc]; Thalamocortical fibres; Diffuse cortical afferent fibres: [nonspecific thalamocortical, Cholinergic, Monoaminergic]; Corticocortical efferents; Projection [cell, fibre]; Corticocortical efferent terminals.
  • image p032fig01.21 At least three parallel visual cortical streams respond to visual inputs that reach the retina. Two parvocellular streams process visual surfaces (blob stream) and visual boundaries (interblob stream). The magnocellular stream processes visual motion.
    || [Retina, LGNs, V[1,2,3,4], MT] to [What- inferotemporal areas, Where- parietal areas]: visual parallel streams [2x blob, 1x bound]
  • image p035fig01.22 A classical example of phonemic restoration. The spectrogram of the word "legislatures" is either excised, leaving a silent interval, or filled with broad-band noise. A percept of the restored phoneme is heard when it is replaced by noise, but not by silence.
    || [normal, silence, noise replaced] presentations. frequency (Hz) vs time (sec).
  • image p036fig01.23 As more items are stored in working memory through time, they can select larger chunks with which to represent the longer list of stored items.
    || [x, y, z] -> [xy, xyz]
  • image p037fig01.24 Only three processing stages are needed to learn how to store and categorize sentences with repeated words in working memory. See the text for more discussion.
    || IOR working memory (item chunk-> sequences) <-> IOR masking field: [item->list]<->[list->list] chunks. (<-> signifies <- expectation/attention, adaptive filter ->)
  • image p038fig01.25 The ART Matching Rule stabilizes real time learning using a [top-down, modulatory on-center, off-surround] network. Object attention is realized by such a network. See text for additional discussion.
    || ART Matching Rule [volition, categories, features]. [one, two] against one.
  • image p039tbl01.03 The link between consciousness and movement
    ||
    VISUALseeing, knowing, and reaching
    AUDITORYhearing, knowing, and speaking
    EMOTIONALfeeling, knowing, and acting
  • image p042tbl01.04 The six main kinds of resonances which support different kinds of conscious awareness that will be explained and discussed in this book.
    ||
    type of resonancetype of consciousness
    surface-shroudsee visual object or scene
    feature-categoryrecognize visual object or scene
    stream-shroudhear auditory object or stream
    spectral-pitch-and-timbrerecognize auditory object or stream
    item-listrecognize speech and language
    cognitive-emotionalfeel emotion and know its source
  • image p051fig02.01 Along the boundaries between adjacent shades of gray, lateral inhibition makes the darker area appear even darker, and the lighter areas appear even lighter. (Ernst Mach bands)
    ||
  • image p052fig02.02 Feature-category resonances enable us to rapidly learn how to recognize objects without experiencing catastrophic forgetting. Attentive matching between bottom-up feature pattern inputs and top-down expectations prevents catastrophic forgetting by focussing object attention upon expected patterns of features, while suppressing outlier features that might otherwise have caused catastrophic forgetting if they were learned also.
    || Adaptive Resonance. Attended feature clusters reactivate bottom-up pathways. Activated categories reactivate their top-down pathways. Categories STM, Feature patterns STM. Feature-Category resonance [synchronize, amplify, prolong]s system response. Resonance triggers learning in bottom-up and top-down adaptive weights: adaptive resonance!
  • image p057fig02.03 Some basic anatomical and physiological properties of individual neurons. See the text for additional discussion.
    ||
    physiology | cell body potential | axonal signal | chemical transmitter
    anatomy | nerve cell body | axon | synaptic knob, synapse
  • image p058fig02.04 Serial learning paradigm: Learning the temporal order of events by practicing them in the order that they occur in time.
    || Learning a global arrow in time. How do we learn to encode the temporal order of events in LTM? serial learning. [w=intra, W=inter]trial interval. "... data about serial verbal learning (Figure 2.4) seemed to suggest that events can go "backwards in time". ..."
  • image p059fig02.05 Bowed serial position curve. This kind of data emphasizes the importance of modelling how our brains give rise to our minds using nonlinear systems of differential equations.
    || Effects of [inter, intra]trial intervals (Hovland 1938). # of errors vs list position. [w (sec), W (sec)] = (2 6) (4 6) (2 126) (4 126). Nonoccurrence of future items reduces the number of errors in response to past items. These data require a real-time theory for their explanation! that is, DIFFERENTIAL equations.
  • image p059fig02.06 The bowed serial position curve illustrates the sense in which "events can go backwards in time" during serial learning.
    || Bow due to backward effect in time. If the past influenced the future, but not conversely: # of errors vs list position; Data (Hovland, Hull, Underwood, etc).
  • image p060fig02.07 Position-specific-forward and backward error gradients illustrate how associations can form in both the forward and backward directions in time before the list is completely learned.
    || Error gradients: depend on list position. # of responses vs list position:
    list beginning | anticipatory errors | forward in time
    list middle | anticipatory and perseverative errors | forward and backward in time
    list end | perseverative errors | backward in time
  • image p061fig02.08 The existence of forward and backward associations, such as from A to B and from B to A is naturally explained by a network of neurons with their own activities or STM traces, and bidirectional connections between them with their own adaptive weights or LTM traces.
    || How these results led to neural networks (Grossberg 1957). Networks can learn forward and backward associations! Practice A->B, also learn B<-A. Because learning AB is not the same as learning BA, you need STM traces, or activations, xi at the nodes, or cells, and LTM traces, or adaptive weights, zij, for learning at the synapses.
  • image p063fig02.09 The Additive Model describes how multiple effects add up to influence the activities, or STM traces, of neurons.
    || STM: Additive model (Grossberg, PNAS 1967, 1968).
    xi(t) | Short-term memory (STM) trace, or activation
    fi(xi(t))*Bij | signal along the pathway from node i to node j
    zij(t) | adaptive weight, or Long-term memory (LTM) trace
    xj(t) | activity of the target node j
    d[dt: xi(t)] = -Ai*xi + sum[j=1 to n: fj(xj(t))*Bji*zji] - sum[j=1 to n: gj(xj(t))*Cji*Zji] + Ii
    (terms, left to right: passive decay | positive feedback | negative feedback | input) (Howell: learning rate?)
    Special case: d[dt: xi(t)] = -Ai*xi + sum[j=1 to n: fj(xj(t))*zji] + Ii
  • image p064fig02.10 The Shunting Model includes upper and lower bounds on neuronal activities. These bounds have the effect of multiplying additive terms by excitatory and inhibitory automatic gain terms that enable such models to preserve their sensitivity to inputs whose size may vary greatly through time, while also approximately normalizing their total activities.
    || STM: Shunting Model (Grossberg, PNAS 1967, 1968). Mass action in membrane equations. Bi/Ci -> xi(t) -> O -> -Fi/Ei. Bounded activations, automatic gain control. d[dt: xi(t)] = -Ai*xi + (Bi - Ci*xi)*(sum[j=1 to n: fj(xj(t))*Dji*yji*zji] + Ii) - (Ei*xi + Fi)*(sum[j=1 to n: gj(xj(t))*Gji*Yji*Zji] + Ji). Includes the Additive Model.
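  • Howell: a minimal Python sketch (mine, not from the book) of the shunting equation above, with the feedback signals and gating terms (fj, Dji, yji, zji, etc.) dropped so that only feedforward inputs drive each cell. Its equilibrium, xi = B*Ii/(A + sum[k: Ik]), shows the automatic gain control and ratio processing derived in p074fig02.23.
```python
# x_i' = -A*x_i + (B - x_i)*I_i - x_i*J_i, where I_i is the on-center input
# and J_i the summed off-surround input. Constants are illustrative.
import numpy as np

A, B, dt = 1.0, 1.0, 0.001
I = np.array([10.0, 30.0, 60.0])   # excitatory on-center inputs
J = I.sum() - I                    # inhibitory off-surround inputs
x = np.zeros(3)
for _ in range(20000):             # forward Euler to equilibrium
    x += dt * (-A*x + (B - x)*I - x*J)

print(np.round(x, 4))                   # activities track input RATIOS
print(np.round(B * I / (A + I.sum()), 4))  # closed-form equilibrium check
```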
  • image p064fig02.11 Medium-Term Memory (MTM) and Long-Term Memory (LTM) equations complement the Additive and Shunting Models of STM. MTM is typically defined by a chemical transmitter that is released from the synaptic knobs of a neuron (Figure 2.03). Its release or inactivation in an activity-dependent way is also called habituation. LTM defines how associative learning occurs between a pair of neurons whose activities are approximately correlated through time. See the text for details.
    || Medium and Long Term memory.
    MTM | habituative transmitter gate | d[dt: yki(t)] = H*(K - yki) - L*fk(xk)*yki
    LTM | gated steepest descent learning | d[dt: zki(t)] = Mk*fk(xk)*(hi(xi) - zki)
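  • Howell: a hedged Python sketch (mine; constants are assumptions) of the two laws above: a habituative transmitter gate y that depletes while its cell is active and recovers otherwise, and a gated-steepest-descent LTM trace z that learns only while the presynaptic sampling signal f(xk) is on.
```python
H, K, L, M, dt = 0.05, 1.0, 1.0, 0.1, 0.01

def f(w): return max(w, 0.0)   # presynaptic sampling signal
def h(w): return max(w, 0.0)   # postsynaptic sampled activity

y, z = K, 0.0                  # gate starts full; weight starts empty
for step in range(50000):
    xk, xi = (1.0, 0.8) if step < 25000 else (0.0, 0.0)  # pair, then rest
    y += dt * (H*(K - y) - L*f(xk)*y)   # MTM: depletes while x_k is active
    z += dt * (M*f(xk)*(h(xi) - z))     # LTM: learning gated by f(x_k)
print(round(y, 3), round(z, 3))  # y recovers toward K; z retains ~0.8
```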
  • image p065fig02.12 Three sources of neural network research: [binary, linear, continuous nonlinear]. My own research has contributed primarily to the third.
    || Three sources of neural network research.
    Binary | Linear | Continuous and non-Linear
    neural network signal processing | Systems theory | Neurophysiology and Psychology
    Binary: McCulloch-Pitts 1943, Xi(t+1) = sgn{sum[j: Aij*Xj(t)] - Bi}; Von Neumann 1945; Caianiello 1961; digital computer.
    Linear: Rosenblatt 1962; Widrow 1962; Anderson 1968; Kohonen 1971; Y = A*X; cross-correlate; steepest descent.
    Continuous and non-Linear: Hodgkin, Huxley 1952; Hartline, Ratliff 1957; Grossberg 1967; Von der Malsburg 1973.
  • image p068fig02.13 Hartline's neurophysiological studies of the limulus retina.
  • image p068fig02.14 Hodgkin and Huxley developed a model to explain how spikes travel down the squid giant axon.
    || Neurophysiology (single cell): spike potentials in squid giant axon (Hodgkin, Huxley 1952, Nobel Prize). time -> (dendrites -> cell body -> axon).
    C*d[dt: V] = α*d^2[dx^2: V] + (V(+) - V)*g(+) + (V(-) - V)*g(-) + (V(p) - V)*g(p)
    g(+) = G(+)(m,h), g(-) = G(-)(n), g(p) = const; [m, h, n] - ionic processes, V - voltage
    Precursor of Shunting network model (Rall 1962). (Howell: see p075fig02.24 Membrane equations of neurophysiology, Shunting equation.)
  • image p071fig02.15 The noise saturation dilemma: How do neurons retain their sensitivity to the relative sizes of input patterns whose total sizes can change greatly through time?
    || Noise-Saturation Dilemma (Grossberg 1968-1973). Bounded activities from multiple input sources.
    If activities xi are sensitive to SMALL inputs, then why don't they saturate at LARGE inputs?
  • image p071fig02.16 To solve the noise-saturation dilemma, individual neurons in a network that is receiving distributed spatial patterns of inputs need to remain sensitive to the ratio of the input to them divided by the sum of all the inputs in that spatial pattern. Although the inputs are delivered to a finite number of neurons, the input and activity patterns are drawn continuously across the cells for simplicity.
    || Noise-Saturation Dilemma. [Ii, xi] vs t. [Input, Activity] pattern [small -> noise, large -> saturation]. Problem: remain sensitive to input RATIOS θi = Ii / sum[j: Ij] as total input I = sum[j: Ij] -> ∞. Many kinds of data exhibit sensitivity to ratios of inputs.
  • image p072fig02.17 Brightness constancy.
    || Vision: brightness constancy, contrast normalization. Compute RATIOS of reflected light. Reflectance processing. p72c1h0.45 "... In other words, the perceived brightness of the gray disk is constant despite changes in the overall illumination. On the other hand, if only the gray disk were illuminated at increasing intensities, with the annulus illuminated at a constant intensity, then the gray disk would look progressively brighter. ..."
  • image p072fig02.18 Brightness contrast.
    || Vision: brightness contrast. Conserve a total quantity: total activity normalization.
    LUCE | Ratio scales in choice behavior
    ZEILER | Adaptation level theory
  • image p073fig02.19 Computing with cells: infinity does not exist in biology!
    || Computing in a bounded activity domain, Gedanken experiment (Grossberg 1970). Vm sub-areas [xm, B - xm], I (all m), m = [1, ..., i, ...].
    B | excitable sites
    xi(t) | excited sites (activity, potential)
    B - xi(t) | unexcited sites
  • image p073fig02.20 Shunting saturation occurs as inputs to non-interacting cells get larger.
    || Shunting saturation. [xi(t), B - xi(t)].
    d[dt: xi] = -A*xi + (B - xi)*Ii
    (a) spontaneous decay of activity xi to equilibrium
    (b) turn-on of unexcited sites B - xi by inputs Ii (mass action)
    Inadequate response to a SPATIAL PATTERN of inputs: Ii(t) = θi*I(t)
    θi | relative intensity (cf. reflectance)
    I(t) | total intensity (cf. luminance)
  • image p073fig02.21 How shunting saturation turns on all of a cell's excitable sites as inputs grow.
  • image p073fig02.22 An on-center off-surround network is capable of computing input ratios.
    || Computing with patterns.
    How to compute the pattern-sensitive variable: θi = Ii / sum[k=1 to n: Ik]?
    Needs interactions! What type? θi = Ii / sum[k ≠ i: Ik]
    Ii↑ ⇒ θi↑ excitation; Ik↑ ⇒ θi↓, k ≠ i: inhibition
    On-center off-surround network.
  • image p074fig02.23 The equations for a shunting on-center off-surround network. Shunting terms lead to many beautiful and important properties of these networks, which are found ubiquitously, in one form or another, in all cellular tissues.
    || Shunting on-center off-surround network.
    Mass action: d[dt: xi] = -A*xi +(B - xi)*Ii -xi*sum[k≠i: Ik]
    Turn on unexcited sitesTurn off excited sites
    At equilibrium:
    0 = d[dt: xi] = -(A + Ii + sum[k≠i: Ik])*xi + B*Ii = -(A + I)*xi + B*Ii
    xi = B*Ii/(A + I) = B*θi*I/(A + I) = θi* B*I/(A + I)No saturation!
    Infinite dynamical range
    Automatic gain control
    Compute ratio scale
    Weber law
    x = sum[k=1 to n: xk] = B*I/(A + I) ≤ B Conserve total activity
    NORMALIZATION
    Limited capacity
    Real-time probability
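  • Howell: a quick numeric check (mine) of the equilibrium formula above, xi = θi*B*I/(A + I): as total input I grows, the ratios θi are preserved and total activity saturates at B, i.e. normalization without saturation of the pattern.
```python
A, B = 1.0, 1.0
theta = [0.1, 0.3, 0.6]                    # input ratios (reflectances)
for I in (1.0, 10.0, 100.0, 1000.0):       # total input intensity
    x = [th * B * I / (A + I) for th in theta]
    print(f"I={I:7.1f}  x={[round(v, 4) for v in x]}  total={sum(x):.4f}")
# ratios stay 1:3:6 at every intensity; total activity approaches B = 1
```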
  • image p075fig02.24 The membrane equations of neurophysiology describe how cell voltages change in response to excitatory, inhibitory, and passive input channels. Each channel is described by a potential difference multiplied by a conductance. With the special choices shown in the lower right-hand corner, this equation defines a feedforward shunting on-center off-surround network.
    || Membrane equations of neurophysiology.
    C*d[dt: V] = (V(+) - V)*g(+) +(V(-) - V)*g(-) +(V(p) - V)*g(p)
    Shunting equation (not additive)
    V Voltage
    V(+), V(-), V(p) Saturating voltages
    g(+), g(-), g(p) Conductances
    V(+) = B, C = 1; V(-) = V(p) = 0; g(+) = Ii; g(-) = sum[k≠i: Ik];
    lower bound of V: V(-) = V(p), Silent inhibition; upper bound of V: V(+). (Howell: see p068fig02.14 Hodgkin-Huxley membrane equations.)
  • image p076fig02.25 An on-center off-surround network can respond to increasing on-center excitatory inputs without a loss of sensitivity. Instead, as the off-surround input increases, the region of a cell's maximal sensitivity shifts to higher input intensities.
  • image p076fig02.26 The mudpuppy retina exhibits the shift property that occurs in the feedforward shunting on-center off-surround network in Figure 2.25. As a result, its sensitivity also shifts in response to different background off-surrounds, and therefore exhibits no compression (dashed purple lines).
    || Mudpuppy retina neurophysiology.
    I center, J background
    a) Relative figure-to-ground
    b) Weber-Fechner I*(A + J)^(-1)
    c) No hyperpolarization, SHUNT: Silent inhibition
    d) Shift property(Werblin 1970) xi(K,J) vs K = ln(I)
    Adaptation- sensitivity shifts for different backgrounds. NO COMPRESSION.
  • image p077fig02.27 A schematic of the on-center off-surround network that occurs in the mudpuppy retina, including three main cell types: receptors, horizontal cells, and bipolar cells.
    || Mechanism: cooperative-competitive dynamics.
    On-center off-surround (Kuffler 1953) cat retina
    Subtractive lateral inhibition (Hartline, Ratliff 1956/7+) limulus retina.
    R receptor -> H horizontal -> B bipolar (Werblin, Dowling, etal 1969+) mudpuppy retina.
  • image p077fig02.28 Silent inhibition is replaced by hyperpolarization when the inhibitory saturating potential is smaller than the passive saturating potential. Then an adaptation level is created that determines how big input ratios need to be to activate their cells.
    || Weber Law and adaptation level.
    Hyperpolarization vs Silent inhibition
    d[dt: xi] = -A*xi +(B - xi)*Ii -(xi + C)*sum[k≠i: Ik]
    At equilibrium:
    0 = d[dt: xi] = -(A + Ii + sum[k≠i: Ik])*xi +B*Ii -C*sum[k≠i: Ik]
    = -(A + I)*xi +(B + C)*Ii -C*I
    = -(A + I)*xi +(B + C)*I*[θi -C/(B + C)]
    xi = (B + C)*I/(A + I)* [θi -C/(B + C)]
    (terms, left to right: Weber Law | Reflectance | Adaptation level)
  • image p078fig02.29 How the adaptation level is chosen to enable sufficiently distinct inputs to activate their cells.
    || Weber Law and adaptation level.
    xi = (B + C)*I/(A + I)* [θi -C/(B + C)]
    (terms, left to right: Weber Law | Reflectance | Adaptation level)
    V(+) >> V(-) ⇒ B >> C ⇒ C/(B + C) << 1
    Adaptation level theory (Zeiler 1963).
  • image p078fig02.30 Choosing the adaptation level to achieve informational noise suppression.
    || Noise suppression. Attenuate Zero Spatial frequency patterns: no information. Ii vs i (flat line), xi vs i (flat line at zero)
    B >> C: Try B = (n - 1)*C or C/(B + C) = 1/n
    Choose a uniform input pattern (no distinctive features): All θi = 1/n
    xi = (B + C)*I/(A + I)*[θi -C/(B + C)] = 0 no matter how intense I is.
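    A minimal numerical sketch (Python/numpy; illustrative parameters) of noise suppression: with B = (n - 1)*C the adaptation level is 1/n, so a uniform pattern is quenched at any intensity while contrasted features survive:
      import numpy as np
      n, A, C = 5, 1.0, 1.0
      B = (n - 1) * C                          # noise-suppression choice: C/(B + C) = 1/n
      def equilibrium(theta, I):
          return (B + C) * I / (A + I) * (theta - C / (B + C))
      print(equilibrium(np.full(n, 1.0 / n), I=1000.0))                      # all zeros, however intense I is
      print(equilibrium(np.array([0.4, 0.25, 0.15, 0.1, 0.1]), I=1000.0))    # only theta_i > 1/n stay positive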
  • image p078fig02.31 How noise suppression enables matching of bottom-up and top-down input patterns.
    || Noise suppression -> pattern matching. mismatch (out of phase) suppressed, match (in phase) amplifies pattern.
  • image p079fig02.32 Matching amplifies the matched pattern due to automatic gain control. See terms I and J in the equation.
    || Substrate of resonance. Match (in phase) of BU and TD input patterns AMPLIFIES matched pattern due to automatic gain control by shunting terms. J = sum[i: Ji], I = sum[i: Ii], θi = (Ii + Ji)/(I + J)
    xi = (B + C)*(I + J)/(A + I + J)*[θi -C/(B + C)]
    Need top-down expectations to be MODULATORY.
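    A minimal sketch (Python/numpy; illustrative parameters) of this match/mismatch asymmetry: in-phase bottom-up (I) and top-down (J) patterns raise both the shunting gain term (I + J) and the peak reflectances, while out-of-phase patterns flatten theta toward the adaptation level:
      import numpy as np
      n, A, C = 5, 1.0, 1.0
      B = (n - 1) * C
      def equilibrium(Ii, Ji):
          T = Ii + Ji                          # combined bottom-up and top-down input
          theta = T / T.sum()
          return (B + C) * T.sum() / (A + T.sum()) * (theta - C / (B + C))
      bu = np.array([10.0, 0.0, 10.0, 0.0, 0.0])
      print(equilibrium(bu, np.zeros(n)))      # bottom-up alone
      print(equilibrium(bu, bu))               # match (in phase): peaks amplified
      print(equilibrium(bu, np.array([0.0, 10.0, 0.0, 10.0, 0.0])))  # mismatch: flattened, suppressed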
  • image p080fig02.33 An opposite-attracts rule during the development of intracellular connections can lead to a mature network that realizes informational noise suppression.
    || How do noise suppression parameters arise? Symmetry-breaking during morphogenesis? Opposites attract rule.
    Intracellular parameters C/B = 1/(n - 1) <-> Intercellular parameters
    Predicts that:
    • Intracellular excitatory and inhibitory saturation points can control the growth during development of:
    • Intercellular excitatory and inhibitory connections.
  • image p080fig02.34 How to achieve informational noise suppression in a network with multiple parallel processing channels.
    || Symmetry-breaking: dynamics and anatomy.
    Dynamics:
    • excitatory range is amplified
    • inhibitory range is compressed
    Anatomy:
    • narrow on-center
    • broad off-surround
    Noise suppression: attenuates uniform patterns
    Contour direction: enhances pattern gradients
  • image p081fig02.35 The equilibrium activities of a shunting network with Gaussian on-center off-surround kernels are sensitive to the ratio-contrasts of the input patterns that they process. The terms in the denominator of the equilibrium activities accomplish this using the shunting on-center and off-surround terms.
    || Ratio-contrast detector. flat versus [Gaussian Cki, flattened Gaussian? Eki]
    d[dt: xi] = -A*xi + (B - xi)*sum[k=1 to n: Ik*Cki] - (xi + D)*sum[k=1 to n: Ik*Eki]
    Cki = C*e^(-μ*(k - i)^2), Eki = E*e^(-μ*(k - i)^2)
    At equilibrium: xi = I*sum[k=1 to n: θk*Fki] / (A + I*sum[k=1 to n: θk*Gki])
    Fki = B*Cki - D*Eki (weighted Difference Of Gaussians, DOG)
    Gki = Cki + Eki (Sum Of Gaussians, SOG)
    • Reflectance processing
    • Contrast normalization
    • Discount illuminant
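    A minimal sketch (Python/numpy; kernel sizes and gains are illustrative assumptions) of the ratio-contrast equilibrium above, applied to a luminance step: responses peak at the contour and are muted in the uniform interiors:
      import numpy as np
      n, A, B, D = 50, 1.0, 5.0, 5.0
      i = np.arange(n)
      dist2 = (i[:, None] - i[None, :]) ** 2
      Cki = 1.0 * np.exp(-dist2 / 4.0)       # narrow on-center Gaussian
      Eki = 0.5 * np.exp(-dist2 / 64.0)      # broad off-surround Gaussian
      F = B * Cki - D * Eki                  # weighted difference of Gaussians
      G = Cki + Eki                          # sum of Gaussians
      step = np.where(i < n // 2, 1.0, 3.0)  # a luminance step
      theta, I = step / step.sum(), 100.0
      x = I * (theta @ F) / (A + I * (theta @ G))
      print(np.round(x, 3))                  # largest responses straddle the step; interiors suppressed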
  • image p081fig02.36 Informational noise suppression in networks with Gaussian on-center and off-surround kernels makes them function as contour detectors that are sensitive to ratio-contrast.
    || Noise suppression and contour detection.
    If B*sum[k=1 to n: Cki] <= D*sum[k=1 to n: Eki] then:
    • uniform patterns are suppressed
    • contrasts are selectively enhanced
    • contours are detected
    Ii vs i, xi vs i
    Responses are selective to [REFLECTANCE, SPATIAL SCALE], eg color [feature, surface] contours.
  • image p082fig02.37 My models begin with behavioral data, since brains are designed to achieve behavioral success. The text explains how models evolve in stages, through a process of successive refinements, or unlumpings. These unlumpings together carry out a kind of conceptual evolution, leading to models that can explain and predict ever larger psychological and neurobiological databases.
    || Modelling method and cycle.
    Behavioral data -(art of modeling)-> Design principles <- Neural data <-(brain predictions)- Mathematical model and analysis -(behavioral predictions)-> Behavioural data
    Operationalizes "proper level of abstraction"
    Operationalizes that you cannot "derive a brain" in one step.
  • image p085fig02.38 Our models have been used in many large-scale applications to engineering and technology. Linking brain to behavior explains how brain mechanisms give rise to psychological functions, and do so autonomously. The combination of mechanism, function, and autonomy helps to explain their value in helping to solve outstanding problems in technology.
    || Modeling method and cycle.
    Behavioral data -(art of modeling)-> Design principles <- Neural data <-(brain predictions)- Mathematical model and analysis -(behavioral predictions)-> Behavioural data
    Technology: Mathematical model and analysis <-> Technological applications
    At every stage, spin off new model designs and mechanisms to technologists who need autonomous intelligent applications.
  • image p087fig03.01 A macrocircuit of key visual processes (in green) and the cortical areas in which they primarily occur (in red), from the retina to the Prefrontal Cortex (PFC), including both the What and Where cortical streams. The [bottom-up, horizontal, and top-down] interactions help each of these processes to overcome computationally complementary processing deficiencies that they would experience without them, and also to read out top-down expectations that help to stabilize learning while they focus attention on salient objects and positions.
    || Emerging unified theory of visual intelligence. [What, Where] streams. Bottom-up and top-down interactions overcome COMPLEMENTARY processing deficiencies.
  • image p089fig03.02 What do you think lies under the two grey disks? (on a checkers board)
    || p089c1h0.55 "... As your eye traverses the entire circular boundary (Howell: of a grey disk on a checkerboard), the contrast keeps flipping between light-to-dark and dark-to-light. Despite these contrast reversals, we perceive a single continuous boundary surrounding the gray disk. ...".
  • image p090fig03.03 Kanizsa square and reverse-contrast Kanizsa square percepts. The spatial arrangement of pac-men, lines, and relative contrasts determines the perceived brightness of the squares, and even whether they exhibit no brightness difference from their backgrounds, as in (b). These factors also determine whether pac-men will appear to be amodally completed behind the squares, and how far behind them.
    || p089c2h0.65 "...
    a) The percept of the square that abuts the pac-men is a visual illusion that is called the Kanizsa square. The enhanced brightness of the square is also an illusion.
    c) shows that these boundaries can be induced by either collinear edges or perpendicular line ends, and that both kinds of inducers cooperate to generate an even stronger boundary.
    d) if the perpendicular lines cross the positions of the illusory contours, then they can inhibit the strength of these contours. ..."
  • image p091fig03.04 A cross-section of the eye, and a top-down view of the retina, showing how the blind spot and retinal veins can occlude the registration of light signals at their positions on the retina.
    || Eye: [optic nerve, ciliary body, iris, lens, pupil, cornea, sclera, choroid, retina]. Human retina: [fovea, blind spot, optic nerve]. see also cross-section of retinal layer.
  • image p092fig03.05 A cross-section of the retinal layer. Note that light stimuli need to go through all retinal layers before they reach the photoreceptor layer at which the light signals are registered.
    || light stimuli ->
    retinal layer / cellular composition:
    inner limiting membrane
    retinal nerve fibre / ganglion nerve fibres
    ganglion cell / ganglion
    inner plexiform / amacrine
    inner nuclear / horizontal
    outer plexiform
    outer limiting membrane
    photoreceptor / rod
    photoreceptor / cone
    retinal pigment epithelium
    <- signal transduction. http://brain.oxfordjournals.org/content/early/2011/01/20/brain.awq346
  • image p093fig03.06 Every line is an illusion because regions of the line that are occluded by the blind spot or retinal veins are completed at higher levels of brain processing by boundary completion and surface filling-in.
    || Every line is an illusion!
    Boundary completion: Which boundaries to connect?
    Surface filling-in: What color and brightness do we see?
  • image p094fig03.07 The processes of boundary completion and surface filling-in are computationally complementary.
    ||
    Boundary completion / Surface filling-in
    outward / inward
    oriented / unoriented
    insensitive to direction of contrast / sensitive to direction-of-contrast
  • image p095fig03.08 Computer simulation of a Kanizsa square percept. See the text for details.
    || p094c2h0.2 "...
    b) shows the feature contours that are induced just inside the pac-man boundaries.
    c) feature contours fill-in within the square boundary
    d) create a percept of enhanced brightness throughout the square surface ..."
  • image p095fig03.09 Simulation of a reverse-contrast Kanizsa square percept. See the text for details.
    || p094c2h0.5 "...
    b) whereas bright feature contours are induced just inside the boundaries of the two black pac-men at the bottom of the figure, dark feature contours are induced inside the boundaries of the two white pac-men at the top of the figure
    c) the square boundary is recognized
    d) Because these dark and bright feature contours are approximately balanced, the filled-in surface color is indistinguishable from the filled-in surface color outside of the square, ... but [the square boundary is] not seen ..."
  • image p096fig03.10 The visual illusion of neon color spreading. Neither the square nor the blue color that are perceived within it are in the image that defines a neon color display. The display consists only of black and blue arcs.
    ||
  • image p096fig03.11 Another example of neon color spreading. The image is composed of black and blue crosses. See the text for details.
    || Howell: note the appearance of illusory red squares
  • image p100fig03.13 The Ehrenstein percept in the left panel is significantly weakened as the orientations of the lines that induce it deviate from being perpendicular to the illusory circle.
    ||
  • image p100fig03.14 Boundaries are completed with the orientations that receive the largest total amount of evidence, or support. Some can form in the locally preferred orientations that are perpendicular to the inducing lines, while others can form through orientations that are not locally preferred, thus showing that there is initially a fuzzy band of almost perpendicular initial grouping orientations at the end of each line.
    || Perpendicular induction at line ends wrt [circular, square] boundaries
    line ends: local / global
    perpendicular, crisp: preferred / preferred
    NOT perpendicular, fuzzy: unpreferred / preferred
  • image p100fig03.15 A fuzzy band of possible initial grouping orientations allows grouping to get started. Cooperative-competitive feedback via a hierarchical resolution of uncertainty chooses a sharp final grouping that has the most evidence to support it.
    || before choice: transient; after choice: equilibrium
  • image p102fig03.16 T
  • image p102fig03.17 The relative positions of the squares give rise to a percept of three regions. In the middle region, emergent diagonal groupings form, despite the fact that all the orientations in the image are verticals and horizontals.
    ||
  • image p103fig03.18 Computer simulations in [b, d, f, h] of groupings in response to different spatial arrangements in [a, c, e, g] of inducers that are composed of short vertical boundaries. Note the emergent horizontal groupings in [d, f, h] and the diagonal groupings in h, despite the fact that all its inducers have vertical orientations.
    ||
  • image p103fig03.19 As in Figure 3.18, emergent groupings can form whose orientations differ from those of the inducing stimuli.
    || That's how multiple orientations can induce boundary completion of an object. [diagonal, perpendicular, parallel]
  • image p104fig03.20 Sean Williams: how boundaries can form
    ||
  • image p104fig03.21 Four examples of how emergent boundaries can form in response to different kinds of images. These examples show how boundary webs can shape themselves to textures, as in (c), and shading, as in (d), in addition to lines, as in (a). In all these cases, the boundaries are invisible, but reveal themselves by supporting filling-in of surface brightness and color within their form-sensitive webs.
    ||
  • image p105fig03.22 Depth-selective boundary representations capture brightness and colors in surface filling-in domains. See the text for details.
    || 3D vision and figure-ground separation. multiple-scale, depth-selective boundary webs. refer to Figure 3.21(d)
    depth increasing ↓: boundaries / surfaces
    BC input / surface capture!
    FC input
  • image p105fig03.23 The pointillist painting A Sunday on la Grande Jatte by Georges Seurat illustrates how we group together both large-scale coherence among the pixels of the painting, as well as forming small groupings around the individual dabs of color.
    ||
  • image p106fig03.24 In response to the Synthetic Aperture Radar image (upper left corner), a shunting on-center off-surround network "discounts the illuminant" and thereby normalizes cell activities to compute feature contours, without causing saturation (upper right corner). Multiple-scale boundaries form in response to spatially coherent activities in the feature contours (lower left corner) and create the webs, or containers, into which the feature contours fill-in the final surface representations (lower right corner).
    || Do these ideas work on hard problems? SAR!
    input image -> feature contours -> boundary contours -> filled-in surface
    Synthetic Aperture Radar: sees through weather; 5 orders of magnitude of power in radar return; discounting the illuminant
    • normalizes the image: preserves RELATIVE activities without SATURATION
    • shows individual PIXELS
    boundaries complete between regions where normalized feature contrasts change; filling-in averages brightnesses within boundary compartments
  • image p107fig03.25 The Roofs of Collioure by Matisse. See the text for details
    || p107c1h0.6 "... [Matisse] showed how patches of pure color, when laid down properly on a canvas, could be grouped by the brain into emergent boundaries, without the intervention of visible outlines. ... The trick was that these emergent boundaries, being invisible, or amodal, did not darken the colors in the surface representations. In this sense, Matisse intuitively realized that "all boundaries are invisible" through the masterful way in which he arranged his colors on canvas to generate boundaries that could support compelling surface representations. ..."
  • image p107fig03.26 How "drawing directly in color" leads to colored surface representations. Amodal boundary webs control the filling-in of color within these surface representations. See the text for details.
    || color patches on canvas -> [surface color and form, Amodal boundary web]. Amodal boundary web -> surface color and form.
  • image p108fig03.27 Matisse
  • image p108fig03.28 The watercolor illusion of Baingio Pinna 1987 can be explained using spatial competition between like-oriented boundary signals. This occurs at what I have called the First Competitive Stage. This is one stage in the brain ...
  • image p109fig03.29 The 3D percepts that are generated by chiaroscuro and trompe l'oeil ...
  • image p109fig03.30 The triptych of Jo Baer, called Primary Light Group: Red, Green, and Blue (1964-1965), generates watercolor illusion percepts which, when displayed side by side in a museum, create a striking impression.
  • image p110fig03.31 Henry Hensche
  • image p110fig03.32 Claude Monet
  • image p112fig03.33 Various ways that spatial gradients in boundary webs can cause self-luminous percepts. See the text for details.
    || Boundary web gradient can cause self luminosity. Similar to watercolor illusion. Gloss by attached highlight (Beck, Prazdny 1981), glare. (Bressan 2001) Double brilliant illusion, (Grossberg, Hong 2004) simulation. p111c2h0.5 "... This effect may be explained as the result of the boundary webs that are generated in response to the luminance gradients and how they control the filling-in of lightness within themselves and abutting regions. ... Due to the mutually inhibitory interactions across the boundaries that comprise these boundary webs, more lightness can spread into the central square as the steepness of the boundary gradients increases. ...".
  • image p113fig03.35 The Highest Luminance As White (HLAW) rule of (Hans Wallach 1948) works in some cases (top row) but not others (bottom row).
  • image p113fig03.36 The Blurred Highest Luminance As White (BHLAW) rule that I developed with my PhD student, Simon Hong, works in cases where the rule of Hans Wallach fails, as can be seen by comparing the simulation in Figure 3.35 with the one in this figure.
    || Blurred Highest Luminance As White (BHLAW) rule (Grossberg, Hong 2004, 2006). Spatial integration (blurring) adds spatial context to lightness perception.
  • image p114fig03.37 How the Blurred Highest Luminance as White rule sometimes normalizes the highest luminance to white (left panel) but at other times normalizes it to be self-luminous (right panel). See the text for details.
    || perceived reflectance vs cross-section of visual field. [white level, anchored lightness, self-luminous*, BHLAW]. *self-luminous only when conditions are right.
  • image p114fig03.38 Four color-field spray paintings of Jules Olitski. The text explains why they generate surface percepts with such ambiguous depth.
    || Jules and his friends (1967), Lysander-1 (1970), Instant Loveland (1968), Comprehensive Dream (1965). p114c2h0.4 "... it is impossible to visually perceive discrete colored units within the boundary webs in Olitski's ...
  • image p115fig03.39 Two of Gene Davis
  • image p116fig03.40 A combination of T-junctions and perspective cues can create a strong percept of depth in response to 2D images, with a famous example being Leonardo da Vinci
  • image p117fig03.41 End gaps, or small breaks or weakenings of boundaries, can form where a stronger boundary abuts a weaker, like-oriented, boundary, as occurs where black boundaries touch red boundaries in the neon color spreading image of Figure 3.11.
    || Boundary contours - lower contrast boundary signals are weakened. feature contours- no inhibition, feature signals survive and spread. MP -> [BCS, FCS]. BCS -> FCS.
  • image p117fig03.42 Two paintings by Frank Stella. See the text for details.
    || Firuzabad (top row) ... and Khurasan Gate (variation) (bottom row). p117c1h0.75 "... The luminance and color structure within a painting affects how it groups and stratifies the figures within it. These processes, in turn, affect the formation of attentional shrouds that organize how spatial attention is allocated as we view them. ..." "... Stella wrote that Firuzabad is a good example of looking for stability and trying to create as much instability as possible. ..."
  • image p120fig03.43 Four paintings by Monet of the Rouen cathedral under different lighting conditions (top row) and their monochromatic versions (bottom row). See the text for details.
    || p119c2h0.25 "... Monet uses nearby colors that are nearly equiluminant, and sharp, high-contrast luminance defined edges are sparse. He hereby creates weaker boundary signals within and between the parts of many forms, and stronger boundary signals between the forms. This combination facilitates color spreading within the forms and better separation of brightness and color differences between forms. ... The grayscale versions of these paintings demonstrate the near equiluminance of the brushstrokes within forms, and places in which brightness and color differences significantly influence the groupings that differentiate between forms, including the differentiation between the cathedral and the sky. ..."
  • image p120fig03.44 The Rouen cathedral at sunset generates very different boundary webs than it does in full sunlight, as illustrated by Figure 3.45.
    || Rouen Cathedral at sunset (Monet 1892-1894).
    • Lighting almost equiluminant
    • Most boundaries are thus caused by color differences, not luminance differences
    • Fine architectural details are obscured, leading to...
    • Coarser and more uniform boundary webs, so...
    • Less depth in the painting.
  • image p121fig03.45 The Rouen cathedral in full sunlight.
    || Rouen Cathedral full sunlight (Monet 1892-1894).
    • Lighting is strongly non-uniform across most of the painting
    • Strong boundaries due to both luminance and color differences
    • Fine architectural details are much clearer, leading to...
    • Finer and more non-uniform boundary webs, so...
    • Much more detail and depth
  • image p121fig03.46 The Rouen cathedral in full sunlight contains T-Junctions that are not salient in the painting of it at sunset. These are among the painting
  • image p123fig04.01 A classical example of how boundaries are barriers to filling-in.
    || Combining stabilized images with filling-in (Krauskopf 1963, Yarbus 1967). Image: Stabilize these boundaries with suction cup attached to retina or electronic feedback circuit. Percept: A visible effect of an invisible cause!
  • image p124fig04.02 The vertical cusp of lesser and greater illuminance is the same in both images, but the one on the left prevents brightness from flowing around it by creating closed boundaries that tightly surround the cusp.
  • image p126fig04.03 A McCann Mondrian is an excellent display with which to illustrate how our brains discount the illuminant to compute the "real" colors of objects. See the text for details.
    || Color constancy: compute ratios. McCann Mondrian. Biological advantage: never see in bright light, eg tropical fish
    Discount the illuminantCompute lightness
    Different colors seen from the same spectrum
    ... similar to those seen in white light
    Physical basis: reflectance RATIOS!
  • image p128fig04.04 When a gradient of light illuminates a McCann Mondrian, there is a jump in the total light that is reflected at nearby positions where the reflectances of the patches change.
    || Compute reflectance changes at contours. Fill-in illuminant-discounted surface colors.
    left / right
    I + ε / I - ε
    A*(I + ε) / B*(I - ε)
    A*(I + ε)/(B*(I - ε)) - 1 ≈ A/B - 1 (for small ε)
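    A hedged numeric check of this ratio computation: with reflectances A = 0.8, B = 0.4 and illuminant values I + ε = 101, I - ε = 99, the luminance ratio is (0.8*101)/(0.4*99) ≈ 2.04 ≈ A/B = 2, nearly independent of the overall illumination level I.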
  • image p129fig04.05 Multiple-scale balanced competition chooses color contours where the reflectance of the patches change. These color contours discount the illuminant.
    || Compute reflectance changes at contours. Fill-in illuminant-discounted surface colors. Discount illuminant: compute color contours.
  • image p129fig04.06 Filling-in of color contours restores a surface percept with colors that substantially discount the illuminant.
    || Compute reflectance changes at contours. Fill-in illuminant-discounted surface colors. Fill-in surface color: hierarchical resolution of uncertainty.
  • image p130fig04.07 Simulation of brightness constancy under uniform illumination.
    || Simulation of brightness constancy (Grossberg & Todorovic 1988). Uniform illumination. [stimulus (S), feature (F), boundary (B), output]. B -> F -> S -> B: Veridical! Boundary peaks are spatially narrower than feature peaks.
  • image p131fig04.08 Simulation of brightness constancy under an illumination gradient. Note that the feature contour pattern (F) is the same in both cases, as is the boundary contour (B) pattern that is derived from it, and the final filled-in surface.
    || Simulation of brightness constancy. Discount the illuminant. [stimulus (S), feature (F), boundary (B), output]. B -> F -> S -> B: not veridical, but useful! Ratio-sensitive feature contours (F).
  • image p131fig04.09 Simulation of brightness contrast
    || Simulation of brightness contrast. [stimulus (S), feature (F), boundary (B), output].
  • image p132fig04.10 Simulation of brightness assimilation. Note how the equal steps on the left and right sides of the luminance profile are transformed into different brightness levels.
    || Simulation of brightness assimilation. [stimulus (S), feature (F), boundary (B), output].
  • image p132fig04.11 Simulations of a double step (left panel) and the Craik-O'Brien-Cornsweet Effect (right panel). ...
  • image p133fig04.12 Simulation of the 2D COCE.
    || (Todorovic, Grossberg 1988). p132c2h0.6 "... 2D Craik-O'Brien-Cornsweet Effect ..."
  • image p134fig04.13 Contrast constancy shows how the relative luminances in a picture that is viewed in an illumination gradient can even be reversed in order to restore the correct reflectances when the illuminant is discounted.
  • image p134fig04.14 The kinds of displays that Michael Paradiso and Ken Nakayama used to catch filling-in "in the act" and which Karl Arrington then simulated using the Grossberg and Todorovic 1988 model.
    || Experiments on filling-in. Catching "filling-in" in the act (Paradiso, Nakayama 1991). (Arrington 1994 Vision Research 34, 3371-3387) simulated these data using the model of Grossberg and Todorovic 1988.
  • image p138fig04.15 Simple cells are oriented contrast detectors, not edge detectors.
    || From oriented filtering to grouping and boundary completion (Hubel, Wiesel 1968). Oriented receptive fields: SIMPLE CELLS. Sensitive to: orientation, [amount, direction] of contrast, spatial scale. Oriented local contrast detectors, not edge detectors!
  • image p139fig04.16 The simplest way to realize an odd simple cell receptive field and firing threshold.
    || "Simplest" simple cell model. need more complexity for processing natural scenes. Difference-of-Gaussian or Gabor filter (J. Daugman, D. Pollen...). Output signal vs cell activity. Threshold linear signal, half-wave rectification.
  • image p140fig04.17 Complex cells pool inputs from simple cells that are sensitive to opposite contrast polarities. Complex cells hereby become contrast invariant, and can respond to contrasts of either polarity.
    || Complex cells: pool signals from like-oriented simple cells of opposite contrast polarity at the same position. They are "insensitive to contrast polarity". Half-wave rectification of inputs from simple cells.
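    A minimal companion sketch (Python/numpy; same illustrative kernel as above) of a complex cell that pools half-wave rectified signals from like-oriented simple cells of opposite contrast polarity, and so responds to either polarity:
      import numpy as np
      s = np.linspace(-3, 3, 31)
      odd_kernel = np.sin(2.0 * s) * np.exp(-s ** 2)
      def complex_cell(luminance):
          drive = np.correlate(luminance, odd_kernel, mode="valid")
          return np.maximum(drive, 0.0) + np.maximum(-drive, 0.0)  # pool both polarities
      edge = np.r_[np.zeros(40), np.ones(40)]
      print(complex_cell(edge).max())          # responds to dark-to-light...
      print(complex_cell(1.0 - edge).max())    # ...and equally to light-to-dark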
  • image p141fig04.18 The images formed on the two retinas in response to a single object in the world are displaced by different amounts with respect to their foveas. This binocular disparity is a powerful cue for determining the depth of the object from an observer.
    || Binocular Disparity. Binocular disparities are used in the brain to reconstruct depth from 2D retinal inputs, for relatively near objects.
  • image p141fig04.19 A laminar cortical circuit for computing binocular disparities in layer 3B of V1 at binocular simple cells. These cells add positionally disparate inputs from like-polarity monocular simple cells (layer 4 of V1). Binocular simple cells at each position that are sensitive to opposite polarities then add their outputs at complex cells in layer 2/3. Chapter 10 will explain how these laminar circuits work in greater detail.
    || Laminar cortical circuit for complex cells. [left, right] eye.
    V1 layer: description
    2/3A: complex cells
    3B: binocular simple cells
    4: monocular simple cells
  • image p142fig04.20 A Glass pattern and a reverse-contrast Glass pattern give rise to different boundary groupings because simple cells can only pool signals from like-polarity visual features. See the text for details.
  • image p143fig04.21 Oriented simple cells can respond at the ends of thick enough bar ends, but not at the ends of thin enough lines. See the text for an explanation of why this is true, and its implications for visual system design.
    || Hierarchical resolution of uncertainty. For a given field size. Different responses occur at bar ends and line ends. For a thin line no detector perpendicular to line end can respond enough to close the boundary there. Network activity.
  • image p144fig04.22 Computer simulation of how simple and complex cells respond to the end of a line (gray region) that is thin enough relative to the receptive field size (thick dashed region in the left panel). These cells cannot detect the line end, as indicated by the lack of responses there in the left panel (oriented short lines denote the cells
  • image p145fig04.23 If end gaps were not closed by end cuts, then color would flow out of every line end!
    || A perceptual disaster in the feature contour system. feature contour, line boundary. input -> [boundary, surface]. boundary -> surface. Color would flow out of every line end! as it does during neon color spreading.
  • image p145fig04.24 A brain
  • image p146fig04.25 Networks of simple, complex, and hypercomplex cells can create end cuts as an example of hierarchical resolution of uncertainty. See the text for details.
    || How are end cuts created? (Grossberg 1984) Two stages of short-range competition. 1st stage: Simple cells -> complex cells -> hypercomplex - endstopped complex. First competitive stage- across position, same orientation; Second competitive stage- same position, across orientation. -> cooperation.
  • image p148fig04.26 End cuts are formed during neon color spreading in the same way that they are formed at line ends.
    || End cut during neon color spreading.
    FIRST competitive stage: within orientation, across position
    SECOND competitive stage: across orientation, within position
    to generate end cuts.
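    A toy sketch (Python/numpy; 1D positions along a thin line, one vertical and one perpendicular channel, and a tonic opponent level are all simplifying assumptions) of how the two competitive stages can create an end cut just beyond a line end:
      import numpy as np
      n = 20
      vertical = np.where(np.arange(n) < 10, 1.0, 0.0)   # vertical boundary signals along the line body
      # First competitive stage: across position, within orientation (off-surround along the line axis)
      surround = np.convolve(vertical, np.array([0.4, 0.0, 0.4]), mode="same")
      stage1 = vertical - surround                       # signed; inhibited just past the line end
      # Second competitive stage: within position, across orientation (push-pull with a tonic level),
      # so suppressing vertical disinhibits the perpendicular channel there
      perpendicular = np.maximum(0.1 - stage1, 0.0)
      print(np.round(stage1, 2))
      print(np.round(perpendicular, 2))                  # peak at position 10: the end cut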
  • image p149fig04.27 Bipole cells can form boundaries that interpolate end cuts, and use their cooperative-competitive interactions to choose the boundary groupings that have the most support from them.
    || Bipole cells: boundary completion. long-range cooperation & short-range inhibition: complete winning boundary groupings and suppress weaker boundaries.
  • image p150fig04.28 Bipole cells have two branches (A and B), or poles, in their receptive fields. They help to carry out long-range boundary completion.
    || Bipole property. Boundary completion via long-range cooperation. Completing boundaries inwardly between pairs or great numbers of inducers in an oriented way. fuzzy "AND" gate.
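    A minimal sketch (Python) of the bipole property as a fuzzy "AND" gate: the cell fires only when both receptive-field branches receive enough collinear support, so boundaries complete inwardly between pairs of inducers; the min() combination and threshold are illustrative assumptions:
      def bipole(pole_A, pole_B, threshold=0.5):
          drive = min(pole_A, pole_B)        # fuzzy AND: both poles must be supported
          return max(drive - threshold, 0.0)
      print(bipole(0.9, 0.8))   # inducers on both sides -> completion signal
      print(bipole(0.9, 0.0))   # one-sided support -> no outward completion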
  • image p151fig04.29 Experimental evidence of bipole cells in cortical area V2 was reported by von der Heydt, Peterhans, and Baumgartner (1984).
    || Bipoles: first neurophysiological evidence (V2) (von der Heydt, Peterhans, Baumgartner 1984, Peterhans, von der Heydt 1988). (Grossberg 1984) prediction.
    Stimulus layouts (S = stimulus, * = probe location of the recorded V2 cell) and responses:
    ...(S)*... : YES
    ...*...(S) : NO
    (S)...*... : NO
    (S)...*...(S) : YES
    (S)...*... (more contrast) : NO
    (S)...*.....(S) : YES
    Evidence for receptive field.
  • image p151fig04.30 Anatomical evidence for long-range horizontal connections has also been reported, as illustrated by the example above from (Bosking etal 1997).
    || Anatomy: horizontal connections (V1) (Bosking etal 1997). Tree shrew; plot axes in degrees.
  • image p152fig04.31 The predicted bipole cell receptive field (upper left corner) has been supported by both neurophysiological data and psychophysical data, and used in various forms by many modelers. See the text for details.
    || Bipoles through the ages. (Grossberg 1984; Grossberg, Mingolla 1985). (Field, Hayes, Hess 1993) "association field". (Heitger, von der Heydt 1993). (Williams, Jacobs 1997). cf. "relatability" geometric constraints on which contours get to group (Kellman & Shipley 1991). Also "tensor voting" (Ullman, Zucker, Mumford, Guy, Medioni, ...).
  • image p153fig04.32 The double filter network embodies simple, complex, and hypercomplex (or endstopped complex) cells. It feeds into a network of bipole cells that can complete boundaries when it properly interacts with the double filter.
    || Double filter and grouping network. Cells: simple -> complex -> hypercomplex (endstopping) -> bipole.
    Grouping network: bipole cells
    Double filter: simple cells -> complex cells -> hypercomplex cells (endstopping)
  • image p156fig04.33 A tripartite texture (top row) and two bipartite textures (bottom row) that illustrate how emergent boundary groupings can segregate textured regions from one another.
  • image p157fig04.34 Some textures that were simulated with mixed success by the complex channels model. In particular, the model gets the wrong answer for the textures in (g) and (i). The Boundary Contour System model of Figure 4.32, which includes both a double filter and a bipole grouping network, simulates the observed results.
  • image p159fig04.35 Spatial impenetrability prevents grouping between the pac-men figures in the left figure, but not in the figure on the right.
    || p158c2h0.75 "... In the image shown in the left panel, the horizontal boundaries of the background squares interfere with vertical boundary completion by vertically-oriented bipole cells, again by spatial impenetrability. In contrast, the vertical boundaries of the background squares are collinear with the vertical pac-man inducers, thereby supporting formation of the square boundaries. Finer aspects of these percepts, such as why the square ... (right panel) appears to lie in front of four partially occluded circular discs, as regularly occurs when the Kanizsa square can form (eg Figure 3.3), can be understood using FACADE theory mechanisms that will be shown below to explain many figure-ground percepts using natural extensions to the three dimensional world of the boundary and surface mechanisms that we have already discussed. ..."
  • image p159fig04.36 Graffiti art by Banksy exploits properties of amodal boundary completion and spatial impenetrability.
    || p159c1h0.75 perceptual psychologist Nava Rubin "... When the wall is smooth, Banksy leaves the regions previously covered by stencil unpainted, relying on observers ...
  • image p161fig04.37 Kanizsa squares that form either collinearly to their inducers (left panel) or perpendicular to them (right panel) confirm predictions of the BCS boundary completion model.
    || Analog-sensitive boundary completion. contour strength vs Kanizsa square image. Increases with "support ratio" (Shipley, Kellman 1992). Inverted-U (Lesher, Mingolla 1993; cf Soriano, Spillmann, Bach 1994)(shifted gratings). p370h0.6 BCS = Boundary Contour System, FCS = Feature Contour System. p161c1h0.85 "... As predicted by the BCS, they found an Inverted-U in contour strength as a function of line density. ... This effect may be explained by the action of the short-range competition that occurs before the stage of long-range cooperative grouping by bipole cells (Figure 4.32). It is thus another example of the balance between cooperative and competitive mechanisms. ..."
  • image p162fig04.38 How long-range cooperation among bipole cells and short-range competition by hypercomplex cells work together to generate the inverted-U in boundary strength that is found in the data of Figure 4.37 (right panel).
    || Cooperation and competition during grouping.
    few lines: wide spacing; inputs outside spatial range of competition; more inputs cause higher bipole activity
    more lines: narrower spacing; slightly weakens net input to bipoles from each inducer
    increasing line density: causes inhibition to reduce net total input to bipoles
  • image p163fig04.39 A schematic of the LAMINART model that explains key aspects of laminar visual cortical anatomy and dynamics. LGN -> V1 [6, 4, 2/3] -> V2 [6, 4, 2/3]
    || p163c1h0.6 "... The first article about laminar computing ... proposed how the laminar cortical model could process 2D pictures using bottom-up filtering and horizontal bipole grouping interactions (Grossberg, Mingolla, Ross 1997). In 1999, I was able to extend the model to also include top-down circuits for expectation and attention (Grossberg 1999)(right panel). Such a synthesis of laminar bottom-up, horizontal, and top-down circuits is characteristic of the cerebral cortex (left panel). I called it LAMINART because it began to show how properties of Adaptive Resonance Theory, or ART, notably the ART prediction about how top-down expectations and attention work, are realized by identified cortical cells and circuits. You can immediately see from the schematic laminar circuit diagram ... (right panel) that circuits in V2 seem to repeat circuits in V1, albeit with a larger spatial scale, despite the fact that V1 and V2 carry out different functions. How this anatomical similarity can coexist with functional diversity will be clarified in subsequent sections and chapters. It enables different kinds of biological intelligence to communicate seamlessly while carrying out their different psychological functions. ..."
  • image p164fig04.40 The Koffka-Benussi ring. See the text for details.
    || p164c2h0.25 "... [left image] The luminance of the ring is intermediate between the luminances of the two background regions. Its perceived brightness is also between the brightnesses of the two background regions, and appears to be uniform throughout. The right image differs from the left only in that a vertical line divides the two halves of the ring where it intersects the two halves in the background. Although the luminance of the ring is still uniform throughout, the two halves of the ring now have noticeably different brightnesses, with the left half of the ring looking darker than the right half. How can drawing a line have such a profound effect on the brightnesses of surface positions that are so far away from the line? ..."
  • image p165fig04.41 The Kanizsa-Minguzzi ring. See the text for details.
    || p165c1h0.6 "... (left panel), the annulus is divided by two line segments into annular sectors of unequal area. Careful viewing shows that the smaller sector looks a little brighter than the larger one. (Kanizsa, Minguzzi 1986) noted that "this unexpected effect is not easily explained. In fact, it cannot be accounted for by any simple psychological mechanism such as lateral inhibition or frequency filtering. Furthermore, it does not seem obvious to invoke organizational factors, like figural belongingness or figure-ground articulation."". p165c2h0.35 "... (Grossberg, Todorovic 1988). Our main claim is that the two radial lines play two roles, one in the formation of boundaries with which to contain the filling-in process, and the other as a source of feature contour signals that are filled-in within the annular regions to create a surface brightness percept. ..."
  • image p166fig04.42 Computer simulation of Kanizsa-Minguzzi ring percept. See the text for details.
  • image p167fig04.43 (a) How bipole cells cause end cuts. (b) The Necker cube generates a bistable percept of two 3D parallelepipeds. (c) Focusing spatial attention on one of the disks makes it look both nearer and darker, as (Tse 1995) noted and (Grossberg, Yazdanbakhsh 1995) explained.
    || T-junction sensitivity. image -> bipole cells -> boundary. (+) long-range cooperation, (-) short-range competition.
  • image p168fig04.44 Macrocircuit of the main boundary and surface formation stages that take place from the lateral geniculate nucleus, or LGN, through cortical areas [V1, V2, V4]. See the text for details.
    ||
  • image p168fig04.45 How ON and OFF feature contour (FC) activities give rise to filled-in surface regions when they are adjacent to a like-oriented boundary, but not otherwise.
  • image p170fig04.46 Surface regions can fill-in using feature contour inputs (+ and - signs) if they are adjacent to, and collinear with, boundary contour inputs (solid line), as in (a), but not otherwise, as in (b).
  • image p170fig04.47 A double-opponent network processes output signals from opponent ON and OFF Filling-In DOmains, or FIDOs.
    || OFF FIDO -> shunting networks -> ON FIDO -> shunting networks -> opponent interaction -> FIDO outputs
  • image p171fig04.48 How closed boundaries contain filling-in of feature contour signals, whereas open boundaries allow color to spread to both sides of the boundary.
    || Before filling-in: boundary contour, illuminant-discounted feature contour; After filling-in: no gap, gap
  • image p171fig04.49 An example of DaVinci stereopsis in which the left eye sees more of the wall between A and C than the right eye does. The region between B and C is seen only by the left eye because the nearer wall between C and D occludes it from the right eye view.
  • image p173fig04.50 This figure illustrates how a closed boundary can be formed in a prescribed depth due to addition of binocular and monocular boundaries, but not at other depths.
    || How are closed 3D boundaries formed? V1 Binocular, V2 boundary, V2 surface; Prediction: monocular and horizontal boundaries are added to ALL binocular boundaries along the line of sight. Regions that are surrounded by a CLOSED boundary can depth-selectively contain filling-in of lightness and colored signals.
  • image p174fig04.51 The same feedback circuit that ensures complementary consistency between boundaries and surfaces also, automatically, initiates figure-ground separation! See the text for details.
    || before feedback: [V1 -> V2 pale stripe -> V2 thin stripe, "attention pointers" (Cavanagh etal 2010)]; after feedback: [V1 + V2 thin stripe] -> V2 pale stripe via contrast-sensitive [excitation, inhibition] for depths [1, 2] -> object recognition
  • image p174fig04.52 An example of how the 3D LAMINART model can transform the two monocular images of the random dot stereogream in the top row into the three depth-separated surfaces representations in the bottom row.
    || Stereogram surface percepts: surface lightnesses are segregated in depth (Fang and Grossberg 2009). [left, right] inputs, [far, fixation, near] planes. Contrast with algorithms that just compute disparity matches and let computer code build the surface, eg (Marr, Poggio, etal 1974).
  • image p176fig04.53 The on-center off-surround network within position and across depth helps to explain why brighter Kanizsa squares look closer.
    || inhibition vs. depth. p176c1h0.25 "... to qualitatively understand how this example of proximity-luminance covariance works. It follows directly from the boundary pruning by surface contour feedback signals (Figure 4.51) that achieves complementary consistency and initiates figure-ground perception. ...". p176c1h0.45 "... these inhibitory signals are part of an off-surround network whose strength decreases as the depth difference increases between the surface that generates the signal and its recipient boundaries. ...". p176c1h0.8 "... Within FACADE theory, the perceived depth of a surface is controlled by the boundaries that act as its filling-in generators and barriers (Figure 3.22), since these boundaries select the depth-selective FIDOs within which filling-in can occur, and thereby achieve surface capture. These boundaries, in turn, are themselves strengthened after surface-to-boundary contour feedback eliminates redundant boundaries that cannot support successful filling-in (Figure 4.51). These surface contour feedback signals have precisely the properties that are needed to explain why brighter Kanizsa squares look closer! ..."
  • image p178fig04.54 Initial steps in figure-ground separation. See the text for details.
    ||
  • topLeft: repeats the image in Figure 1.3
    topRight: shows again the long-range cooperation and short-range competition that are controlled by the bipole grouping process (Figure 4.43a middle panel)
    bottomLeft: shows the end gaps that are caused by these bipole grouping mechanisms
    bottomRight: shows how surface filling-in is contained within the closed horizontal rectangular boundary, but spills out of the end gaps formed in the other two rectangles
  • image p178fig04.55 Amodal completion of boundaries and surfaces in V2.
    || Separated V2 boundaries: near, far (amodal boundary completion); Separated V2 surfaces: ?horizontal, vertical? (amodal surface filling-in).
  • image p179fig04.56 Final steps in generating a visible, figure-ground separated, 3D surface representation in V4 of the unoccluded parts of opaque surfaces.
    || Visible surface perception.
    Boundary enrichment: near / far / asymmetry between near & far
    V4: horizontal rectangle / horizontal & vertical rectangles / cannot use these (overlapping?) boundaries for occluded object recognition
    V2: horizontal rectangle / vertical rectangle / use these boundaries for occluded object recognition
    Visible surface filling-in: filling-in of entire vertical rectangle / partial filling-in of horizontal rectangle / visible percept of unoccluded [vertical] surface
  • image p181fig04.57 Percepts of unimodal and bistable transparency (top row) as well as of a flat 2D surface (bottom row, left column) can be induced just by changing the relative contrasts in an image with a fixed geometry.
    || X junction
  • image p182fig04.58 LAMINART model processing stage that are sufficient to explain many percepts of transparency, including those summarized in Figure 4.57.
    || [left, right] eye, [LGN, V1 [6, 4, 3B, 2/3 A], V2 [4, 2/3]], [mo, bi]nocular cart [simple, complex] cells, [excita, inhibi]tory cart [connection, cell]s.
  • image p186fig05.01 Humans and other autonomous adaptive intelligent agents need to be able to learn both many-to-one and one-to-many maps.
    || Learn many-to-one (compression, naming) and one-to-many (expert knowledge) maps
  • image p186fig05.02 Learning a many-to-one map from multiple visual fonts of a letter to the letter
  • image p186fig05.03 Many-to-one maps can learn a huge variety of kinds of predictive information.
    || Many-to-one map, two stage compression: IF-THEN rules: [symptom, test, treatment]s; length of stay in hospital
  • image p189fig05.04 The hippocampus is one of several brain regions that are important in learning and remembering about objects and events that we experience throughout life. The book will describe several hippocampal processes that contribute to this achievement in different ways.
    || hypothalamic nuclei, amygdala, hippocampus, cingulate gyrus, corpus callosum, thalamus
  • image p192fig05.05 ON and OFF cells in the LGN respond differently to the sides and ends of lines.
    || [ON, OFF]-center, [OFF, ON]-surround (respectively). OFF-center cells maximum response at line end (interior), ON-center cells maximum response along sides (exterior)
  • image p192fig05.06 Bottom-up and top-down circuits between the LGN and cortical area V1. The top-down circuits obey the ART Matching Rule for matching with bottom-up input patterns and focussing attention on expected critical features.
    || Model V1-LGN circuits, version [1, 2]. retina -> LGN relay cells -> interneurons -> cortex [simple, endstopped] cells -> cortex complex cells
  • image p193fig05.07 A more detailed description of the connections between retinal ganglion cells, the LGN, and V1.
    ||
  • image p193fig05.08 The patterns of LGN activation and inhibition on the sides and ends of a line without the top-down feedback (A) and with it (C). The top-down distribution of excitation (+) and inhibition (-) are shown in (B).
    ||
  • image p194fig05.09 A computer simulation of the percept (D) that is generated by feature contours (B) and boundary contours (C) in response to an Ehrenstein disk stimulus (A).
    ||
  • image p198fig05.10 A competitive learning circuit learns to transform distributed feature patterns into selective responses of recognition categories.
    || Competitive learning and Self-Organized Maps (SOMs). input patterns -> feature level (F1) -> adaptive filter (T=ZS) ->
  • image p199fig05.11 Instar learning enables a bottom-up adaptive filter to become selectively tuned to particular feature patterns. Such pattern learning needs adaptive weights that can either increase or decrease to match the featural activations that they filter.
    || Instar learning STM->LTM: need both increases and decreases in strength for the LTM pattern to learn the STM pattern
  • image p200fig05.12 The duality of the outstar and instar networks is evident when they are drawn as above.
    ||
  • image p200fig05.13 Instar and outstar learning are often used to learn the adaptive weights in the bottom-up filters and top-down expectations that occur in ART. The ART Matching Rule for object attention enables top-down expectations to select, amplify, and synchronize expected patterns of critical features, while suppressing unexpected features.
    || Expectations focus attention: feature pattern (STM), Bottom-Up adaptive filter (LTM), Category (STM), competition, Top-Down expectation (LTM); ART Matching Rule: STM before top-down matching, STM after top-down matching (attention!)
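    A minimal sketch (Python/numpy) of the ART Matching Rule's selection effect; the fuzzy-min match used here is the Fuzzy ART formalization, one common choice rather than the only one:
      import numpy as np
      bottom_up = np.array([0.9, 0.7, 0.0, 0.6, 0.0])     # STM feature pattern
      expectation = np.array([1.0, 1.0, 1.0, 0.0, 0.0])   # learned top-down prototype
      attended = np.minimum(bottom_up, expectation)       # expected features survive,
      print(attended)                                     # unexpected ones are suppressed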
  • image p200fig05.14 Outstar learning enables individual sampling cells to learn distributed spatial patterns of activation at the network of cells that they sample. Again, both increases and decreases in LTM traces must be possible to enable them to match the activity pattern at the sampled cells.
    || Outstar learning: need both increases and decreases in LTM traces to match the activity pattern at the sampled cells
  • image p201fig05.15 An outstar can learn an arbitrary spatial pattern of activation at its sampled nodes, or cells. The net pattern that is learned is a time average of all the patterns that are active at the sampled nodes when the sampling node is active.
    || Spatial learning pattern, outstar learning.
  • image p202fig05.16 In the simplest example of category learning, the category that receives the largest total input from the feature level is chosen, and drives learning in the adaptive weights that abut it. Learning in this "classifying vector", denoted by zi, makes this vector more parallel to the input vector from the feature level that is driving the learning (dashed red arrow).
    || Geometry of choice and learning
  • image p202fig05.17 This figure summarizes the simplest equations whereby the adaptive weights of a winning category learn the input pattern that drove it to win, or more generally a time-average of all the input patterns that succeeded in doing so.
    || Geometry of choice and learning, learning trains the closest LTM vector
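    A minimal sketch (Python/numpy; illustrative data and learning rate) of this choice-and-learn step: the winning category's LTM vector moves toward the input pattern that drove it to win, becoming more parallel to it:
      import numpy as np
      rng = np.random.default_rng(0)
      W = rng.random((3, 4))                    # 3 classifying LTM vectors, 4 features
      def learn(x, lr=0.2):
          winner = int(np.argmax(W @ x))        # category with largest total input wins
          W[winner] += lr * (x - W[winner])     # learning trains the closest LTM vector
          return winner
      x = np.array([1.0, 0.0, 0.0, 1.0])
      for _ in range(10):
          winner = learn(x)
      print(winner, np.round(W[winner], 2))     # winner's weight vector ~ parallel to x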
  • image p205fig05.18 How catastrophic forgetting can occur in a competitive learning or self-organizing map model due to basic properties of competition and associative learning.
    || Learning from pattern sequences, practicing a sequence of spatial patterns can recode all of them! When is learning stable? Input patterns cannot be too dense relative to the number of categories; Either: not to many distributed inputs relative to the number of categories, or not too many input clusters
  • image p207fig05.19 The ART hypothesis testing and learning cycle. See the text for details about how the attentional system and orienting system interact in order to incorporate learning of novel categories into the corpus of already learned categories without causing catastrophic forgetting.
    ||
  • image p211fig05.20 The PN and N200 event-related potentials are computationally complementary events that are computed within the attentional and orienting systems.
    || PN and N200 are complementary waves. PN [top-down, conditionable, specific] match; N200 [bottom-up, unconditionable, nonspecific] mismatch
  • image p211fig05.21 Sequences of P120, N200, and P300 event-related potentials occur during oddball learning EEG experiments under conditions that ART predicted should occur during sequences of mismatch, arousal, and STM reset events, respectively.
    || ERP support for mismatch-mediated reset: event-related potentials: human scalp potentials. ART predicted correlated sequences of P120-N200-P300 Event Related Potentials during oddball learning. P120 mismatch; N200 arousal/novelty; P300 STM reset. Confirmed in (Banquet and Grossberg 1987)
  • image p213fig05.22 Suppose that a very different exemplar activates a category than the one that originally learned how to do this.
    || By prior learning, X1 at F1 is coded at F2, Suppose that X2 incorrectly activates the same F2 code. How to correct the error? The problem occurs no matter how you define an "error"
  • image p213fig05.23 A category, symbol, or other highly compressed representation cannot determine whether an error has occurred.
    || Compression vs error correction. past vs present. Where is the knowledge that an error was made? Not at F2! The compressed code cannot tell the difference! X2 is at F1 when (green right triangle GRT) is at F2 defines the error. There is a mismatch between X1 and X2 at F1. How does the system know this?
  • image p214fig05.24 Learning of a top-down expectation must occur during bottom-up learning in the adaptive filter in order to be able to match the previously associated feature pattern with the one that is currently active.
    || Learning top-down expectations. When the code (green right triangle GRT) for X1 was learned at F2, GRT learned to read-out X1 at F1. [Bottom-Up, Top-Down] learning
  • image p214fig05.25 The sequence of events whereby a novel input pattern can activate a category which, in turn, reads out its learned top-down expectation to be matched against the input pattern. Error correction thus requires the use of a Match Detector that has properties of the Processing Negativity ERP.
    || How is an error corrected? During bottom-up learning, top-down learning must also occur so that the pattern that is read out top-down can be compared with the pattern that is activated by bottom-up inputs. Match detector: Processing Negativity ERP. 1. top-down, 2. conditionable, 3. specific, 4. match
  • image p214fig05.26 When a big enough mismatch occurs, the orienting system is activated and sends a burst of nonspecific arousal to the category level. This Mismatch Detector has properties of the N200 ERP.
    || Mismatch triggers nonspecific arousal. Mismatch at F1 elicits a nonspecific event at F2. Call this event nonspecific arousal. N200 ERP (Näätänen etal): 1. bottom-up, 2. unconditionable, 3. nonspecific, 4. mismatch
  • image p215fig05.27 Every event activates both the attentional system and the orienting system. This text explains why.
    || Attentional and Orienting systems. Every event has a cue (specific) and an arousal (nonspecific) function
  • image p215fig05.28 How a mismatch between bottom-up and top-down input patterns can trigger activation of the orienting system A and, with it, a burst of nonspecific arousal to the category level.
    || Mismatch -> inhibition -> arousal -> reset. BU input orienting arousal, BU+TD mismatch arousal and reset. ART Matching Rule: TD mismatch can suppress a part of F1 STM pattern, F2 is reset if degree of match < vigilance
  • image p220fig05.29 Vigilance is a gain parameter on inputs to the orienting system that regulates whether net excitation from bottom-up inputs or inhibition from activated categories will dominate the orienting system. If excitation wins, then a memory search for a better matching will occur. If inhibition wins, then the orienting system will remain quiet, thereby enabling resonance and learning to occur.
    || Vigilance control [resonate and learn, reset and search]. ρ is a sensitivity or gain parameter
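    A minimal sketch (Python/numpy) of the vigilance test; the Fuzzy-ART-style match ratio |min(I, w)|/|I| is an illustrative formalization of the analog match:
      import numpy as np
      def vigilance_test(I, w, rho):
          match = np.minimum(I, w).sum() / I.sum()   # fraction of the input matched by the prototype
          return ("resonate and learn" if match >= rho else "reset and search", round(match, 2))
      I = np.array([0.9, 0.7, 0.0, 0.6, 0.0])
      w = np.array([1.0, 1.0, 1.0, 0.0, 0.0])
      print(vigilance_test(I, w, rho=0.5))   # match 0.73 -> resonate and learn
      print(vigilance_test(I, w, rho=0.9))   # higher vigilance -> reset and search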
  • image p221fig05.30 When a predictive disconfirmation occurs, vigilance increases enough to drive a search for a more predictive category. If vigilance increases just enough to exceed the analog match between features that survive top-down matching and the entire bottom-up input pattern, then minimax learning occurs. In this case, the minimum amount of category generalization is given up to correct the predictive error.
    || Match tracking realizes minimax learning principle. Given a predictive error, vigilance increases just enough to trigger search and thus sacrifices the minimum generalization to correct the error ... and enables expert knowledge to be incrementally learned. predictive error -> vigilance increases just enough -> minimax learning
  • image p221fig05.31 A system like Fuzzy ARTMAP can learn to associate learned categories in one ART network with learned categories in a second ART network. Because both bottom-up and top-down interactions occur in both networks, a bottom-up input pattern to the first ART network can learn to generate a top-down output pattern from the second ART network.
    || Fuzzy ARTMAP. Match tracking realizes minimax learning principle: vigilance increases to just above the match ratio of prototype / exemplar, thereby triggering search
  • image p224fig05.32 Learning the alphabet with two different levels of vigilance. The vigilance in column (b) is higher than in column (a), leading to more concrete categories with less abstract prototypes. See the text for details.
    ||
  • image p225fig05.33 Some early ARTMAP benchmark studies. These successes led to the use of ARTMAP, and of many variants that we and other groups have developed, in large-scale applications in engineering and technology, a stream of work that has not abated even today.
    || see Early ARTMAP benchmark studies
  • image p225fig05.34 ARTMAP was successfully used to learn maps of natural terrains with many advantages over those of mapping projects that used AI expert systems. The advantages are so great that many mapping projects started to use this technology.
    || AI expert system - 1 year: field identification of natural regions; derivation of ad hoc rules for each region by expert geographers; correct 80,000 of 250,000 site labels; 230m (site-level) scale. ARTMAP system - 1 day: rapid, automatic, no natural regions or rules; confidence map; 30m (pixel-level) scale can see roads; equal accuracy at test sites
  • image p226fig05.35 I had shown in 1976 how a competitive learning or self-organizing map model could undergo catastrophic forgetting if the input environment was sufficiently dense and nonstationary, as illustrated by Figure 5.18. Later work with Gail Carpenter showed how, if the ART Matching Rule was shut off, repeating just four input patterns in the correct order could also cause catastrophic forgetting by causing superset recoding, as illustrated in Figure 5.36.
    || Code instability input sequences. D C A; B A; B C = ; |D|<|B|<|C|; where |E| is the number of features in the set E. Any set of input vectors that satisfy the above conditions will lead to unstable coding if they are periodically presented in the order ABCAD and the top-down ART Matching Rule is shut off.
  • image p226fig05.36 Column (a) shows catastrophic forgetting when the ART Matching Rule is not operative. It is due to superset recoding. Column (b) shows how category learning quickly stabilizes when the ART Matching Rule is restored.
    || Stable and unstable learning, superset recoding
  • image p228fig05.37 A macrocircuit of the neurotrophic Spectrally Timed ART, or nSTART, model. I developed nSTART with my PhD student Daniel Franklin. It proposes how adaptively timed learning in the hippocampus, bolstered by Brain Derived Neurotrophic Factor, or BDNF, helps to ensure normal memory consolidation.
    || habituative gates, CS, US, Thalamus (sensory cortex, category learning, conditioned reinforcer learning, adaptively timed learning and BDNF), Amygdala (incentive motivation learning), Hippocampus (BDNF), Prefrontal Cortex (attention), Pontine nuclei, Cerebellum (adaptively timed motor learning)
  • image p230fig05.38 The Synchronous Matching ART, or SMART, model includes spiking neurons in a laminar cortical hierarchy. I developed SMART with my PhD student Massimiliano Versace. By unlumping LAMINART to include spiking neurons, finer details of neurodynamics, such as the existence of faster gamma oscillations during good enough matches, and slower beta oscillations during bad enough mismatches, could be shown as emergent properties of network interactions.
    || Second order thalamus -> specific thalamic nucleus -> Thalamic reticular nucleus -> neocortical laminar circuit [6II, 6I, 5, 2/3, 1] -> Higher order cortex. Similar for First order thalamus -> First order cortex, with interconnection to Second order, nonspecific thalamic nucleus
  • image p231fig05.39 The SMART hypothesis testing and learning cycle predicts that vigilance increases when a mismatch in subcortical regions like the nonspecific thalamus activates the nucleus basalis of Meynert which, in turn, broadcasts a burst of the neurotransmitter acetylcholine, or ACh, to deeper cortical layers. Due to the way in which LAMINART proposes that cortical matching and mismatching occurs, this ACh burst can increase vigilance and thereby trigger a memory search. See the text for details.
    || [BU input, [, non]specific thalamic nucleus, thalamic reticular nucleus, neocortical laminar circuit] × [Arousal, Reset, Search, Vigilance]
  • image p232fig05.40 Computer simulation of how the SMART model generates (a) gamma oscillations if a good enough match occurs, or (c) beta oscillations if a bad enough match occurs. See the text for details.
    || Brain oscillations during match/mismatch, data, simulation. (a) TD corticothalamic feedback increases synchrony (Sillito etal 1994) (b) Match increases γ oscillations (c) Mismatch increases θ,β oscillations
  • image p232fig05.41 (a)-(c). The sequence of interlaminar events that SMART predicts during a mismatch reset. (d) Some of the compatible neurophysiological data.
    || Mismatch causes layer 5 dendritic spikes that trigger reset. (a) Arousal causes increase in nonspecific thalamic nuclei firing rate and layer 5 dendritic and later somatic spikes (Larkum and Zhu 2002, Williams and Stuart 1999) (b) Layer 5 spikes reach layer 4 via layer 6i and inhibitory neurons (Lund and Boothe 1975, Gilbert and Wiesel 1979) (c) habituative neurotransmitters in layer 6i shift the balance of active cells in layer 4 (Grossberg 1972, 1976) (d) Dendritic stimulation fires layer 5 (Larkum and Zhu 2002) stimulation apical dendrites of nonspecific thalamus
  • image p233fig05.42 Mismatch-induced beta oscillations have been reported in at least three parts of the brain: V1, V4, and hippocampus. Although there may be other reasons for beta oscillations in the brain, those that are caused by a mismatch should be studied in concert with the gamma oscillations that occur during a good enough match. See the text for details.
    || Is there evidence for the [gamma, beta] prediction? Yes, in at least three parts of the brain, (Buffalo EA, Fries P, Landman R, Buschman TJ, Desimone R 2011, PNAS 108, 11262-11267) Does this difference in average oscillation frequencies in the superficial and deep layers reflect layer 4 reset? Superficial recording γ (gamma), Deep recording β (beta) (Berke etal 2008, hippocampus; Buschman and Miller 2009, FEF)
  • image p236fig05.43 The activation of the nucleus basalis of Meynert, and its subsequent release of ACh into deeper layers of neocortex, notably layer 5, is assumed to increase vigilance by reducing afterhyperpolarization (AHP) currents.
    || Vigilance control: mismatch-mediated acetylcholine release (Grossberg and Versace 2008). Acetylcholine (ACh) regulation by nonspecific thalamic nuclei via nucleus basalis of Meynert reduces AHP in layer 5 and causes a mismatch/reset thereby increasing vigilance. HIGH vigilance ~ sharp code, LOW vigilance ~ coarse code
  • image p240fig05.44 When an algebraic exemplar model is realized using only local computations, it starts looking like an ART prototype model.
    || How does the model know which exemplars are in category A? BU-TD learning. How does a NOVEL test item access category A?
  • image p241fig05.45 The 5-4 category structure is one example of how an ART network learns the same kinds of categories as human learners. See the text for details.
    || 5-4 Category structure. A1-A5: closer to the (1 1 1 1) prototype; B1-B4: closer to the (0 0 0 0) prototype
  • image p242fig05.46 Computer simulations of how two variants of Distributed ARTMAP incrementally learn the 5-4 category structure. See the text for details.
    || Distributed ARTMAP with [self-supervised learning, post-training LTM noise]
  • image p245fig05.47 How long-range excitatory connections and short-range disynaptic inhibitory connections realize the bipole grouping law.
    || stimulus -> boundary representation -> layer 2/3
  • image p246fig05.48 Microcircuits of the LAMINART model that I developed with Rajeev Raizada. See the text for details of how they integrate bottom-up adaptive filtering, horizontal bipole grouping, and top-down attentional matching that satisfies the ART Matching Rule.
    ||
  • image p248fig05.49 This circuit of the LAMINART model helps to explain properties of Up and Down states during slow wave sleep, and how disturbances in ACh dynamics can disrupt them.
    ||
  • image p252fig06.01 A surface-shroud resonance begins to form when the surface representations of objects bid for spatial attention. In addition to these topographic excitatory inputs, there is long-range inhibition of the spatial attention cells that determines which inputs will attract spatial attention.
    || Bottom-up spatial attention competition. [more, less] luminous perceptual surfaces -> competition -> spatial attention
  • image p253fig06.02 After bottom-up surface inputs activate spatial attentional cells, they send top-down topographic excitatory signals back to the surface representations. This recurrent shunting on-center off-surround network contrast enhances larger attentional activities while approximately normalizing the total spatial attentional activity. A surface-shroud resonance hereby forms that selects an attentional shroud, enhances the perceived contrast of the attended surface (light blue region), and maintains spatial attention on it.
    || Surface-shroud resonance. perceptual surfaces -> competition -> spatial attention. (Carrasco, Penpeci-Talgar, and Eckstein 2000, Reynolds and Desimone 2003)
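    A minimal Python sketch of the recurrent shunting on-center off-surround dynamics that the caption invokes, in the form analyzed in (Grossberg 1973); parameters, the quadratic signal function, and the inputs are illustrative assumptions:
      import numpy as np

      def shunting_step(x, I, A=0.5, B=2.0, dt=0.01):
          # dx_i/dt = -A*x_i + (B - x_i)*(I_i + f(x_i)) - x_i * sum_{k!=i} f(x_k)
          # with faster-than-linear f: contrast-enhances while keeping x_i <= B,
          # so total stored activity stays bounded (approximate normalization).
          f = x**2
          return x + dt * (-A*x + (B - x)*(I + f) - x*(f.sum() - f))

      x = np.zeros(4)
      I = np.array([0.9, 0.5, 0.4, 0.2])    # surface inputs bidding for attention
      for t in range(3000):
          x = shunting_step(x, I if t < 300 else np.zeros(4))  # inputs, then STM storage
      print(np.round(x, 3))  # largest input's activity is enhanced and stored; others quenched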
  • image p254fig06.03 These interactions of the ARTSCAN Search model enable it to learn to recognize and name invariant object categories. Interactions between spatial attention in the Where cortical stream, via surface-shroud resonances, and object attention in the What cortical stream, that obeys the ART Matching Rule, coordinate these learning, recognition, and naming processes.
    || Retinal image -> On & OFF cell contrast normalization (retina/LGN) -> polarity [, in]sensitive contrast enhancement (V1) -> object [boundary (V2), surface (V2/V4)] -> surface contour (V2); What stream categories: volition control (BG). object boundary (V2) <-> view (ITp) <-> view integrator (ITp) <-> object (ITa) <-> [object-value (ORB), value (Amyg)] <-> name (PFC)
  • image p255fig06.04 The ARTSCAN Search model can also search for a desired target object in a scene, thereby clarifying how our brains solve the Where
  • image p257fig06.05 A curve tracing task with monkeys was used by Roelfsema, Lamme, and Spekreijse in 1998 to demonstrate how spatial attention can flow along object boundaries. See the text for details.
    || Attention flows along curves: Roelfsema etal 1998: Macaque V1. fixation (300ms) -> stimulus (600ms RF - target curve, distractor) -> saccade. Crossed-curve condition: attention flows across junction between smoothly connected curve segments, Gestalt good continuation
  • image p258fig06.06 Neurophysiological data and simulation of how attention can flow along a curve. See the text for details.
    || Simulation of Roelfsema etal 1998, data & simulation. Attention directed only to far end of curve. Propagates along active layer 2/3 grouping to distal neurons.
  • image p258fig06.07 A top-down spotlight of attention can also be converted into a shroud. This process begins when the spotlight triggers surface filling-in within a region. Figure 6.8 shows how it is completed.
    || Reconciling spotlights and shrouds: top-down attentional spotlight becomes a shroud. spotlight of attention, surface filling-in
  • image p259fig06.08 The distributed ARTSCAN, or dARTSCAN, model includes spatial attention in both PPC and PFC, and both fast-acting attention, triggered by transient cells in Where cortical areas such as MT, and slower-acting surface-shroud resonances in What cortical areas such as V4 and PPC. See the text for details.
    || dARTSCAN spatial attention hierarchy, Fast (Where stream) Slow (What stream) (Foley, Grossberg, and Mingolla 2012). [transient cells (MT) ->, object surfaces (V4) <->] [object shrouds (PPC) <-> spatial shrouds (PPC/PFC)]
  • image p260fig06.09 Crowding in the periphery of the eye can be avoided by expanding the size and spacing of the letters to match the cortical magnification factor.
    || Crowding: visible objects and confused recognition. Accurate target recognition requires increased flanker spacing at higher eccentricity
  • image p260fig06.10 The cortical magnification factor transforms (A) Cartesian coordinates in the retina into (B) log polar coordinates in visual cortical area V1.
    ||
  • image p261fig06.11 If the sizes and distances between the letters stay the same as they are received by more peripheral parts of the retina, then all three letters may be covered by a single shroud, thereby preventing their individual perception and recognition.
    || Crowding: visible objects and confused recognition. log compression and center-surround processing cause... input same eccentricity, surface, object shroud, crowding threshold. object shrouds merge!
  • image p261fig06.12 Pop-out of the L among Ts.
  • image p265fig06.13 The basal ganglia gate perceptual, cognitive, emotional, and more processes through parallel loops.
    || [motor, oculomotor, dorsolateral, ventral-orbital, anterior cingulate] vs. [Thalamus, pallidum-substantia nigra, Striatum, Cortex]
  • image p267fig06.14 Feedback from object surfaces to object boundaries uses surface contours. This feedback assures complementary consistency and enables figure-ground separation. A corollary discharge of the surface contours can be used to compute salient object feature positions.
    || Perceptual consistency and figure-ground separation.
  • image p268fig06.15 The largest salient feature signal is chosen to determine the next target position of a saccadic eye movement. This target position signal self-inhibits to enable the next most salient position to be foveated. In this way, multiple feature combinations of the object can be foveated and categorized. This process clarifies how the eyes can explore even novel objects before moving to other objects. These eye movements enable invariant categories to be learned. Each newly chosen target position is, moreover, an "attention pointer" whereby attention shifts to the newly foveated object position.
    || How are saccades within an object determined? Figure-ground outputs control eye movements via V3A! Support for prediction (Theeuwes, Mathot, and Kingstone 2010), More support: "attention pointers" (Cavanaugh etal 2010), Even more support (Backus etal 2001, Caplovitz and Tse 2006, Galletti and Battaglini 1989, Nakamura and Colby 2000)
  • image p270fig06.16 The same target position signal that can command the next saccade also updates a gain field that predictively maintains the attentional shroud in head-centered coordinates, even before the eye movement is complete. This process keeps the shroud invariant under eye movements, so that it can continue to inhibit reset of an emerging invariant category as it is associated with multiple object views, even while the conscious surface representation shifts with each eye movement in retinotopic coordinates. This updating process is often called predictive remapping.
    || Predictive remapping of eye movements! From V3A to LIP. [spatial attention, object attention, figure-ground separation, eye movement remapping, visual search]. (Beauvillain etal 2005, Carlson-Radvansky 1999, Cavanaugh etal 2001, Fecteau & Munoz 2003, Henderson & Hollingworth 2003, Irwin 1991)
  • image p271fig06.17 Persistent activity in IT cells is just what is needed to enable view-invariant object category learning by ARTSCAN to be generalized to [view, position, size]-invariant category learning by positional ARTSCAN, or pARTSCAN. See the text for details.
    || Persistent activity in IT. Physiological data show that persistent activity exists in IT (Fuster and Jervey 1981, Miyashita and Chang 1988, Tomita etal 1999). Adapted from (Tomita etal 1999 Nature)
  • image p272fig06.18 The pARTSCAN model can learn [view, position, size]-invariant categories by adding view category integrator cells that have the properties of persistent neurons in IT. These integrator cells get reset with the invariant object category, not the view category.
    || pARTSCAN: positionally-invariant object learning. (Cao, Grossberg, Markowitz 2011). IT cells with persistent activities are modeled by view category integrators in ITp. View-specific category cells are RESET as the eyes move within the object. View category integrator cells are NOT RESET when the view-specific category is reset. They are RESET along with invariant object category cells when a spatial attention shift occurs.
  • image p272fig06.19 The various parts of this figure explain why persistent activity is needed in order to learn positionally-invariant object categories, and how this fails when persistent activity is not available. See the text for details.
    ||
  • image p273fig06.20 pARTSCAN can simulate the IT cell recoding that Li and DiCarlo reported in their swapping experiments because the swapping procedure happens without causing a parietal reset burst to occur. Thus the originally activated invariant category remains activated and can get associated with the swapped object features.
    || Simulation of Li and DiCarlo swapping data. data (Li and DiCarlo 2008), model (Cao, Grossberg, Markowitz 2011). normalized response vs. exposure (swaps and/or hours)
  • image p274fig06.21 pARTSCAN can also simulate the trade-off in IT cell responses between position invariance and selectivity that was reported by Zoccolan etal 2007. This trade-off limits the amount of position invariance that can be learned by a cortical area like V1 that is constrained by the cortical magnification factor.
    || Trade-off in IT cell response properties. Inferotemporal cortex cells with greater position invariance respond less selectively to natural objects. invariance-tolerance, selectivity-sparseness. data (Zoccolan etal 2007) model (Cao, Grossberg, Markowitz 2011). position tolerance (PT, degrees) vs sparseness (S)
  • image p274fig06.22 pARTSCAN can simulate how IT cortex processes image morphs, when it learns with high vigilance. See the text for details.
    || Akrami etal simulation: a case of high vigilance. tested on morphs between image pairs
  • image p275fig06.23 Data from (Akrami etal 2009) and our simulation of it. See the text for details.
    || IT responses to image morphs. data vs model
  • image p275fig06.24 Left and right eye stereogram inputs are constructed to generate percepts of objects in depth. These percepts include the features of the objects, not only their relative depths, a property that is not realized in some other models of stereopsis. See the text for details.
    || Stereogram surface percepts: surface lightnesses are segregated in depth (Fang, Grossberg 2009). Contrast with algorithms that just compute disparity matches and let computer code build the surface, eg (Marr, Poggio 1974)
  • image p276fig06.25 In addition to the gain field that predictively maintains a shroud in head-centered coordinates during saccades, there are gain fields that predictively maintain binocular boundaries in head-centered coordinates so that they can maintain binocular fusion during saccades and control the filling-in of surfaces in retinotopic coordinates.
    || Surface-shroud resonance.
  • image p277fig06.26 Gain fields also enable predictive remapping that maintains binocular boundary fusion as the eyes move between objects. See the text for details.
    || Predictive remapping maintains binocular boundary fusion even as eyes move between objects. retinotopic boundary -> invariant boundary (binocular)
  • image p278fig06.27 A surface-shroud resonance through the Where stream enables us to consciously see an object while a feature-category resonance into the What stream enables us to recognize it. Both kinds of resonances can synchronize via visual cortex so that we can know what an object is when we see it.
    || What kinds of resonances support knowing vs seeing? What stream [knowing, feature-prototype resonance], Where stream [seeing, surface-shroud resonance]
  • image p278fig06.28 If the feature-category resonances cannot form, say due to a lesion in IT, then a surface-shroud resonance can still support conscious seeing of an attended object, and looking at or reaching for it, even if the individual doing so knows nothing about the object, as occurs during visual agnosia. The surface-shroud resonance supports both spatial attention and releases commands that embody the intention to move towards the attended object.
    || What kinds of resonances support knowing vs seeing? visual agnosia: reaching without knowing Patient DF (Goodale etal 1991). Attention and intention both parietal cortical functions (Andersen, Essick, Siegel 1985; Gnadt, Andersen 1988; Snyder, Batista, Andersen 1997, 1998)
  • image p283fig07.01 The usual boundary processing stages of [simple, complex, hypercomplex, bipole] cells enable our brains to correct uncontrolled persistence of previously excited cells just by adding habituative transmitter gates, or MTM traces, at appropriate places in the network.
    || Boundary processing with habituative gates. spatial competition with habituative gates, orientational competition: gated dipole, bipole grouping
  • image p284fig07.02 Psychophysical data (top row) and simulation (bottom row) of how persistence decreases with flash illuminance and duration.
    || Persistence data and simulations. (Francis, Grossberg, Mingolla 1994 Vision Research, 34, 1089-1104). Persistence decreases with flash illuminance and duration (Bowen, Pola, Matin 1974; Breitmeyer 1984; Coltheart 1980). Higher luminance or longer duration habituates the gated dipole ON channel more. Causes larger and faster rebound in the OFF channel to shut persisting ON activity off.
  • image p285fig07.03 Persistence decreases with flash illuminance and duration due to the way in which habituative transmitters regulate the strength of the rebound in response to offset of a stimulating input, and how this rebound inhibits previously activated bipole cells.
    || Persistence data and simulations (Francis, Grossberg, Mingolla 1994 Vision Research, 34, 1089-1104). Persistence decreases with flash illuminance and duration. Horizontal input excites a horizontal bipole cell, which supports persistence. Offset of the horizontal input causes a rebound of activity in the vertical pathway, which inhibits the horizontal bipole cell, thereby terminating persistence.
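    A minimal Python sketch of the habituative gated dipole mechanism described in this and the previous figure; the gains, tonic arousal, and input values are illustrative assumptions:
      import numpy as np

      def gated_dipole(J=1.0, steps=4000, dt=0.01, A=0.1, B=1.0, C=1.0, tone=0.2):
          # Transmitter habituation in each channel: dz/dt = A*(B - z) - C*S*z.
          # Opponent subtraction of the gated signals S*z yields an OFF rebound
          # at input offset; stronger or longer inputs habituate z_on more, so
          # the rebound that resets persisting boundaries is larger and faster.
          z_on = z_off = B
          off_out = []
          for t in range(steps):
              S_on = tone + (J if t < steps // 2 else 0.0)  # phasic input, then offset
              S_off = tone                                   # tonic arousal only
              z_on  += dt * (A*(B - z_on)  - C*S_on*z_on)
              z_off += dt * (A*(B - z_off) - C*S_off*z_off)
              off_out.append(max(S_off*z_off - S_on*z_on, 0.0))
          return off_out

      off = gated_dipole(J=1.0)
      print(round(max(off[2000:]), 3))   # the antagonistic OFF rebound after input offset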
  • image p286fig07.04 Illusory contours persist longer than real contours because real contours have more inducers whose rebound at contour offset can cause faster boundary reset. Illusory contours also take longer to form than real contours, which explains the increasing portion of the curve.
    || Persistence data and simulations (Meyer, Ming 1988; Reynolds 1981). Increasing portion of curve is due to formation time of the illusory contour. Longer persistence is due to fewer bottom-up inducers of an illusory contour that has the same length as a real contour: only illuminance-derived edges generate reset signals. When bottom-up inducers are inhibited by OFF cell rebounds, their offset gradually propagates to the center of the illusory contour.
  • image p286fig07.05 This figure shows the propagation through time of illusory contour offset from the rebounded cells that got direct inputs to the center of the contour.
    || Persistence data and simulations. Illusory contours persist longer than real contours (Meyer, Ming 1988; Reynolds 1981). When bottom-up inducers are inhibited by OFF cell rebounds, their offset gradually propagates to the center of the illusory contour.
  • image p287fig07.06 The relative durations of persistence that occur due to an adaptation stimulus of the same or orthogonal orientation follow from the properties of the habituative gated dipoles that are embedded in the boundary completion system.
    || Persistence data and simulations. Change in persistence depends on whether adaptation stimulus has same or orthogonal orientation as test grating (Meyer, Lawson, Cohen 1975). If adaptation stimulus and test stimulus have the same orientation, they cause cumulative habituation, which causes a stronger reset signal, hence less persistence. When they are orthogonal, the competition on the ON channel is less, hence more persistence.
  • image p287fig07.07 Persistence increases with distance between a target and a masking stimulus due to weakening of the spatial competition in the first competitive stage of hypercomplex cells.
    || Persistence data and simulations. Persistence increases with distance between a target and a masking stimulus (Farrell, Pavel, Sperling 1990). There is less spatial competition from the masker to the target when they are more distant, hence the target is more persistent.
  • image p290fig08.01 Motion in a given direction pools all possible contrast-sensitive sources of information that are moving in that direction.
    ||
  • image p291fig08.02 Complex cells can respond to motion in opposite directions and from features with opposite contrast polarities.
    ||
  • image p292fig08.03 The MacKay and waterfall illusion aftereffects dramatically illustrate the different symmetries that occur in the orientational form stream and the directional motion stream.
    || Form and motion aftereffects. different inhibitory symmetries govern orientation and direction. illusions: [Form- MacKay 90°, Motion- waterfall 180°]. stimulus, aftereffect percept
  • image p293fig08.04 Most local motion signals on a moving object (red arrows) may not point in the direction of the object's motion.
  • image p295fig08.05 The perceived direction of an object is derived either from a small subset of feature tracking signals, or by voting among ambiguous signals when feature tracking signals are not available.
    || Aperture problem. Barberpole illusion (Wallach). How do sparse feature tracking signals capture so many ambiguous motion signals to determine the perceived motion direction?
  • image p296fig08.06 In the simplest example of apparent motion, two dots turning on and off out of phase in time generate a compelling percept of continuous motion between them.
    || Simplest long-range motion paradigm. ISI- interstimulus interval, SOA- stimulus onset asynchrony
  • image p296fig08.07 When two flashes turn on and off out of phase with the correct range of interstimulus intervals, and not too far from one another, then either beta motion or phi motion is perceived.
    || Beta and Phi motion percepts. Beta motion: percepts of continuous motion of a well-defined object across empty intervening space. Phi motion: sense of "pure" motion without a concurrent percept of moving object. (Exner 1875) http://www.yorku.ca/eye/balls.htm
  • image p297fig08.08 When a second flash is more intense than the first flash, then apparent motion may occur from the second to the first flash.
    || Delta motion: motions from the second to the first flash. Data: (Kolers 1972; Korte 1915). Simulation: (Grossberg, Rudd 1992). This occurs when the luminance or contrast of the second flash is large compared to that of the first flash. Sustained and transient cells obey shunting dynamics whose averaging rates speed up with output intensity. The first flash to wane is the one that will be the source of the G-wave.
  • image p297fig08.09 Simulation of motion in opposite directions that is perceived when two later flashes occur on either side of the first flash.
    || Split motion. Data: (H.R. Silva 1926), Simulation: (Grossberg, Rudd 1992)
  • image p298fig08.10 Simulation of the motion speed-up that is perceived when flash duration decreases.
    || "The less you see it, the faster it moves". Data: (Giaschi, Anstis 1989), Simulation: (Grossberg, Rudd 1992). ISI = 0, flash duration decreases; SOA = constant, flash duration decreases
  • image p298fig08.11 This formotion percept is a double illusion due to boundary completion in the form stream followed by long-range apparent motion using the completed boundaries in the motion stream.
    || Form-motion interactions. Apparent motion of illusory contours (Ramachandran 1985). Double illusion! Illusory contour is created in form stream V1-V2. Apparent motion of illusory contours occurs in motion stream due to a V2-MT interaction.
  • image p300fig08.12 A single flash activates a Gaussian receptive field across space whose maximum is chosen by a winner-take-all recurrent on-center off-surround network.
    || Gaussian receptive fields are sufficient! (Grossberg, Rudd 1992). Single flash. Suppose that a single flash causes a narrow peak of activity at the position where it occurs. It generates output signals through a Gaussian filter that produces a Gaussian activity profile at the next processing stage. A recurrent on-center off-surround network chooses the maximum activity and suppresses smaller activities. Winner-take-all
  • image p300fig08.13 As a flash waxes and wanes through time, so too do the activities of the cells in its Gaussian receptive field. Because the maximum of each Gaussian occurs at the same position, nothing is perceived to move.
    || Temporal profile of a single flash. Suppose that a single flash quickly turns on to maximum activity, stays there for a short time, and then shuts off. It causes an increase in activity, followed by an exponential decay of activity. The corresponding Gaussian profile waxes and wanes through time. Since the peak position of the Gaussian does not change through time, nothing moves.
  • image p300fig08.14 Visual inertia depicts how the effects of a flash decay after the flash shuts off.
    || Inertia (%) vs ISI (msec)
  • image p301fig08.15 If two flashes occur in succession, then the cell activation that is caused by the first one can be waning while the activation due to the second one is waxing.
    || Temporal profile of two flashes. If two flashes occur in succession, the waning of the activity due to the first flash may overlap with the waxing of the activity due to the second flash.
  • image p301fig08.16 The sum of the waning Gaussian activity profile due to the first flash and the waxing Gaussian activity profile due to the second flash has a maximum that moves like a travelling wave from the first to the second flash.
    || Travelling wave (G-wave): long-range motion. If the Gaussian activity profiles of two flashes overlap sufficiently in space and time, then the sum of Gaussians produced by the waning of the first flash added to the Gaussian produced by the waxing of the second flash, can produce a single-peaked travelling wave from the position of the first flash to that of the second flash. The wave is then processed through a WTA choice network (Winner Take All). The resulting continuous motion percept is both long-range and sharp.
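    A minimal Python sketch of the G-wave: the sum of a waning Gaussian at the first flash and a waxing Gaussian at the second has a single maximum that travels continuously between them when L <= 2*K; the decay rate and geometry here are illustrative:
      import numpy as np

      A, K, L, dt = 1.0, 1.0, 1.8, 0.01          # L <= 2*K, so apparent motion occurs
      w = np.linspace(-1.0, L + 1.0, 600)
      g0 = np.exp(-w**2 / (2*K**2))               # Gaussian centered on flash 1
      gL = np.exp(-(w - L)**2 / (2*K**2))         # Gaussian centered on flash 2
      x0, xL = 1.0, 0.0                           # flash 1 just offset; flash 2 just onset
      peaks = []
      for _ in range(500):
          x0 += dt * (-A * x0)                    # flash 1 activity wanes
          xL += dt * (-A * xL + 1.0)              # flash 2 activity waxes
          peaks.append(w[np.argmax(x0*g0 + xL*gL)])
      print(round(peaks[0], 2), round(peaks[-1], 2))  # peak moves from ~0 toward L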
  • image p302fig08.17 An important constraint on whether long-range apparent motion occurs is whether the Gaussian kernel is broad enough to span the distance between successive flashes.
    || Motion speed-up with increasing distance: For a fixed ISI, how does perceived velocity increase with distance between the flashes? Gaussian filter : Gp = exp{ -(j-i)^2 / (2*K^2) }. The largest separation, L_crit, for which sufficient spatial overlap between two Gaussians centered at locations i and j will exist to support a travelling wave of summed peak activity is : L_crit = 2*K
  • image p302fig08.18 This theorem shows how far away (L), given a fixed Gaussian width, two flashes can be to generate a wave of apparent motion between them.
    || G-wave properties (Grossberg 1977). Let flashes occur at positions i=0 and i=L. Suppose that d[dt: x0] = -A*x0 + J0; d[dt: xL] = -A*xL + JL; Define G(w,t) ...; Theorem 1 max_w G(w,t) moves continuously through time from w=0 to w=L if and only if L <= 2*K.
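    The elided definition of G(w,t) can be reconstructed from the flash dynamics above and the Gaussian filter of Figure 8.17; a plausible rendering, following (Grossberg, Rudd 1992), is
      \frac{dx_0}{dt} = -A\,x_0 + J_0, \qquad \frac{dx_L}{dt} = -A\,x_L + J_L,
      G(w,t) = x_0(t)\,e^{-w^2/(2K^2)} + x_L(t)\,e^{-(w-L)^2/(2K^2)},
    so that Theorem 1 states that \max_w G(w,t) moves continuously through time from w = 0 to w = L if and only if L \le 2K.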
  • image p303fig08.19 The dashed red line divides combinations of flash distance L and Gaussian width K into two regions of no apparent motion (above the line) and apparent motion (below the line).
    || No motion vs motion at multiple scales.
  • image p303fig08.20 The G-wave speeds up with the distance between flashes at a fixed delay, and has a consistent motion across multiple spatial scales.
    || G-wave properties (Grossberg 1977). Theorem 2 (Equal half-time property) The time at which the motion signal reaches position w=L/2. Apparent motion speed-up with distance: this half-time is independent of the distance L between the two flashes. Consistent motion across scales: half-time is independent of the scale size K. Method of proof: elementary algebra and calculus (Grossberg, Rudd 1989 appendix)
  • image p304fig08.21 A computer simulation of the equal half-time property whereby the apparent motions within different scales that respond to the same flashes all reach the half-way point in the motion trajectory at the same time.
    || Equal half-time property: how multiple scales cooperate to generate motion percept. Travelling waves from Gaussian filters of different sizes bridge the same distance in comparable time. The time needed to bridge half the distance between flashes is the same.
  • image p304fig08.22 Data (top image) and simulation (bottom image) of Korte's laws.
  • image p305fig08.23 Despite its simplicity, the Ternus display can induce one of four possible percepts, depending on the ISI.
    || Ternus motion. ISI [small- stationary, intermediate- element, larger- group] motion http://en.wikipedia.org/wiki/Ternus_illusion
  • image p305fig08.24 When each stimulus has an opposite contrast relative to the background, element motion is eliminated and replaced by group motion at intermediate values of the ISI.
    || Reverse-contrast Ternus motion. ISI [small- stationary, intermediate- group (not element!), larger- group] motion.
  • image p306fig08.25 The Motion BCS model can explain and simulate all the long-range apparent motion percepts that this chapter describes.
    || Motion BCS model (Grossberg, Rudd 1989, 1992) Level 1: discount illuminant; Level 2: short-range filter, pool sustained simple cell inputs with like-oriented receptive fields aligned in a given direction. Sensitive to direction-of-contrast; Level 3: Transient cells with unoriented receptive field. Sensitive to direction-of-change
  • image p306fig08.26 The 3D FORMOTION model combines mechanisms for determining the relative depth of a visual form with mechanisms for both short-range and long-range motion filtering and grouping. A formotion interaction from V2 to MT is predicted to enable the motion stream to track objects moving in depth.
    || 3D Formotion model (Chey etal 1997; Grossberg etal 2001; Berzhanskaya etal 2007). Form [LGN contours -> simple cells orientation selectivity -> complex cells (contrast pooling, orientation selectivity, V1) -> hypercomplex cells (end-stopping, spatial sharpening) <-> bipole cells (grouping, cross-orientation competition) -> depth-separated boundaries (V2)], Motion: [LGN contours -> transient cells (directional stability, V1) -> short-range motion filter -> spatial competition -> long-range motion filter and boundary selection in depth (MT) <-> directional grouping, attentional priming (MST)]
  • image p307fig08.27 The distribution of transients through time at onsets and offsets of Ternus display flashes helps to determine whether element motion or group motion will be perceived.
    || Ternus motion. Element motion: zero or weak transients at positions 2 and 3; Group motion: strong transients at positions 2 and 3. Conditions that favor visual persistence and thus perceived stationarity of elements (2,3) favor element motion (Braddick, Adlard 1978; Breitmeyer, Ritter 1986; Pantle, Petersik 1980)
  • image p308fig08.28 The Gaussian distributions of activity that arise from the three simultaneous flashes in a Ternus display add to generate a maximum value at their midpoint. The motion of this group gives rise to group motion.
    || Ternus group motion simulation. If L < 2*K, Gaussian filter of three flashes forms one global maximum.
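    A quick numeric check of the caption's claim, with an illustrative spacing d < 2*K between the three flashes:
      import numpy as np

      K, d = 1.0, 1.2                            # scale width and flash spacing (assumed)
      w = np.linspace(-3.0, 2*d + 3.0, 1201)
      G = sum(np.exp(-(w - p)**2 / (2*K**2)) for p in (0.0, d, 2*d))
      print(round(w[np.argmax(G)], 2))           # single global maximum at the midpoint d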
  • image p310fig08.29 When the individual component motions in (A) and (B) combine into a plaid motion (C), both their perceived direction and speed changes.
    ||
  • image p311fig08.30 The data of (Castet etal 1993) in the left image was simulated in the right image by the 3D FORMOTION model that I developed with my PhD student Jonathan Chey. These data provide insight into how feature tracking signals propagate from the ends of a line to its interior, where they capture consistent motion directional signals and inhibit inconsistent ones.
    || Solving the aperture problem. A key design problem: How do amplified feature tracking signals propagate within depth to select the correct motion directions at ambiguous positions? This propagation from feature tracking signals to the line interior determines perceived speed in Castet etal data, which is why speed depends on line tilt and length. Data: (Castet etal 1993), Simulation: (Chey etal 1997)
  • image p311fig08.31 Processing stages of the Motion BCS convert locally ambiguous motion signals from transient cells into a globally coherent percept of object motion, thereby solving the aperture problem.
    || Why are so many motion processing stages needed? change sensitive receptors -> directional transient cells -> directional short-range filter -> spatial and directional competition -> directional long-range filter (MT) <-> Directional grouping network
  • image p312fig08.32 Schematic of motion filtering circuits.
    || Level 1: Change sensitive units -> Level 2: transient cells -> Level 3: short-range spatial filters -> Level 4: intra-scale competition -> Level 5: inter-scale competition
  • image p312fig08.33 Processing motion signals by a population of speed-tuned neurons.
    ||
  • image p314fig08.34 The VISTARS model for visually-based spatial navigation. It uses the Motion BCS as a front end and feeds its output signals into two computationally complementary cortical processing streams for computing optic flow and target tracking information.
    || VISTARS navigation model (Browning, Grossberg, Mingolla 2009). Use FORMOTION model as front end for higher level navigational circuits: input natural image sequences -> estimate heading (MT+)-MSTd -> additive processing -> estimate object position (MT-)-MSTv direction and speed subtractive processing -> Complementary Computing. [optic flow navigation, object tracking]
  • image p315fig08.35 The output signals from the directional grouping network obey the ART Matching Rule. They thereby select consistent motion directional signals while suppressing inconsistent ones, and do not distort the speed estimates that the spared cells code. The aperture problem is hereby solved by the same mechanism that dynamically stabilizes the learning of directional grouping cells.
    || How to select correct direction and preserve speed estimates? Prediction: Feedback from MSTv to MT- obeys ART Matching Rule; Top-down, modulatory on-center, off-surround network (Grossberg 1976, 1980; Carpenter, Grossberg 1987, 1991); Explains how directional grouping network can stably develop and how top-down directional attention can work. (Cavanagh 1992; Goner etal 1986; Sekuler, Ball 1977; Stelmach etal 1994). Directional grouping network (MSTv) <-> Directional long-range filter (MT). Modulatory on-center selects chosen direction and preserves speed. Off-surround inhibits incompatible directions.
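    A minimal Python sketch of the ART Matching Rule as a top-down, modulatory on-center, off-surround gate; the gain values and toy patterns are assumptions for illustration:
      import numpy as np

      def art_match(I, T, on_gain=0.5, off_gain=0.5):
          # Top-down expectation T cannot fire cells by itself (it multiplies I),
          # but it amplifies matched features (modulatory on-center) while a
          # nonspecific off-surround suppresses unmatched ones.
          center = I * (1.0 + on_gain * T)
          surround = off_gain * T.sum() * I
          return np.maximum(center - surround, 0.0)

      I = np.array([0.8, 0.6, 0.0])        # bottom-up MT directional activities (assumed)
      T = np.array([1.0, 0.0, 1.0])        # MSTv directional grouping expectation (assumed)
      print(art_match(I, T))               # matched direction survives; unmatched is suppressed
      print(art_match(np.zeros(3), T))     # top-down alone: no suprathreshold output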
  • image p316fig08.36 How the directional grouping network, notably properties of the ART Matching Rule, enables a small set of amplified feature tracking signals at the ends of a line to select consistent directions in the line interior, while suppressing inconsistent directions.
    || Motion capture by directional grouping feedback. Directional grouping network (MSTv) <-> Directional long-range filter (MT). It takes longer to capture ambiguous motion signals in the line interior as the length of the line increases cf (Castet etal 1993)
  • image p317fig08.37 Processing stages that transform the transient cell inputs in response to a tilted moving line into a global percept of the object's direction of motion.
  • image p319fig08.38 The neurophysiological data from MT (left image) confirms the prediction embodied in the simulation of MT (right image) concerning the fact that it takes a long time for MT to compute an object's real direction of motion.
  • image p320fig08.39 Simulation of the barberpole illusion direction field at two times. Note that the initial multiple directions due to the feature tracking signals at the contiguous vertical and horizontal sides of the barberpole (upper image) get supplanted by the horizontal direction of the two horizontal sides (lower image).
    || Barberpole illusion (one line) simulation
  • image p321fig08.40 Visible occluders capture the boundaries that they share with moving edges. Invisible occluders do not. Consequently, the two types of motions are influenced by different combinations of feature tracking signals.
    || Motion grouping across occluders (J. Lorenceau, D. Alais 2001). Rotating contours observed through apertures. Determine direction of a circular motion. [, in]visible occluders http://persci.mit.edu/demos/square/square.html
  • image p322fig08.41 A percept of motion transparency can be achieved by using motion grouping feedback that embodies the "asymmetry between near and far" along with the usual opponent competition between opposite motion directions.
    || Motion transparency. near: big scale; far: small scale MSTv, "Asymmetry between near and far" Inhibition from near (large scales) to far (small scales) at each position
  • image p323fig08.42 The chopsticks illusion not only depends upon how feature tracking signals are altered by visible and invisible occluders, but also upon how the form system disambiguates the ambiguous region where the two chopsticks intersect and uses figure-ground mechanisms to separate them in depth.
    || Chopsticks: motion separation in depth (Anstis 1990). [, in]visible occluders [display, percept]
  • image p324fig08.43 Attention can flow along the boundaries of one chopstick and enable it to win the orientation competition where the two chopsticks cross, thereby enabling bipole grouping and figure-ground mechanisms to separate them in depth within the form cortical stream.
    || The ambiguous X-junction. motion system. Attention propagates along chopstick and enhances cell activations in one branch of a chopstick. MT-MST directional motion grouping helps to bridge the ambiguous position.
  • image p325fig08.44 Attentional feedback from MST-to-MT-to-V2 can strengthen one branch of a chopstick (left image). Then bipole cell activations that are strengthened by this feedback can complete that chopstick's boundary (right image).
  • image p325fig08.45 The feedback loop between MT/MST-to-V1-to-V2-to-MT/MST enables a percept of two chopsticks sliding one in front of the other while moving in opposite directions.
    || Closing formotion feedback loop. [formotion interaction, motion grouping] V1 -> V2 -> (MT <-> MST) -> V1
  • image p326fig08.46 How do we determine the relative motion direction of a part of a scene when it moves with a larger part that determines an object reference frame?
    || How do we perceive relative motion of object parts?
  • image p327fig08.47 Two classical examples of part motion in a moving reference frame illustrate the general situation where complex objects move while their multiple parts may move in different directions relative to the direction of the reference frame.
    || Two kinds of percepts and variations (Johansson 1950). Symmetrically moving inducers: each dot moves along a straight path, each part contributes equally to common motion; Duncker wheel (Duncker 1929): one dot moves on a cycloid, the other dot (the "center") moves straight, unequal contribution from parts; If the dot is presented alone: seen as cycloid; if with center: seen as if it were on the rim of a wheel.
  • image p328fig08.48 How vector subtraction from the reference frame motion direction computes the part directions.
    || How vector decomposition can explain them. Common motion subtracted from retinal motion gives part motion: [retinal, common, part] motion
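    A minimal Python sketch of this vector decomposition for the Duncker wheel: subtracting the hub's common (straight) motion from the rim dot's retinal (cycloid) motion leaves pure rotation about the hub:
      import numpy as np

      t = np.linspace(0, 2*np.pi, 8, endpoint=False)
      rim = np.stack([t - np.sin(t), 1 - np.cos(t)], axis=1)   # cycloid path (retinal motion)
      hub = np.stack([t, np.ones_like(t)], axis=1)             # straight path (common motion)
      part = rim - hub                     # = (-sin t, -cos t): a point circling the hub
      print(np.round(np.hypot(part[:, 0], part[:, 1]), 6))     # constant radius 1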
  • image p328fig08.49 A directional peak shift in a directional hypercolumn determines the part directions relative to a moving reference frame.
    || What is the mechanism of vector decomposition? (Grossberg, Leveille, Versace 2011). Prediction: directional peak shift! ...specifically, a peak shift due to Gaussian lateral inhibition. [retinal, part, common, relative] motion. shunting dynamics, self-normalization, contrast gain control
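    A minimal Python sketch of the predicted peak shift: Gaussian excitation centered on the retinal motion direction, minus Gaussian lateral inhibition centered on the frame's common direction, shifts the directional peak toward the part direction; widths, gains, and angles are illustrative assumptions:
      import numpy as np

      dirs = np.arange(0.0, 360.0, 1.0)
      def bump(center, sigma):
          # Circular Gaussian over direction hypercolumn (degrees).
          d = np.minimum(np.abs(dirs - center), 360.0 - np.abs(dirs - center))
          return np.exp(-d**2 / (2*sigma**2))

      retinal, common = 30.0, 0.0           # dot and frame directions (assumed)
      activity = bump(retinal, 40.0) - 0.5*bump(common, 60.0)
      print(dirs[np.argmax(activity)])      # peak shifts away from the common direction (past 30)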
  • image p329fig08.50 The common motion direction of the two dots builds upon illusory contours that connect the dots as they move through time. The common motion direction signal can flow along these boundaries.
    || How is common motion direction computed? retinal motion. Bipole grouping in the form stream creates illusory contours between the dots. V2-MT formotion interaction injects the completed boundaries into the motion stream where they capture consistent motion signals. Motion of illusory contours is computed in the motion stream: cf. Ramachandran
  • image p329fig08.51 Large and small scale boundaries differentially form illusory contours between the dots and boundaries that surround each of them respectively. These boundaries capture the motion signals that they will support via V2-to-MT formotion interaction. The MST-to-MT directional peak shift has not yet occurred.
    || Large scale: near. Can bridge gap between dots to form illusory contours. Spatial competition inhibits inner dot boundaries.; Small scale: far. Forms boundaries around dots.
  • image p330fig08.52 Direction fields of the object frame (left column) and of the two dot "parts" (right column) show the correct motion directions after the peak shift top-down expectation acts.
    || Simulation of motion vector decomposition. [Larger scale (nearer depth), Small scale (farther depth)] vs [Down, Up]
  • image p330fig08.53 Simulation of the various directional signals of the left dot through time. Note the amplification of the downward directional signal due to the combined action of the short-range and long-range directional signals.
    ||
  • image p331fig08.54 The simulated part directions of the rotating dot through time after the translational motion of the frame does its work via the top-down peak shift mechanism.
    || Cycloid. Motion directions of a single dot moving slowly along a cycloid curve through time.
  • image p331fig08.55 The rightward motion of the dot that determines the frame propagates along the illusory contour between the dots and thereby dominates the motion directions along the rim as well, thereby setting the stage for the peak shift mechanism.
    || Duncker Wheel: large scale. [cycloid, center] velocity -> rightward common velocity. Stable rightward motion at the center captures motion at the rim.
  • image p332fig08.56 Simulation of the Duncker Wheel motion through time. See the text for details.
    || Duncker Wheel: small scale. Temporal procession of activity in eight directions. Wheel motion as seen when directions are collapsed.
  • image p332fig08.57 The MODE model uses the Motion BCS as its front end, followed by a saccadic target selection circuit in the model LIP region that converts motion directions into movement directions. These movement choices are also under basal ganglia (BG) control. More will be explained about the BG in Chapters 13 and 15.
    || MODE (MOtion DEcision) model (Grossberg, Pilly 2008, Vision Research). Change sensitive receptors -> directional transient cells -> directional short-range filter -> spatial and directional competition -> directional long-range filter (MT) <-> directional grouping network (MSTv) -> saccadic target selection <-> gating mechanism (BG). Representation of problem that solves the aperture problem (change sensitive receptors (CSR) -> directional grouping network (DGN, MSTv)). Gated movement choice (saccadic target selection & gating mechanism)
  • image p333fig08.58 Neurophysiological data (left image) and simulation (right image) of LIP data during correct trials on the RT task. See the text for details.
    || LIP responses during RT task correct trials (Roitman, Shadlen 2002). More coherence in favored direction causes faster cell activation. More coherence in opposite direction causes faster cell inhibition. Coherence stops playing a role in the final stages of LIP firing.
  • image p334fig08.59 Neurophysiological data (left column) and simulations (right column) of LIP responses for the FD task during both [correct, error] trials. See the text for details.
    || LIP responses for the FD task during both [correct, error] trials (Shadlen, Newsome 2001). LIP encodes the perceptual decision regardless of the true direction of the dots. Predictiveness of LIP responses on error trials decreases with increasing coherence.
  • image p334fig08.60 Behavioral data (left image) and simulation (right image) about accuracy in both the RT and FD tasks. See text for details
    || Behavioral data: % correct vs % coherence (Mazurek etal 2003; Roitman, Shadlen 2002). More coherence in the motion causes more accurate decisions. RT task accuracy at weaker coherence levels is slightly better than FD task accuracy.
  • image p335fig08.61 Behavioral data (left image) and simulation (right image) about speed in correct and error trials of the RT task. See text for details.
    || Behavioral data: speed, correct and error trials (RT task) (Roitman, Shadlen 2002). More coherence in the motion causes faster reaction time.
  • image p335fig08.62 More remarkable simulation fits (right column) to LIP neurophysiology data (left column) about where and when to move the eyes.
    || LIP encodes not only where, but also when, to move the eyes. ... No Bayes (Roitman, Shadlen 2002). Firing rate (sp/s) vs time (ms). Slope of firing rate (sp/s^2) vs % correct.
  • image p338fig09.01 The brain regions that help to use visual information for navigating in the world and tracking objects are highlighted in yellow.
    || How does a moving observer use optic flow to navigate while tracking a moving object? [What ventral, Where dorsal] retina -> many locations -> PFC
  • image p338fig09.02 Heading, or the direction of self-motion (green dot), can be derived from the optic flow (red arrows) as an object, in this case an airplane landing, moves forward.
    || Heading and optic flow (Gibson 1950). Optic flow: scene motion generates a velocity field. Heading: direction of travel- self-motion direction. Heading from optic flow, focus of expansion (Gibson 1950). Humans determine heading accurately to within 1-2 degrees.
  • image p339fig09.03 When an observer moves forward, an expanding optic flow is caused. Eye rotations cause a translating flow. When these flows are combined, a spiral flow is caused. How do our brains compensate for eye rotations to compute the heading of the expanding optic flow?
    || Optic flow during navigation (adapted from Warren, Hannon 1990) [observer, retinal flow]: [linear movement, expansion], [eye rotation, translation], [combined motion, spiral]
  • image p339fig09.04 This figure emphasizes that the sum of the expansion and translation optic flows is a spiral optic flow. It thereby raises the question: How can the translation flow be subtracted from the spiral flow to recover the expansion flow?
    || Eye rotations add a uniform translation to a flow field. Resulting retinal patterns are spirals. Expansion + translation = spiral
  • image p340fig09.05 An outflow movement command, also called efference copy or corollary discharge, is the source of the signals whereby the commanded eye movement position is subtracted from spiral flow to recover expansion flow and, with it, heading.
    || Subtracting efference copy. Many experiments suggest that the brain internally subtracts the translational component due to eye movements. Efference copy subtracts the translational component using pathways that branch from outflow movement commands to the eye muscles.
  • image p340fig09.06 Corollary discharges are computed using a branch of the outflow movement commands that move their target muscles.
    ||
  • image p340fig09.07 Log polar remapping from the retina to cortical area V1 and beyond converts expansion, translation, and spiral flows on the retina into parallel flows, with different orientations, on the cortical map.
    || Log polar remapping of optic flow. retina -> cortex. Any combination of expansion and circular motion centered on the fovea maps to cortex as a single direction. Retinal Cartesian coordinates (x,y) map to cortical polar coordinates (r,theta). This makes it easy to compute directional receptive fields in the cortex!
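    A minimal Python sketch of the log polar map (x, y) -> (log r, theta): an expansion flow centered on the fovea becomes the same cortical displacement vector at every position, i.e. a single parallel flow direction; the sample points and flow gain are illustrative:
      import numpy as np

      def to_cortex(p):
          # Retinal Cartesian coordinates -> cortical (log radius, angle).
          x, y = p
          return np.array([np.log(np.hypot(x, y)), np.arctan2(y, x)])

      pts = np.array([[1.0, 0.0], [0.0, 2.0], [-1.5, -1.5]])
      flow = 0.1 * pts                      # expansion flow: velocity ~ position
      for p, v in zip(pts, flow):
          print(np.round(to_cortex(p + v) - to_cortex(p), 4))  # (log 1.1, 0) everywhere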
  • image p341fig09.08 How the various optic flows on the retina are mapped through V1, MT, and MSTd to then compute heading in parietal cortex was modeled by (Grossberg, Mingolla, Pack 1999), using the crucial transformation via V1 log polar mapping into parallel cortical flow fields.
    || MSTd model (Grossberg, Mingolla, Pack 1999). Retinal motion -> V1 log polar mapping -> Each MT Gaussian RF sums motion in preferred direction -> Each MSTd cell sums MT cell inputs with same log polar direction -> Efference copy subtracts rotational flow from MSTd cells.
  • image p341fig09.09 Responses of MSTd cells that are used to compute heading. See the text for details.
    || Cortical area MSTd (adapted from Graziano, Andersen, Snowden 1994). MSTd cells are sensitive to spiral motion as combinations of rotation and expansion.
  • image p342fig09.10 Model simulations of how the peak of MSTd cell activation varies with changes of heading.
    || Heading in log polar space: Retina -> log polar -> MSTd cell. Log polar motion direction correlates with heading eccentricity.
  • image p342fig09.11 Psychophysical data (left panel) and computer simulation (right panel) of the importance of efference copy in real movements. See the text for details.
    || Heading: move to wall and fixate stationary object (adapted from Warren, Hannon 1990). Inaccurate for simulated eye rotation, accurate for real eye rotation, need confirmation by efference copy!
  • image p343fig09.12 Transforming two retinal views of the Simpsons into log polar coordinates dramatizes the problem that our brains need to solve in order to separate, and recognize, overlapping figures.
    || View 1 cortical magnification. View 2 How do we know if we are still fixating on the same object?!
  • image p343fig09.13 When one scans the three different types of pears in the left image, as illustrated by the jagged blue curve with red movement end positions, and transforms the resulting retinal images via the cortical magnification factor, or log polar mapping, the result is the series of images in the right column. How do our brains figure out from such confusing data which views belong to which pear?
    || View-invariant object learning and recognition Three pears: Anjou, Bartlett, Comice. Which is the Bartlett pear? During unsupervised scanning and learning about the world, no one tells the brain what views belong to which objects while it learns view-invariant object categories. Cortical magnification in V1.
  • image p344fig09.14 (top row, left column) By fitting MT tuning curves with Gaussian receptive fields, a tuning width of 38° is estimated, and leads to the observed standard spiral tuning of 61° in MSTd. (bottom row, left column) The spiral tuning estimate in Figure 9.16 maximizes the position invariance of MSTd receptive fields. (top row, right column) Heading sensitivity is not impaired by these parameter choices.
    || [Spiral tuning (deg), position invariance (deg^(-1)), heading sensitivity] versus log polar direction tuning σ (deg)
  • image p345fig09.15 Double opponent directional receptive fields in MT are capable of detecting the motion of objects relative to each other and their backgrounds.
    || Motion opponency in MT (Born, Tootell 1992). Motion opponent (Grossberg etal), Differential motion (Royden etal), Subtractive motion cells (Neumann etal). ON center directionally selective: [excit, inhibit]ed by motion in [one, opponent] direction. OFF surround directionally selective: [excit, inhibit]ed by motion in [opponent, center] direction.
  • image p346fig09.16 A macrocircuit of some of the main brain regions that are used to move the eyes. Black boxes denote areas belonging to the saccadic eye movement system (SAC), white boxes the smooth pursuit eye movement system (SPEM), and gray boxes, both systems. The abbreviations for the different brain regions are: LIP - Lateral Intra-Parietal area; FPA - Frontal Pursuit Area; MST - Middle Superior Temporal area; MT - Middle Temporal area; FEF - Frontal Eye Fields; NRPT - Nucleus Reticularis Tegmenti Pontis; DLPN - Dorso-Lateral Pontine Nuclei; SC - Superior Colliculus; CBM - CereBelluM; MVN/rLVN - Medial and Rostro-Lateral Vestibular Nuclei; PPRF - Peri-Pontine Reticular Formation; TN - Tonic Neurons
    ||
  • image p347fig09.17 The leftward eye movement control channel in the model that I developed with Christopher Pack. See the text for details.
    || retinal image -> MT -> MST[v,d] -> pursuit
  • image p347fig09.18 These circuits between MSTv and MSTd enable predictive target tracking to be achieved by the pursuit system, notably when the eyes are successfully foveating a moving target. Solid arrows depict excitatory connections, dashed arrows depict inhibitory connections.
    ||
  • image p348fig09.19 How a constant pursuit speed that is commanded by MSTv cells starts by using target speed on the retina and ends by using background speed on the retina in the reverse direction during successful predictive pursuit.
    || target speed on retina, background speed on retina, pursuit speed command by MSTv cells
  • image p349fig09.20 Using virtual reality displays (left image), (Fajen, Warren 2003) collected data (right two images) about how observers avoid obstacles (open circular disks) as a function of their distance and angular position as they navigate towards a fixed goal (x). These data illustrate how goals act as attractors while obstacles act as repellers.
    || Steering from optic flow (Fajen, Warren 2003). goals are attractors, obstacles are repellers. Damped spring model explains human steering data.
  • image p349fig09.21 How attractor-repeller dynamics with Gaussians change the net steering gradient as the goal is approached.
    || Steering dynamics: goal approach. body-centered coordinates [obstacle, goal, heading] -> steering
  • image p350fig09.22 How the negative Gaussian of an obstacle causes a peak shift to avoid the obstacle without losing sight of how to reach the goal.
    || Steering dynamics: obstacle avoidance. body-centered coordinates [obstacle, goal, heading] -> steering
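    A minimal sketch of the attractor-repeller steering dynamics summarized in these two figures, loosely following the damped-spring form of (Fajen, Warren 2003); all gains and decay constants here are illustrative, not fitted values:

      import numpy as np

      def steering_step(phi, dphi, goal, obst, d_goal, d_obst, dt=0.01):
          """One Euler step of heading dynamics. The goal angle acts as an
          attractor (spring toward it, stronger when nearer); the obstacle
          angle acts as a repeller (Gaussian push away, decaying with
          angular offset and distance)."""
          b, kg, ko = 3.25, 7.5, 6.0
          attract = -kg * (phi - goal) * (np.exp(-0.4 * d_goal) + 0.4)
          repel = ko * (phi - obst) * np.exp(-(phi - obst)**2) * np.exp(-0.8 * d_obst)
          ddphi = -b * dphi + attract + repel
          return phi + dt * dphi, dphi + dt * ddphi

      phi, dphi = 0.0, 0.0
      for _ in range(500):   # goal at +0.5 rad, obstacle at +0.2 rad
          phi, dphi = steering_step(phi, dphi, goal=0.5, obst=0.2,
                                    d_goal=5.0, d_obst=2.0)
      print(f"final heading {phi:.3f} rad: drawn to the goal, peak-shifted away from the obstacle")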
  • image p350fig09.23 Unidirectional transient cells respond to changes in all image contours as an automobile navigates an urban scene while taking a video of it.
    || Unidirectional transient cells (Baloch, Grossberg 1997; Berzhanskaya, Grossberg, Mingolla 2007). Transient cells respond to leading and trailing boundaries. Transient cell responses, driving video
  • image p351fig09.24 Directional transient cells respond most to motion in their preferred directions.
    || Directional transient cells. 8 directions, 3 speeds
  • image p351fig09.25 By the time MT+ is reached, directional transient cells and directional filters have begun to extract more global directional information from the image.
    || MT+ computes global motion estimate. Estimate global motion from noisy local motion estimates.
  • image p352fig09.26 The final stage of the model computes a beautiful expansion optic flow that permits an easy estimate of the heading direction, with an accuracy that matches that of human navigators.
    || The model generates accurate heading (Warren, Hannon 1990; Royden, Crowell, Banks 1994). Maximally active MSTd cell = heading estimate. Accuracy matches human data. Random dots [mean +-1.5°, worst +-3.8°], Random dots with rotation [accurate with rotations <1°/s, rotation increases, error decreases], OpenGL & Yosemite benchmark +-1.5°, Driving video +-3°.
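    A toy version, under stated assumptions, of the heading computation this figure describes: treat each candidate heading as an "MSTd cell" whose activation is the match between its expansion-flow template and the observed flow field, and read the heading from the maximally active cell (the grid and names are illustrative):

      import numpy as np

      def expansion_template(cx, cy, xs, ys):
          """Unit flow vectors radiating from a focus of expansion (cx, cy)."""
          dx, dy = xs - cx, ys - cy
          norm = np.hypot(dx, dy) + 1e-9
          return dx / norm, dy / norm

      xs, ys = np.meshgrid(np.linspace(-1, 1, 21), np.linspace(-1, 1, 21))
      flow_x, flow_y = expansion_template(0.3, -0.2, xs, ys)  # observed flow

      candidates = [(cx, cy) for cx in np.linspace(-0.5, 0.5, 11)
                             for cy in np.linspace(-0.5, 0.5, 11)]
      acts = []
      for cx, cy in candidates:      # each candidate = one "MSTd cell"
          tx, ty = expansion_template(cx, cy, xs, ys)
          acts.append(np.sum(tx * flow_x + ty * flow_y))
      print("estimated heading:", candidates[int(np.argmax(acts))])  # ~ (0.3, -0.2)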
  • image p354fig10.01 The laminar cortical circuit that realizes how we pay attention to an object sends signals from layer 6 of a higher cortical level to layer 6 of a lower cortical level and then back up to layer 4. This "folded feedback" circuit realizes a top-down, modulatory on-center, off-surround circuit that realizes the ART Matching Rule.
    || Top-down attention and folded feedback. Attentional signals also feed back into 6-to-4 on-center off-surround. 1-to-5-to-6 feedback path: Macaque (Lund, Booth 1975) cat (Gilbert, Wiesel 1979). V2-to-V1 feedback is on-center off-surround and affects layer 6 of V1 the most (Bullier etal 1996; Sandell, Schiller 1982). Attended stimuli enhanced, ignored stimuli suppressed. This circuit supports the predicted ART Matching Rule! [LGN, V[1,2][6->1]]
  • image p355fig10.02 Distinguishing processes of seeing vs knowing has been difficult because they interact so strongly.
    || Seeing vs. Knowing. Seeing and knowing [operate at different levels of the brain, use specialized circuits], but they [interact via feedback, use similar cortical designs, feedback is needed for conscious perception]. Cerebral Cortex: Seeing [V1-V4, MT-MST], Knowing [IT, PFC].
  • image p356fig10.03 Laminar computing achieves at least three basic properties of visual processing that have analogs in all biologically intelligent behaviors. These properties may be found in all cortical circuits in specialized form.
    || What does Laminar Computing achieve? 1. Self-stabilizing development and learning; 2. Seamless fusion of a) pre-attentive automatic bottom-up processing, b) attentive task-selective top-down processing; 3. Analog coherence: Solution of Binding Problem for perceptual grouping without loss of analog sensitivity. Even the earliest visual cortical stages carry out active adaptive information processing: [learn, group, attention]ing
  • image p357fig10.04 Laminar Computing achieves its properties by computing in a new way that synthesizes the best properties of feedforward and feedback interactions, analog and digital computations, and preattentive and attentive learning. The property of analog coherence enables coherent groupings and decisions to form without losing sensitivity to the amount of evidence that supports them.
    || Laminar Computing: a new way to compute. 1. Feedforward and feedback: a) Fast feedforward processing when data are unambiguous (eg Thorpe etal), b) slower feedback chooses among ambiguous alternatives [self-normalizing property, real-time probability theory], c) A self-organizing system that trades certainty against speed: Goes beyond Bayesian models! 2. Analog and Digital: Analog Coherence combines the stability of digital with the sensitivity of analog. 3. Preattentive and Attentive Learning: Reconciles the differences of (eg) Helmholtz and Kanizsa, "A preattentive grouping is its own
  • image p359fig10.05 Activation of V1 is initiated, in part, by direct excitatory signals from the LGN to layer 4 of V1.
    || How are layer 2/3 bipole cells activated? Direct bottom-up activation of layer 4. LGN -> V1 layer 4. Strong bottom-up LGN input to layer 4 (Stratford etal 1996; Chung, Ferster 1998). Many details omitted.
  • image p359fig10.06 Another, albeit indirect, pathway from LGN exists that can also excite layer 4 of V1. Why are these two pathways not redundant? The answer, ultimately, has to do with how the cortex learns, as well as with how it pays attention. See the text for details.
    || Another bottom-up input to layer 4: Why?? Layer 6-to-4 on-center off-surround (Grieve, Sillito 1991, 1995; Ahmed etal 1994, 1997). LGN projects to layers 6 and 4. Layer 6 excites spiny stellates in the column above it. Medium range connections onto inhibitory neurons. The 6-to-4 path acts as an on-center off-surround.
  • image p359fig10.07 The two bottom-up pathways from LGN to layer 4 of V1 can together activate layer 4 and contrast-normalize layer 4 responses.
    || Bottom-up contrast normalization (Grossberg 1968, 1973; Sperling, Sondhi 1968; Heeger 1992; Douglas etal 1995; Shapley etal 2004). Together, direct LGN-to-4 path and 6-to-4 on-center off-surround provide contrast normalization if cells obey shunting or membrane equation dynamics.
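    The contrast normalization property follows directly from the membrane (shunting) equation; a minimal worked example of the equilibrium the caption refers to, with illustrative constants:

      import numpy as np

      def shunting_steady_state(I, A=1.0, B=1.0):
          """Equilibrium of a shunting on-center off-surround network,
              dx_i/dt = -A*x_i + (B - x_i)*I_i - x_i*sum(I_k for k != i),
          which settles at x_i = B*I_i / (A + sum_k I_k): relative
          contrasts are preserved while total activity stays bounded."""
          I = np.asarray(I, dtype=float)
          return B * I / (A + I.sum())

      print(shunting_steady_state([1, 2, 3]))     # pattern of relative contrasts
      print(shunting_steady_state([10, 20, 30]))  # same ratios, normalized total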
  • image p360fig10.08 The bottom-up on-center off-surround from LGN-to-6-to-4 has a modulatory on-center because of its role in realizing the ART Matching Rule and, with it, the ability of the cortex to dynamically stabilize its learned memories.
    || Modulation of priming by 6-to-4 on-center (Stratford etal 1996; Callaway 1998). On-center 6-to-4 excitation is inhibited down to being modulatory (priming, subthreshold). On-center 6-to-4 excitation cannot activate layer 4 on its own. Clarifies need for direct path. Prediction: plays key role in stable grouping, development and learning. ART Matching Rule!
  • image p360fig10.09 Perceptual grouping is carried out in layer 2/3 by long-range horizontal excitatory recurrent connections, supplemented by short-range disynaptic inhibitory connections that together realize the bipole grouping properties that are diagrammed in Figure 10.10.
    || Grouping starts in layer 2/3. LGN-> 6-> 4-> 2/3: 1. Long-range horizontal excitation links collinear, coaxial receptive fields (Gilbert, Wiesel 1989; Bosking etal 1997; Schmidt etal 1997) 2. Short-range disynaptic inhibition of target pyramidal cells via a pool of interneurons (Hirsch, Gilbert 1991) 3. Unambiguous groupings can form and generate feedforward outputs quickly (Thorpe etal 1996).
  • image p361fig10.10 Bipole grouping is achieved by long-range horizontal recurrent connections that also give rise to short-range inhibitory interneurons which inhibit nearby bipole cells as well as each other.
    || Bipole property controls perceptual grouping. Collinear input on both sides. Excitatory inputs summate. Inhibitory inputs normalize, Shunting inhibition! Two-against-one. Cell is excited.
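    A toy truth-table reading of the bipole property above (the threshold and weights are arbitrary illustrations): the cell fires when both collinear branches are active, since excitation summates and shunting inhibition normalizes so that "two-against-one" wins, or when it receives direct bottom-up input, but never from one branch alone:

      def bipole_fires(bottom_up, left_branch, right_branch, threshold=1.5):
          """Toy bipole rule: direct input alone can fire the cell;
          horizontal branch inputs must act together (two-against-one)."""
          excitation = 2.0 * bottom_up + left_branch + right_branch
          return excitation >= threshold

      print(bipole_fires(0, 1, 1))  # True:  collinear support on both sides
      print(bipole_fires(0, 1, 0))  # False: one branch cannot create an edge
      print(bipole_fires(1, 0, 0))  # True:  direct bottom-up activation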
  • image p362fig10.11 Feedback between layer 2/3 to the layer 6-to-4-to-2/3 feedback loop chooses the strongest grouping in cases where there is more than one. If only one grouping exists, then the circuit can function very quickly in a feedforward manner. When multiple groupings exist, the cortex "runs as fast as it can" to select the one with the most evidence to support it using the self-normalizing inhibition in the layer 6-to-4 off-surround.
    || How is the final grouping selected? Folded feedback LGN-> 6-> 4-> 2/3. 1. Layer 2/3 groupings feed back into 6-to-4 on-center off-surround: a) direct layer 2/3 -to-6 path; b) can also go via layer 5 (Blasdel etal 1985; Kisvarday etal 1989). 2. Strongest grouping enhanced by its on-center. 3. Inputs to weaker groupings suppressed by off-surround. 4. Interlaminar feedback creates functional columns. Activities of conflicting groupings are reduced by self-normalizing inhibition, slowing processing; intracortical feedback selects and contrast-enhances the winning grouping, speeding processing.
  • image p363fig10.12 The same laminar circuit design repeats in V1 and V2, albeit with specializations that include longer horizontal grouping axons and figure-ground separation interactions.
    || V2 repeats V1 circuitry at larger spatial scale, LGN-> V1[6,4,2/3]-> V2[6,4,2/3]. V2 layer 2/3 horizontal axons longer-range than in V1 (Amir etal 1993). Therefore, longer-range groupings can form in V2 (Von der Heydt etal 1984)
  • image p364fig10.13 The bottom-up adaptive filter, intracortical grouping circuit, and intercortical top-down attentional circuit all use the same competitive decision circuit between layers 6 and 4, called the attention-preattention interface, with which to select the featural patterns that will be processed.
    || Bottom-up filters and intracortical grouping feedback use the same 6-to-4 decision circuit, LGN-> Vx[6,4,2/3]. competitive decision circuit, modulatory on-center off-surround network. Top-down intercortical attention also uses the same 6-to-4 decision circuit!
  • image p364fig10.14 This figure emphasizes how preattentive intracortical groupings and top-down intercortical attention share the same modulatory on-center, off-surround layer 6-to-4 decision circuit.
    || Explanation: grouping and attention share the same modulatory decision circuit. Layer 6-to-6-to-4-to-2/3 pathway shown; there is also a layer 6-to-1-to-2/3 path. intercortical attention, both act via a modulatory on-center off-surround decision circuit, intracortical feedback from groupings
  • image p367fig10.15 Data (left column) and simulation (right column) of how attention prevents a masking stimulus from inhibiting the response to the on-center of the cell from which the recording was made.
    || Attention protects target from masking stimulus (Reynolds etal 1999; Grossberg, Raizada 2000).
  • image p367fig10.16 Neurophysiological data (left image) and simulation (right image) of how a low-contrast target can be facilitated if it is surrounded by a pair of collinear flankers, and suppressed by them if it has high contrast.
    || Flankers can enhance or suppress targets (Polat etal 1998; Grossberg, Raizada 2000). target alone, target + flankers, flankers alone.
  • image p368fig10.17 Neurophysiological data (left image) and simulation (right image) showing that attention has a greater effect on low contrast than high contrast targets.
    || Attention has greater effect on low contrast targets (DeWeerd etal 1999; Raizada, Grossberg 2001). Threshold increase (deg) vs Grating contrast (%), [no, with] attention
  • image p368fig10.18 Neurophysiological data (left image) and simulation (right image) of relative on-cell activities when the input to that cell may also be surrounded by iso-orientation or perpendicular textures.
    || Texture reduces response to a bar: iso-orientation suppression (Knierim, van Essen 1992), perpendicular suppression (Raizada, Grossberg 2001)
  • image p369fig10.19 Data from (Watanabe etal 2001) showing perceptual learning of the coherent motion direction, despite the lack of extra-foveal attention and awareness of the moving stimuli.
    || Unconscious perceptual learning of motion direction, % correct for two tests, compared to chance level results.
  • image p371fig11.01 FACADE theory explains how the 3D boundaries and surfaces are formed with which we see the world in depth.
    || 3D Vision and figure-ground perception (Grossberg 1987, 1994, 1997). How are 3D boundaries and 3D surfaces formed? How the world looks without assuming naive realism. Form And Color And DEpth theory (FACADE). Prediction: Visible figure-ground-separated Form-And-Color-And-DEpth are represented in cortical area V4.
  • image p372fig11.02 FACADE theory explains how multiple depth-selective boundary representations can capture the surface lightnesses and colors at the correct depths. The fact that both surface qualia and depth are determined by a single process implies that, for example, a change in brightness can cause a change in depth.
    || 3D surface filling-in. From filling-in of surface lightness and color to filling-in of surface depth. Prediction: Depth-selective boundary-gated filling-in defines the 3D surfaces that we see. Prediction: A single process fills-in lightness, color, and depth. Can a change in brightness cause a change in depth? YES! eg proximity-luminance covariance (Egusa 1983, Schwartz, Sperling 1983). Why is depth not more unstable when lighting changes? Prediction: Discounting the illuminant limits variability.
  • image p373fig11.03 Both contrast-specific binocular fusion and contrast-invariant boundary perception are needed to properly see the world in depth.
    || How to unify contrast-specific binocular fusion with contrast-invariant boundary perception? Contrast-specific binocular fusion: [Left, right] eye view [, no] binocular fusion. Contrast-invariant boundary perception: contrast polarity along the gray square edge reverses; opposite polarities are pooled to form object boundary.
  • image p374fig11.04 The three processing stages of monocular simple cells, binocular simple cells, and complex cells accomplish both contrast-specific binocular fusion and contrast-invariant boundary perception.
    || Model unifies contrast-specific binocular fusion and contrast-invariant boundary perception (Ohzawa etal 1990; Grossberg, McLoughlin 1997). [Left, right] eye V1-4 simple cells-> V1-3B simple cells-> V1-2/3A complex cells. Contrast-specific stereoscopic fusion by disparity-selective simple cells. Contrast-invariant boundaries by pooling opposite polarity binocular simple cells at complex cells in layer 2/3A.
  • image p374fig11.05 The brain uses a contrast constraint on binocular fusion to help ensure that only contrasts which are derived from the same objects in space are binocularly matched.
    || Contrast constraint on binocular fusion. Left and right input from same object has similar contrast, Percept changes when one contrast is different. Fusion only occurs between bars of similar contrast (McKee etal 1994)
  • image p375fig11.06 The contrast constraint on binocular fusion is realized by obligate cells in layer 3B of cortical area V1.
    || Model implements contrast constraint on binocular fusion (cf. "obligate" cells Poggio 1991). An ecological constraint on cortical development. [left, right] eye cart V1-[4 monocular simple, 3B binocular simple, complex2/3A] cells. Inhibitory cells (red) ensure that fusion occurs when contrasts in left and right eye are approximately equal.
  • image p375fig11.07 The 3D LAMINART model uses both monocular and binocular simple cells to binocularly fuse like image contrasts. The remainder of the model generates 3D boundary and surface representations of multiple kinds of experiments as well as of natural scenes.
    || 3D LAMINART model (Cao, Grossberg 2005). [left, right] eye cart [LGN, V1 blob [4->3B->2/3A], V2 thin stripe [4->2/3A], V4]. V1 blob [V1-4 monocular, V1 interior binocular] simple cells. [complex, simple, inhibitory] cells, on-center off-surround
  • image p376fig11.08 The contrast constraint on binocular fusion is not sufficient to prevent many of the false binocular matches that satisfy this constraint.
    || How to solve the correspondence problem? How does the brain inhibit false matches? The contrast constraint is not enough. [stimulus, multiple possible binocular matches] - Which squares in the two retinal images must be fused to form the correct percept?
  • image p376fig11.09 The disparity filter in V2 helps to solve the correspondence problem by eliminating spurious contrasts using line-of-sight inhibition.
    || Model V2 disparity filter solves the correspondence problem. An ecological constraint on cortical development. [left, right] eye view: False matches (black) suppressed by line-of-sight inhibition (green lines). "Cells that fire together wire together".
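    A toy sketch of line-of-sight inhibition in the disparity filter (the feature positions, disparity prior, and winner-take-all simplification are all illustrative assumptions): every left-right feature pair is a candidate match, and each left-eye feature suppresses all but its strongest match along its line of sight:

      import numpy as np

      left  = [0.0, 1.0, 2.0]            # 1D feature positions, left eye
      right = [0.2, 1.2, 2.2]            # 1D feature positions, right eye

      # Candidate matches, weighted by closeness to an expected disparity
      matches = [(i, j, np.exp(-abs((right[j] - left[i]) - 0.2)))
                 for i in range(len(left)) for j in range(len(right))]

      # Line-of-sight inhibition: each left-eye feature keeps only its
      # strongest match; weaker (false) matches are suppressed.
      best = {}
      for i, j, s in matches:
          if s > best.get(i, (None, -1.0))[1]:
              best[i] = (j, s)
      for i, (j, s) in sorted(best.items()):
          print(f"left {i} <-> right {j}  (strength {s:.2f})")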
  • image p376fig11.10 The 3D LAMINART model shows how the disparity filter can be integrated into the circuit that completes 3D boundary representations using bipole grouping cells. It also explains how surface contours can strengthen boundaries that succeed in generating closed filling-in domains.
    || 3D LAMINART model (Cao, Grossberg 2005). [left, right] eye cart [LGN, V1 blob [4->3B->2/3A] surface contour, V2 thin stripe (monocular surface) [4->2/3A], V2 interior [disynaptic inhibitory interneurons, bipole grouping cells, disparity filter, V4 binocular surface]. [complex, simple, inhibitory] cells, on-center off-surround
  • image p377fig11.11 DaVinci stereopsis phenomena occur when only one eye can receive visual inputs from part of a 3D scene due to occlusion by a nearer surface.
    || How does monocular information contribute to depth perception? DaVinci stereopsis (Gillam etal 1999). Only by utilizing monocular information can the visual system create the correct depth percept. [left, right] eye view
  • image p378fig11.12 How monocular and binocular information are combined in V1 and V2 in the 3D LAMINART model.
    || Model utilizes monocular information. [left, right] eye cart V1-[4 monocular simple, 3B binocular simple, complex2/3A [mo,bi]nocular] cells, V2-4 binocular complex cells. black = monocular cells, blue = binocular cells. In V2, monocular inputs add to binocular inputs along the line of sight and contribute to depth perception.
  • image p379fig11.13 How the 3D LAMINART model explains DaVinci stereopsis. All the stages of boundary and surface formation are color coded to clarify their explanation. Although each mechanism is very simple, when all of them act together, the correct depthful surface representation is generated. See the text for details.
    || DaVinci stereopsis (Nakayama, Shimojo 1990). An emergent property of the previous simple mechanisms working together. [V1 binocular boundaries -> V2 initial boundaries -> V2 final boundaries -> V4 surface] pair [Binocular match: boundaries of thick bar -> Add monocular boundaries along lines-of-sight -> Line-of-sight inhibition kills weaker vertical boundaries -> 3D surface percept not just a disparity match!] pair [Binocular match: right edge of thin and thick bars -> Strongest boundaries: binocular and monocular boundaries add -> Vertical boundaries from monocular left edge of thin bar survive -> Filling-in contained by connected boundaries]. cart [very near, near, fixation plane, far, very far]
  • image p380fig11.14 The model explanation of DaVinci stereopsis when the input stimuli have opposite contrast polarities.
    || Polarity-reversed Da Vinci stereopsis (Nakayama, Shimojo 1990). Same explanation! (... as Figure 11.13 ...) [V1 binocular boundaries -> V2 initial boundaries -> V2 final boundaries -> V4 surface] cart [very near, near, fixation plane, far, very far]
  • image p381fig11.15 The same model mechanisms explain the surface percept that is generated by the variant of DaVinci stereopsis that Gillam, Blackburn, and Nakayama studied in 1999.
    || DaVinci stereopsis (Gillam, Blackburn, Nakayama 1999). same model mechanisms. [V1 binocular boundaries -> V2 initial boundaries -> V2 final boundaries -> V4 surface] cart [very near, near, fixation plane, far, very far]
  • image p382fig11.16 The version of DaVinci stereopsis wherein three narrow rectangles are binocularly matched with one thick rectangle can also be explained in a similar way.
    || DaVinci stereopsis of [3 narrow, one thick] rectangles (Gillam, Blackburn, Nakayama 1999). [V1 binocular boundaries -> V2 initial boundaries -> V2 final boundaries -> V4 surface] cart [very near, near, fixation plane, far, very far]
  • image p383fig11.17 The bars in the left and right images that are in the same positions are marked in red to simplify tracking how they are processed at subsequent stages.
    || The Venetian blind effect (Howard, Rogers 1995). Every second bar on L in same position as every third bar on R. These bars are marked in red; see them match in Fixation Plane. [V1 binocular boundaries -> V2 initial boundaries -> V2 final boundaries -> V4 surface] cart [very near, near, fixation plane, far, very far]
  • image p384fig11.18 Surface and surface-to-boundary surface contour signals that are generated by the Venetian blind image.
    || Venetian blind effect (Howard, Rogers 1995). Every second bar on L in same position as every third bar on R. PERCEPT: 3-bar ramps sloping up from L to R with step returns. [V1 binocular boundaries -> V2 initial boundaries -> V2 final boundaries -> V4 surface] cart [very near, near, fixation plane, far, very far]
  • image p385fig11.19 Dichoptic masking occurs when the bars in the left and right images have sufficiently different contrasts.
    || Dichoptic masking (McKee, Bravo, Smallman, Legge 1994). [V1 binocular boundaries -> V2 initial boundaries -> V2 final boundaries -> V4 surface] cart [very near, near, fixation plane, far, very far]
  • image p387fig11.22 Simulation of the boundaries that are generated by the Julesz stereogram in Figure 4.59 (top row) without (second row) and with (third row) surface contour feedback.
    || Boundary cart [V2-2, V2, V1] cart [near, fixation, far]
  • image p388fig11.23 Simulation of the surface percept that is seen in response to a sparse stereogram. The challenge is to assign large regions of ambiguous white to the correct surface in depth.
    || [left, right] retinal input. Surface [near, fixation, far] V4
  • image p388fig11.24 Boundary groupings capture the ambiguous depth-ambiguous feature contour signals and lift them to the correct surface in depth.
    || [surface, boundary] cart [near, fixation, far] V2.
  • image p389fig11.25 Boundaries are not just edge detectors. If they were, a shaded ellipse would look flat, and uniformly gray.
    || 3D vision and figure-ground separation. Multiple-scale, depth-selective boundary webs. [dark-light, light-dark] boundaries -> complex cells! If boundaries were just edge detectors, there would be just a bounding edge of the ellipse. After filling-in, it would look like this:
  • image p390fig11.26 Although larger scales sometimes look closer (left image), that is not always true, as the right image of (Brown, Weisstein 1988) illustrates. The latter percept is, moreover, bistable. These images show the importance of interactions between groupings and multiple scales to determine perceived surface depths.
    || Multiple-scale depth-selective groupings determine perceived depth (Brown, Weisstein 1988). As an object approaches, it gets bigger on the retina. Does a big scale (RF) always signal NEAR? NO! The same scale can signal either near or far. Some scales fuse more than one disparity.
  • image p391fig11.27 (left image) Each scale can binocularly fuse a subset of disparities, with larger scales fusing more disparities, and closer ones, than small scales. (right image) Cortical hypercolumns enable binocular fusion to occur in a larger scale even as rivalry occurs in a smaller scale.
    || Multiple-scale grouping and size-disparity correlation. Depth-selective cooperation and competition among multiple scales determines perceived depth: a) Larger scales fuse more depths; b) Simultaneous fusion and rivalry. Boundary pruning using surface contours: Surface-to-boundary feedback from the nearest surface that is surrounded by a connected boundary eliminates redundant boundaries at the same position and further depths.
  • image p391fig11.28 (left image) Ocular dominance columns respond selectively to inputs from one eye or the other. (right image) Inputs from the two eyes are mapped into layer 4C of V1, among other layers.
    || Cortex V1[1, 2/3, 4A, 4B, 4C, 5, 6], LGN
  • image p392fig11.29 Boundary webs of the smallest scales are closer to the boundary edge of the ellipse, and progressively larger scale webs penetrate ever deeper into the ellipse image, due to the amount of evidence that they need to fire. Taken together, they generate a multiple-scale boundary web with depth-selective properties that can capture depth-selective surface filling-in.
    || 3D vision and figure-ground separation. Multiple-scale, depth-selective boundary webs. Instead, different size detectors generate dense boundary webs at different positions and depths along the shading gradient. Small-far, Larger-nearer, Largest-nearest. Each boundary web captures the gray shading in small compartments at its position and depths. A shaded percept in depth results.
  • image p392fig11.30 Multiple scales interact with bipole cells that represent multiple depths, and conversely. See the text for details.
    || How multiple scales vote for multiple depths. Scale-to-depth and depth-to-scale maps. Smallest scale projects to, and receives feedback from, boundary groupings that represent the furthest depths. Largest scale connects to boundary groupings that represent all depths. multiple-[depth, scale] dot [grouping, filter] cells. [small <-> large] vs [far <-> near]
  • image p393fig11.31 (Todd, Akerstrom 1987) created a series of 2D images from discrete black patches on a white disk and showed how the perceived depth varies with the factors summarized in the figure. The LIGHTSHAFT model quantitatively simulated their data.
    || Factors determining depth-from-texture percept. Perceived depth varies with texture element width, but only when elements are elongated and sufficiently aligned with one another to form long-range groupings. Data of (Todd, Akerstrom 1987) simulated by the LIGHTSHAFT model of (Grossberg, Kuhlmann 2007). [HP, LP, CCE, CCS, RO]
  • image p393fig11.32 Kulikowski stereograms involve binocular matching of out-of-phase (a) Gaussians or (b) rectangles. The latter can generate a percept of simultaneous fusion and rivalry. See the text for why.
    ||
  • image p394fig11.33 The Kaufman stereogram also creates a percept of simultaneous fusion and rivalry. The square in depth remains fused while the perpendicular lines in the two images are perceived as rivalrous.
    || 3D groupings determine perceived depth, stereogram (Kaufman 1974). Vertical illusory contours are at different disparities than those of bounding squares. Illusory square is seen in depth. Vertical illusory contours are binocularly fused and determine the perceived depth of the square. Thin, oblique lines, being perpendicular, are rivalrous: simultaneous fusion and rivalry.
  • image p395fig11.34 A comparison of the properties of other rivalry models with those of the 3D LAMINART model (surrounded by red border). Significantly, only 3D LAMINART explains both stable vision and rivalry (green border).
    || Comparison of rivalry models
  • image p396fig11.35 Three properties of bipole boundary grouping in V2 can explain how boundaries oscillate in response to rivalry-inducing stimuli. Because all boundaries are invisible, however, these properties are not sufficient to generate a conscious percept of rivalrous surfaces.
    || 3 V2 boundary properties cause binocular rivalry. 1. Bipole grouping, 2. Orientational competition, 3. Activity-dependent habituation
  • image p397fig11.36 Simulation of the temporal dynamics of rivalrous, but coherent, boundary switching.
    || Simulation of 2D rivalry dynamics. [Inputs, Temporal dynamics of V2 layer 2/3 boundary cells] cart [left, right]
  • image p398fig11.37 Simulation of the no swap baseline condition of (Logothetis, Leopold, Sheinberg 1996).
    || [Binocular, [left, right] eye] activity
  • image p399fig11.38 Simulation of the swap condition of (Logothetis, Leopold, Sheinberg 1996).
    || [Binocular, [left, right] eye] activity
  • image p399fig11.39 Simulation of the eye rivalry data of (Lee, Blake 1999).
    || [Binocular, [left, right] eye] activity
  • image p400fig11.40 When planar 2D parallelograms are juxtaposed, the resultant forms generate 3D percepts that are sensitive to the configuration of angles and edges in the figure. See the text for why.
    || 3D representation of 2D images, Monocular cues (eg angles) can interact together to yield 3D interpretation. Monocular cues by themselves are often ambiguous. Same angles and shapes, different surface slants. How do these ambiguous 2D shapes contextually define a 3D object form?
  • image p401fig11.41 The 3D LAMINART model proposes how angle cells and disparity-gradient cells interact through learning to generate 3D representations of slanted objects.
    || 3D LAMINART model. [LGN, V1, V2, V4] Four key additions: 1. Angle cells - tuned to various angles; 2. Disparity-gradient cells - tuned to disparity gradients in the image; 3. Weights from angle cells to disparity-gradient cells - learned while viewing 3D images; 4. Collinear grouping between disparity-gradient cells - disambiguates ambiguous groupings.
  • image p401fig11.42 A hypothetical cortical hypercolumn structure proposes how angle cells and disparity-gradient cells, including bipole cells that stay within a given depth, may self-organize during development.
    || Hypercolumn representation of angles [left, right] cart [far-to-near, zero, near-to-far]
  • image p402fig11.43 A pair of disparate images of a scene from the University of Tsukuba Multiview image database.
    || input [left, right]
  • image p402fig11.44 3D scenic reconstruction of the image in Figure 11.43 by the 3D LAMINART model.
    || Disparity [5, 6, 8, 10, 11, 14]: images of objects in common depth planes
  • image p403fig11.45 The multiple boundary and surface scales that were used to simulate a reconstruction of the SAR image in Figure 3.24.
    || SAR processing by multiple scales. [boundaries before completion, boundaries after completion, surface filling-in] versus scale [small, medium, large]. large scale bipole
  • image p405fig12.01 A What ventral cortical stream and Where/How dorsal cortical stream have been described for audition, no less than for vision.
    || Parietal lobe: where; Temporal lobe: what. V1-> [[what: IT], [where: PPC-> DLPFC]]. A1-> [[what: [ST-> VLPFC], VLPFC], [where: [PPC-> DLPFC], DLPFC]].
  • image p407fig12.03 Neurophysiological data showing how motor cortical cells code different vectors that are sensitive to both the direction of the commanded movement and its length.
    || (a) Single primary motor cortex neuron, onset of movement -> on..., radial architecture... (b) Motor cortex neuronal population, radial architecture...
  • image p409fig12.04 (top half) Neurophysiological data of vector cell responses in motor cortex. (bottom half) VITE model simulations of a simple movement in which the model
  • image p410fig12.05 VITE simulation of velocity profile invariance if the same GO signal gates shorter (a) or longer (b) movements. Note the higher velocities in (b).
    || [[short, long] cart [G, dP/dt]] vs time. G = GO signal, dP/dt = velocity profile.
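    A minimal VITE integration consistent with the GO-gated velocity profiles in these figures: the difference vector D tracks T - P, and a growing GO signal G gates D into the present position command. Constants here are illustrative, not the published parameters:

      def vite(T, alpha=30.0, dt=0.001, steps=3000, go_rate=1.0):
          """dD/dt = alpha*(-D + T - P); dP/dt = G*max(D, 0), with a
          ramping GO signal G(t) = go_rate*t. Larger GO amplitudes yield
          faster movements with higher velocity peaks over the same path."""
          D, P, v_peak = 0.0, 0.0, 0.0
          for step in range(steps):
              G = go_rate * step * dt
              D += dt * alpha * (-D + T - P)
              v = G * max(D, 0.0)        # gated velocity command
              v_peak = max(v_peak, v)
              P += dt * v
          return P, v_peak

      for go in (1.0, 2.0):              # larger GO -> faster movement
          P, v_peak = vite(T=1.0, go_rate=go)
          print(f"GO rate {go}: final position {P:.3f}, peak speed {v_peak:.3f}")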
  • image p411fig12.07 The left column simulation by VITE shows the velocity profile when the GO signal (G) starts with the movement. The right column shows that the peak velocity is much greater if a second movement begins when the GO signal is already positive.
    || Higher peak velocity due to target switching. VITE simulation of higher peak speed if second target rides on first GO signal. [[first, second] target cart [G, dP/dt]] vs time. Second target GO is much higher. G = GO signal, dP/dt = velocity profile.
  • image p411fig12.08 Agonist-antagonist opponent organization of difference vector (DV) and present position vector (PPV) processing stages and how GO signals gate them.
    ||
  • image p412fig12.09 How a Vector Associative Map, or VAM, model uses mismatch learning during its development to calibrate inputs from a target position vector (T) and a present position vector (P) via mismatch learning of adaptive weights at the difference vector (D). See the text for details.
    || Vector Associative Map model (VAM). During the critical period, the Endogenous Random Generator (ERG+) turns on, activates P, and causes random movements that sample the workspace. When ERG+ shuts off, posture occurs. ERG- then turns on (rebound) and opens the Now Print (NP) gate, which dumps P into T. Mismatch learning enables adaptive weights between T and D to change until D (the mismatch) approaches 0. Then T and P are both correctly calibrated to represent the same positions.
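    A scalar caricature of the mismatch learning described in this caption (the gain, learning rate, and trial count are arbitrary): random ERG-driven postures P are sampled, the Now Print gate copies P into T, and the adaptive weight between T and D changes until the difference vector approaches zero:

      import numpy as np

      rng = np.random.default_rng(0)
      z = 0.3                        # initially miscalibrated T-to-D weight
      for trial in range(2000):
          P = rng.uniform(0.0, 1.0)  # ERG+ drives a random posture
          T = P                      # ERG- opens the Now Print gate: T <- P
          D = z * T - P              # difference vector registers mismatch
          z += 0.1 * (-D) * T        # mismatch learning drives D toward 0
      print(f"calibrated weight z = {z:.4f}  (approaches 1.0)")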
  • image p413fig12.10 Processing stages in cortical areas 4 and 5 whereby the VITE model combines outflow VITE trajectory formation signals with inflow signals from the spinal cord and cerebellum that enable it to carry out movements with variable loads and in the presence of obstacles. See the text for details.
    || area 4 (rostral) <-> area 5 (caudal).
  • image p414fig12.11 Neurophysiological data from cortical areas 4 and 5 (every other column) and simulations thereof (other columns) during a reach.
    || activation vs time. (a) area 4 phasic RT (IFV) (b) area 4 tonic (OPV) (c) area 4 phasic-tonic (OFPV) (d) area 4 phasic MT (DVV) (e) area 5 phasic (DV) (f) area 5 tonic (PPV)
  • image p415fig12.12 The combined VITE, FLETE, cerebellar, and multi-joint opponent muscle model for trajectory formation in the presence of variable forces and obstacles.
    ||
  • image p416fig12.13 The DIRECT model learns, using a circular reaction that is energized by an Endogenous Random Generator, or ERG, to make motor-equivalent volitionally-activated reaches. This circular reaction learns a spatial representation of a target in space. It can hereby make accurate reaches with clamped joints and on its first try using a tool under visual guidance; see Figure 12.16.
    || DIRECT model (Bullock, Grossberg, Guenther 1993). learns by circular reaction. learns a spatial representation to mediate between vision and action. motor-equivalent reaching. can reach target with clamped joints. can reach target with a TOOL on the first try under visual guidance. How did tool use arise?!
  • image p416fig12.14 Computer simulations of DIRECT reaches with (b) a tool, (c) a clamped elbow, and (d) with a blindfold, among other constraints.
    || Computer simulations of DIRECT reaches [unconstrained, with TOOL, elbow clamped at 140°, blindfolded]
  • image p417fig12.15 The DIRECT and DIVA models have homologous circuits to learn and control motor-equivalent reaching and speaking, with tool use and coarticulation as resulting properties. See the text for why.
    || From Seeing and Reaching to Hearing and Speaking, Circular reactions (Piaget 1945, 1951, 1952). Homologous circuits for development and learning of motor-equivalent REACHING and SPEAKING. DIRECT TOOL use (Bullock, Grossberg, Guenther 1993), DIVA Coarticulation (Guenther 1995)
  • image p418fig12.16 Anatomical interpretations of the DIVA model processing stages.
    || [Feedforward control system (FF), Feedback control subsystem (FB)]. Speech sound map (Left Ventral Premotor Cortex (LVPC)), Cerebellum, Articulatory velocity and position maps (Motor Cortex (MC)), Somatosensory Error Map (Inferior Parietal Cortex (IPC)), Auditory Error Map (Superior Temporal Cortex (STC)), Auditory State Map (Superior Temporal Cortex), Somatosensory State Map (Inferior Parietal Cortex), articulatory musculature via subcortical nuclei, auditory feedback via subcortical nuclei
  • image p419fig12.17 The auditory continuity illusion illustrates the ART Matching Rule at the level of auditory streaming. Its "backwards in time" effect of future context on past conscious perception is a signature of resonance.
    || Auditory continuity illusion. input, percept. Backwards in time - How does a future sound let a past sound continue through noise? Resonance! - It takes a while to kick in. After it starts, a future tone can maintain it much more quickly. Why does this not happen if there is no noise? - ART Matching Rule! TD harmonic filter is modulatory without BU input. It cannot create something out of nothing.
  • image p420fig12.18 The ARTSTREAM model explains and simulates the auditory continuity illusion as an example of a spectral-pitch resonance. Interactions of ART Matching Rule and asymmetric competition mechanisms in cortical strip maps explain how the tone selects the consistent frequency from the noise in its own stream while separating the rest of the noise into another stream.
    || ARTSTREAM model (Grossberg 1999; Grossberg, Govindarajan, Wyse, Cohen 2004). SPINET. Frequency and pitch strips. Bottom Up (BU) harmonic sieve. Top Down (TD) harmonic ART matching. Exclusive allocation. Learn pitch categories based on early harmonic processing. A stream is a Spectral-Pitch Resonance!
  • image p422fig12.19 The ARTSTREAM model includes mechanisms for deriving streams both from pitch and from source direction. See the text for details.
    || [left, right] cart Peripheral processing = [input signal-> outer & middle ear preemphasis-> basilar membrane gammatone filterbank-> energy measure]. Spectral stream layer-> spectral summation layer-> delays-> [f-, tau] plane-> pitch stream layer-> pitch summation layer.
  • image p423fig12.20 The Spatial Pitch Network, or SPINET, model shows how a log polar spatial representation of the sound frequency spectrum can be derived from auditory signals occurring in time. The spatial representation allows the ARTSTREAM model to compute spatially distinct auditory streams.
    || SPINET model (Spatial Pitch Network) (Cohen, Grossberg, Wyse 1995). 1. input sound 2. Gamma-tone filter bank 3. Short-term average energy spectrum 4. MAP transfer function 5. On-center off-surround and rectification 6. Harmonic weighting 7. Harmonic summation and competition -> PITCH
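    Stage 2 above is a gammatone filterbank; the standard gammatone impulse response is easy to sketch. The sampling rate, duration, and ERB bandwidth rule below follow common auditory-modeling conventions, not necessarily the exact SPINET parameters:

      import numpy as np

      def gammatone_ir(f_hz, dur=0.05, fs=16000, order=4):
          """Gammatone impulse response t^(n-1)*exp(-2*pi*b*t)*cos(2*pi*f*t),
          with bandwidth b tied to the equivalent rectangular bandwidth."""
          t = np.arange(int(dur * fs)) / fs
          erb = 24.7 * (4.37 * f_hz / 1000.0 + 1.0)
          b = 1.019 * erb
          ir = t**(order - 1) * np.exp(-2*np.pi*b*t) * np.cos(2*np.pi*f_hz*t)
          return ir / np.max(np.abs(ir))

      # A filterbank is a set of these at different center frequencies;
      # short-term energy of each output feeds the spatial spectrum (stage 3).
      bank = {f: gammatone_ir(f) for f in (200, 500, 1000, 2000)}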
  • image p424fig12.21 One of the many types of data about pitch processing that are simulated by the SPINET model. See the text for details.
    || Pitch shifts with component shifts (Patterson, Wightman 1976; Schouten 1962). Pitch vs lowest harmonic number.
  • image p424fig12.22 Decomposition of a sound (bottom row) in terms of three of its harmonics (top three rows).
    ||
  • image p425fig12.23 ARTSTREAM simulations of the auditory continuity illusion and other streaming properties. (left column, top row) When two tones are separated by silence (Input), a percept of silence also separates them in a spectral-pitch resonance. (left column, bottom row) When two tones are separated by broadband noise, the percept of the tone continues through the noise in one stream (stream 1) while the remainder of the noise occurs in a different stream (stream 2). (right column) Some of the other streaming properties that have been simulated by the ARTSTREAM model.
    || Auditory continuity does not occur without noise. Auditory continuity in noise. Other simulated streaming data.
  • image p426fig12.24 Spectrograms of /ba/ and /pa/ show the transient and sustained parts of their spectrograms.
    ||
  • image p428fig12.25 (left architecture) Auditory-articulatory feedback loop whereby babbled sounds activate learning in an imitative map that is later used to learn to reproduce the sounds of other speakers. An articulatory-to-auditory expectation renders learning possible by making the auditory and motor data dimensionally consistent, as in the motor theory of speech. (right architecture) Parallel streams in the ARTSPEECH model for learning speaker-independent speech and language meaning, including a mechanism for speaker normalization (right cortical stream) and for learning speaker-dependent vocalic qualities (left cortical stream).
    || left: Speaker-dependent vocalic qualities; right: Speaker-independent speech and language meaning
  • image p430fig12.26 The NormNet model shows how speaker normalization can be achieved using specializations of the same mechanisms that create auditory streams. See the text for how.
    || [Anchor vs Stream] log frequency map. -> diagonals-> Speaker-independent acoustic item information-> [BU adaptive filter, TD learned expectation]-> learned item recognition categories
  • image p431fig12.27 The strip maps that occur in ARTSTREAM and NormNet are variants of a cortical design that also creates ocular dominance columns in the visual cortex.
    || Adult organization of V1 (Grinvald etal http://www.weizmann.ac.il/brain/images/cubes.html). (1) Ocular dominance columns (ODCs): Alternating strips of cortex respond preferentially to visual inputs of each eye (R/L corresponds to Right and Left eye inputs in the figure); Orientation columns: A smooth pattern of changing orientation preference within each ODC. Organized in a pinwheel like fashion.
  • image p432fig12.28 (left image) The SpaN model simulates how spatial representations of numerical quantities are generated in the parietal cortex. (right image) Behavior numerosity data and SpaN model simulations of it.
    || (Left) preprocessor-> spatial number map-> Comparison wave. (Right) data axis: number of lever presses; model axis: node position in the spatial number axis
  • image p433fig12.29 Learning of place-value number maps: language categories in the What cortical stream are mapped into numerical strip maps in the Where cortical stream. See the text for details.
    || (1) spoken word "seven"-> (2) What processing stream- learned number category <-> (3) What-Where learned associations <- (4) Where processing stream- spatial number map <- (5) visual cues of seven objects
  • image p436fig12.30 The conscious ARTWORD, or cARTWORD, laminar cortical speech model simulates how future context can disambiguate noisy past speech sounds in such a way that the completed percept is consciously heard to proceed from past to future as a feature-item-list resonant wave propagates through time.
    || cARTWORD: Laminar cortical model macrocircuit (Grossberg, Kazerounian 2011) Simulates PHONEMIC RESTORATION: Cognitive Working Memory (processed item sequences) - [Excitatory-> inhibitory-> habituative-> adaptive filter-> adaptive filter-> adaptive filter with depletable synapse-> Acoustic [item, feature]
  • image p436fig12.31 Working memories do not store longer sequences of events in the correct temporal order. Instead, items at the beginning and end of the list are often recalled first, and with the highest probability.
    || Working memory. How to design a working memory to code "Temporal Order Information" in STM before it is stored in LTM. Speech, language, sensory-motor control, cognitive planning. eg repeat a telephone number unless you are distracted first. Temporal order STM is often imperfect, eg Free Recall. [probability, order] of recall vs list position. WHY?
  • image p437fig12.32 Data from a free recall experiment illustrate the bowed serial position curve.
    || Serial position function for free recall Data: (Murdock 1962 JEP 64, 482-488). % correct vs position of word on a 40-word list. Primacy gradient can be a mixture of STM and LTM read-out.
  • image p437fig12.33 Item and Order working memory models explain free recall data, as well as many other psychological and neurobiological data, by simulating how temporal series of events are stored as evolving spatial patterns of activity at content-addressable item categories. The categories with the largest activities are rehearsed first, and self-inhibit their activity as they do so in order to prevent them from being rehearsed perseveratively. The laws whereby the items are stored in working memory obey basic design principles that ensure that list categories, or chunks, of sequences of stored items can be stably remembered.
    || Working memory models: item and order, or competitive queuing (Grossberg 1978; Houghton 1990; Page, Norris 1998). Event sequence in time stored as an evolving spatial pattern of activity. Primacy gradient of working memory activation stores correct temporal order at content-addressable cells. The maximally activated cell population is performed next when a rehearsal wave is turned on. Output signal from chosen cell population inhibits its own activity to prevent perseveration: inhibition of return. Iterate until entire sequence is performed.
  • image p438fig12.34 The LTM Invariance Principle insists that words being stored in working memory for the first time (eg MYSELF) do not cause catastrophic forgetting of the categories that have already been learned for their subwords (eg MY, SELF, and ELF) or other subset linguistic groups.
    || LTM invariance principle. unfamiliar STM -> LTM familiar. How does STM storage of SELF influence STM storage of MY? It should not recode LTM of either MY or SELF!
  • image p439fig12.35 The Normalization Rule insists that the total activity of stored items in working memory has an upper bound that is approximately independent of the number of items that are stored.
    || Normalization Rule (Grossberg 1978). Total STM activity has a finite bound independent of the number of items (limited capacity of STM). Activity vs Items for [slow, quick] asymptotic energy growth.
  • image p439fig12.36 (1) Inputs to Item and Order working memories are stored by content-addressable item categories. (2) The relative activities of the item categories code the temporal order of performance. (3) In addition to excitatory recurrent signals from each working memory cell (population) to itself, there are also inhibitory recurrent signals to other working memory cells, in order to solve the noise-saturation dilemma. (4) A nonspecific rehearsal wave allows the most active cell to be rehearsed first. (5) As an item is being rehearsed, it inhibits its own activity using a feedback inhibitory interneuron. Perseverative performance is hereby prevented.
    || Item and order working memories. (1) Content-addressable item codes (2) Temporal order stored as relative sizes of item activities (3) Competition between working memory cells: Competition balances the positive feedback that enables the cells to remain active. Without it, cell activities may all saturate at their maximal values-> Noise saturation dilemma again! (4) Read-out by nonspecific rehearsal wave- Largest activity is the first out (5) STM reset self-inhibition prevents perseveration: [input/self-excitatory, rehearsal wave]-> [output, self-inhibition]
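    The rehearsal-and-self-inhibition cycle in points (4) and (5) is the competitive queuing readout; a minimal sketch:

      import numpy as np

      def rehearse(wm):
          """Read out an Item-and-Order working memory: the most active
          item is performed first, then self-inhibits (inhibition of
          return), until the stored sequence is exhausted."""
          wm = np.array(wm, dtype=float)
          order = []
          while np.any(wm > 0):
              winner = int(np.argmax(wm))
              order.append(winner)
              wm[winner] = 0.0          # STM reset via feedback inhibition
          return order

      print(rehearse([0.9, 0.7, 0.5, 0.3]))  # primacy gradient: 0, 1, 2, 3
      print(rehearse([0.9, 0.5, 0.4, 0.8]))  # bowed gradient: ends recalled first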
  • image p440fig12.37 Simulation of a primacy gradient for a short list (left image) being transformed into a bowed gradient for a longer list (right image). Activities of cells that store the longer list are smaller due to the Normalization Rule, which follows from the shunting inhibition in the working memory network.
    || Primacy bow as more items stored. [activities, final y] (Left) Primacy gradient 6 items (Right) Bowed gradient 20 items
  • image p441fig12.38 The LTM Invariance Principle is realized if the relative sizes of the inputs to the list chunk level stay the same as more items are stored in working memory. This property, in turn, follows from shunting previously stored working memory activities when a new item occurs.
    || LTM Invariance principle. Choose STM activities so that newly stored STM activities may alter the size of old STM activities without recoding their LTM patterns. In particular: New events do not change the relative activities of past event sequences, but may reduce their absolute activities. Why? Bottom-up adaptive filtering uses dot products: T(j) = sum[i=1 to n] x(i)*z(i,j) = total input to v(j). The relative sizes of inputs to coding nodes v(j) are preserved: x(i) -> w*x(i), 0 < w <= 1, leaves all past ratios T(j)/T(k) unchanged.
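    In standard notation, the caption's argument is:

      T_j = \sum_{i=1}^{n} x_i z_{ij}, \qquad
      x_i \mapsto w\,x_i \ (0 < w \le 1)
      \;\Longrightarrow\;
      \frac{T_j}{T_k} = \frac{\sum_i x_i z_{ij}}{\sum_i x_i z_{ik}} \ \text{is unchanged},

    so uniformly shunting stored STM activities leaves every list chunk's relative input, and hence its learned LTM pattern, intact.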
  • image p442fig12.39 (left column, top row) How a shunt plus normalization can lead to a bow in the stored working memory spatial pattern. Time increases in each row as every item is stored with activity 1 before it is shunted by w due to each successive item
  • image p442fig12.40 Given the hypothesis in Figure 12.39 (right column, bottom row) and a generalized concept of steady, albeit possibly decreasing, attention to each item as it is stored in working memory, only a primacy, or bowed gradient of activity across the working memory items can be stored.
    || LTM Invariance + Normalization. (... given conditions ...) Then the x(i) can ONLY form: [primacy gradient, recency gradient, unimodal bow]
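    A toy storage rule combining the two constraints (the parameters w, c, B are illustrative, not derived): each new item shunts all stored activities by a common factor w (LTM Invariance keeps their ratios), and enters with activity proportional to the remaining capacity (Normalization). Short lists then store a primacy gradient, while longer lists develop the bow of Figure 12.37:

      import numpy as np

      def store_list(n_items, w=0.95, c=0.5, B=1.0):
          """Item-and-Order storage caricature: shunt old items by w,
          then append a new item scaled by the unused capacity."""
          acts = []
          for _ in range(n_items):
              acts = [a * w for a in acts]
              acts.append(c * (B - sum(acts)))
          return np.round(acts, 3)

      print(store_list(6))    # primacy gradient: monotone decreasing
      print(store_list(20))   # bowed gradient, smaller overall activities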
  • image p443fig12.41 Neurophysiological data from the Averbeck etal sequential copying experiments show the predicted primacy gradient in working memory and the self-inhibition of activity as an item is stored. When only the last item remains stored, it has the highest activity because it has been freed from inhibition by earlier items.
    || Neurophysiology of sequential copying
  • image p444fig12.42 The LIST PARSE laminar cortical model of working memory and list chunking that I published with Lance Pearson in 2008 simulated the Averbeck etal data in Figure 12.41, as in the left column of the figure. It also simulated cognitive data about working memory storage by human subjects. See the text for details.
    || LIST PARSE: Laminar cortical model of working memory and list chunking (Grossberg, Pearson 2008). Simulates data about: [immediate, delayed, continuous] distractor free recall; immediate serial recall; and variable-speed sequential performance of motor acts. [velocity, acceleration] vs time (ms) from recall cue.
  • image p445fig12.43 The LIST PARSE laminar cortical Cognitive Working Memory circuit, that is proposed to occur in ventrolateral prefrontal cortex, is homologous to the LAMINART circuit that models aspects of how visual cortex sees. The Motor Working Memory, VITE Trajectory Generator, and Variable-Rate Volitional Control circuits model how other brain regions, including dorsolateral prefrontal cortex, motor cortex, cerebellum, and basal ganglia, interact with the Cognitive Working Memory to control working memory storage and variable-rate performance of item sequences.
    || List parse circuit diagram. Connectivity convention. sequence chunks [<- BU filter, TD expectation ->] working memory. Working memory and sequence chunking circuit is homologous to visual LAMINART circuit!
  • image p446fig12.44 (left column, top row) LIST PARSE can model linguistic data from human subjects. In this figure, model parameters are fixed to enable a close fit to data about error-type distributions in immediate free recall experiments, notably transposition errors. (right column, top row) Simulation and data showing bowing of the serial position curve, including an extended primacy gradient. (left column, bottom row) The simulation curve overlays data about list length effects, notably the increasing recall difficulty of longer lists during immediate serial recall (ISR). (right column, bottom row) Simulation (bottom image) and data (top image) of the limited temporal extent for recall.
    || (1. TL) Error-type distributions in immediate serial recall (Hanson etal 1996). % occurrence vs serial position. Graph convention: Data- dashed lines; Simulations- solid lines. Six letter visual ISR. Order errors- transpositions of neighboring items are the most common. Model explanation: Noisy activation levels change relative order in primacy gradient. Similar activation of neighboring items most susceptible to noise. Model parameters fitted on these data. (2. TR) Bowing of serial position curve (Cowan etal 1999). % correct vs serial position. Auditory ISR with various list lengths (graphs shifted rightward): For [, sub-]span lists- extended primacy, with one (or two) item recency; Auditory presentation- enhanced performance for last items. LIST PARSE: End effects- first and last items have half as many neighbors; Echoic memory- last presented item retained in separate store. (3. BL) List length effects, circles (Crannell, Parrish 1968), squares (Baddeley, Hitch 1975), solid line- simulation. % list correct vs list length. Variable list length ISR: longer lists are more difficult to recall. LIST PARSE: More items- closer activation levels and lower absolute activity level with enough inputs; Noise is more likely to produce order errors; Activity levels more likely to drop below threshold. (4. BR) Limited temporal extent for recall (Murdock 1961). % recalled vs retention interval (s). ISR task with distractor-filled retention intervals (to prevent rehearsal): Increasing retention interval- decreases probability of recalling list correctly; Load dependence- longer lists more affected by delays; Performance plateau- subjects reach apparent asymptote. LIST PARSE: Increased convergence of activities with time; loss of order information.
  • image p447fig12.45 (left column) LIST PARSE simulations of the proportion of order errors as a function of serial position for 6 item lists with (a) an extended pause of 7 time units between the third and fourth items, and (b) pauses of 5 time units (solid curve) and 10 time units (dashed curve) between all items. (right column) Simulations (solid curves) and data (dashed curves) illustrating close model fits in various immediate free recall tasks.
    || (Left) Temporal grouping and presentation variability. Temporal grouping: Inserting an extended pause leads to inter-group bowing; Significantly different times of integration and activity levels across pause, fewer interchanges. (Right) Immediate free recall, and [delayed, continuous] distractor-free recall. Overt rehearsal IFR task with super-span (ie 20 item) lists: Extended recency- even more extended with shorter ISIs; Increased probability of recall with diminished time from last rehearsal; Early items in list rehearsed most. LIST PARSE (unique) for long lists: Incoming items form recency gradient; Rehearsal (re-presentation) based upon level of activity.
  • image p448fig12.46 A Masking Field working memory is a multiple-scale self-similar recurrent shunting on-center off-surround network. It can learn list chunks that respond selectively to lists of item chunks of variable length that are stored in an item working memory at the previous processing stage. Chunks that code for longer lists (eg MY vs MYSELF) are larger, and give rise to stronger recurrent inhibitory neurons (red arrows).
    || How to code variable length lists? MASKING FIELDS code list chunks of variable length (Cohen, Grossberg 1986, 1987; Grossberg, Kazerounian 2011, 2016; Grossberg, Myers 2000; Grossberg, Pearson 2008). Multiple-scale self-similar WM: Masking field, adaptive filter. Variable length coding- Masking fields select list chunks that are sensitive to WM sequences of variable length; Selectivity- Larger cells selectively code longer lists; Asymmetric competition- Larger cells can inhibit smaller cells more than conversely: Magic Number 7! Temporal order- different list chunks respond to the same items in different orders eg LEFT vs FELT.
  • image p449fig12.47 This figure illustrates the self-similarity in a Masking Field of both its recurrent inhibitory connections (red arrows) and its top-down excitatory priming signals (green arrows) to the item chunk working memory.
    || Both recurrent inhibition and top-down excitatory priming are self-similar in a masking field. MYSELF <-> [MY, MYSELF]
  • image p452fig12.48 (left column) In experiments of (Repp etal 1978), the silence duration between the words GRAY and SHIP was varied, as was the duration of the fricative noise in S, with surprising results. (right column) The red arrow directs our attention to surprising perceptual changes as silence and noise durations increase. See the text for details.
    || Perceptual integration of acoustic cues, data (Repp etal 1978). GRAY-> silence duration-> SHIP (noise duration from start of word). Noise duration vs silence duration: GRAY SHIP <-> [GREAT SHIP <-> GRAY CHIP] <-> GREAT CHIP.
  • image p453fig12.49 The ARTWORD model that I published in 2000 with my PhD student Christopher Myers simulates data such as the (Repp etal 1978) data in Figure 12.48. See the text for details.
    || ARTWORD model (Grossberg, Myers 2000). Input phonetic features-> Phonemic item working memory-> Masking Field unitized lists-> Automatic gain control-> Phonemic item working memory. [habituative gate, adaptive filter]s.
  • image p453fig12.50 The ARTWORD perception cycle shows how sequences of items activate possible list chunks, which compete among each other and begin to send their top-down expectations back to the item working memory. An item-list resonance develops through time as a result.
    || ARTWORD perception cycle. (a) bottom-up activation (b) list chunk competition (c) item-list resonance (d) chunk reset due to habituative collapse.
  • image p454fig12.51 (left column) Even as a resonance with the list chunk GRAY begins to develop, if the delay between "gray" and "chip" is increased, greater habituation of this resonance may allow the GREAT chunk to begin to win, thereby smoothly transferring the item-list resonance from GRAY to GREAT through time. (right column) Simulation of a resonant transfer from GRAY to GREAT, and back again as the silence interval between the words "gray" and "chip" increases. The red region between GRAY and GREAT curves calls attention to when GREAT wins. See the text for details.
    || Resonant transfer, as silence interval increases. (left) Delay GRAY resonance weakens. A delayed additional item can facilitate perception of a longer list. (right) GRAY-> GREAT-> GRAY.
  • image p455fig12.52 Simulation of cARTWORD dynamics in response to the complete list /1/-/2/-/3/. The relevant responses are surrounded by a red box.
    || Presentation of a normal sequence: input /1/-/2/-/3/. |c(i,1)-5| vs time (msec). List chunks select most predictive code. Order stored in WM layers. Resonant activity of /1/-/2/-/3/ in item and feature layers corresponds to conscious speech percept.
  • image p456fig12.53 Simulation of cARTWORD dynamics in response to the partial list /1/-silence-/3/ with /2/ replaced by silence. Only the representations of these items can be seen in the red box.
    || Presentation with silence duration: input /1/-silence-/3/. |c(i,1)-5| vs time (msec). List chunks select most predictive code. Order stored in WM layers. Gap in resonant activity of /1/-silence-/3/ in item and feature layers corresponds to perceived silence.
  • image p456fig12.54 Item /2/ is restored in the correct list position in response to the list /1/-noise-/3/.
    || Presentation with noise: input /1/-noise-/3/. |c(i,1)-5| vs time (msec). List chunks select the most predictive code. Order restored in WM layers. Resonant activity of /1/-/2/-/3/ in item and feature layers corresponds to restoration of item /2/ replaced by noise in input.
  • image p457fig12.55 Item /4/ is restored in the correct list position in response to the list /1/-noise-/5/. This and the previous figure show how future context can disambiguate past noisy sequences that are otherwise identical.
    || Presentation with noise: input /1/-noise-/5/. |c(i,1)-5| vs time (msec). List chunks select the most predictive code. Order restored in WM layers. Resonant activity of /1/-/4/-/5/ in item and feature layers corresponds to restoration of item /4/ replaced by noise in input.
  • image p459fig12.56 (Grossberg, Pearson 2008) proposed that the ability of working memories to store repeated items in a sequence represents rank information about the position of an item in a list using numerical hypercolumns in the prefrontal cortex (circles with numbered sectors: 1,2,3,4). These numerical hypercolumns are conjointly activated by inputs from item categories and from the analog spatial representation of numerosity in the parietal cortex. These parietal representations (overlapping Gaussian activity profiles that obey a Weber Law) had earlier been modeled by (Grossberg, Repin 2003). See the text for details.
    || Item-order-rank working memory, rank information from parietal numerosity circuit (Grossberg, Pearson 2008; Grossberg, Repin 2003). [Sensory working memory-> adaptive filter-> list chunk-> attentive prime-> Motor working memory]-> [large, small] numbers-> transfer functions with variable thresholds and slopes-> uniform input-> integrator amplitude-> number of transient sensory signals.
  • image p460fig12.57 The lisTELOS architecture explains and simulates how sequences of saccadic eye movement commands can be stored in a spatial working memory and recalled. Multiple brain regions are needed to coordinate these processes, notably three different basal ganglia loops to regulate saccade storage, choice, and performance, and the supplementary eye fields (SEF) to choose the next saccadic command from a stored sequence. Because all working memories use a similar network design, this model can be used as a prototype for storing and recalling many other kinds of cognitive, spatial, and motor information. See the text for details.
    || lisTELOS model- Spatial working memory (Silver, Grossberg, Bullock, Histed, Miller 2011). Simulates how [PPC, PFC, SEF, FEF, SC] interact with 3 BG loops to learn and perform sequences of saccadic eye movements.
  • image p461fig12.58 The lisTELOS model built upon key processes that were earlier modeled by the TELOS model. See the text for details.
    || TELOS model (Brown, Bullock, Grossberg 1999, 2004) shows [BG nigro-[thalamic, collicular], FEF, ITa, PFC, PNR-THAL, PPC, SEF, SC, V1, V4/ITp, Visual Cortex input] and [GABA].
  • image p462fig12.59 The TELOS model clarifies how reactive vs. planned eye movements may be properly balanced against one another, notably how a fast reactive movement is prevented from occurring in response to onset of a cue that requires a different, and more contextually appropriate, response, even if the latter response takes longer to be chosen and performed. The circuit explains how "the brain knows it before it knows" what this latter response should be by changing the balance of excitation to inhibition in the basal ganglia (BG) so that the reactive gate stays shut until the correct target position can be chosen by a frontal-parietal resonance.
    || Balancing reactive vs. planned movements (Brown, Bullock, Grossberg 2004). (a) shows [FEF, PPC]-> [BG, SC], and BG-> SC. (b) FTE vs time (msec) for [fixation, saccade, overlap, gap, delayed saccade] tasks.
  • image p463fig12.60 Rank-related activity in prefrontal cortex and supplementary eye fields from two different experiments. See the text for details.
    || Rank-related activity in PFC and SEF. Prefrontal cortex (Averbeck etal 2003) [square, inverted triangle]. Supplementary eye field (Isoda, Tanji 2002).
  • image p464fig12.61 (left column) A microstimulating electrode causes a spatial gradient of habituation. (right column) The spatial gradient of habituation that is caused by microstimulation alters the order of saccadic performance of a stored sequence, but not which saccades are performed, using interactions between the prefrontal cortex (PFC) working memory and the supplemental eye field (SEF) saccadic choice.
    || (left) Microstimulation causes habituation (Grossberg 1968). Stimulation caused habituation. Cells close to the stimulation site habituate most strongly. (right) Stimulation biases selection: PFC-> SEF-> SC. PFC: activity gradient in working memory; SEF: microstimulation causes habituation; During selection, habituated nodes are less likely to win the competition.
  • image p464fig12.62 The most habituated positions have their neuronal activities most reduced, other things being equal, as illustrated by the gradient from deep habituation (red) to less habituation (pink). The saccadic performance orders (black arrows) consequently tend to end in the most habituated positions that have been stored.
    || The most habituated position is foveated last. For each pair of cues, the cue closest to the stimulation site is most habituated -- and least likely to be selected. Because stimulation spreads in all directions, saccade trajectories tend to converge.
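    A small Python sketch of the idea in the last two figures (the gating function and distances are illustrative assumptions, not the lisTELOS equations): a microstimulation-induced habituation gradient rescales stored activities, which reorders selection but leaves the stored set of targets unchanged, and the most habituated target is foveated last.

      import math

      def perform_order(targets, electrode, spread=1.0):
          # habituation is deepest for targets closest to the stimulation site
          def gate(t):
              d = math.dist(t, electrode)
              return 1.0 - math.exp(-d * d / (2 * spread ** 2))
          acts = {t: (1.0 - 0.1 * i) * gate(t) for i, t in enumerate(targets)}
          return sorted(targets, key=lambda t: -acts[t])   # select by gated activity

      targets = [(0, 0), (2, 0), (4, 0)]
      print(perform_order(targets, electrode=(0, 0)))   # (0,0) foveated last
      print(perform_order(targets, electrode=(4, 0)))   # order converges toward (4,0) last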
  • image p465fig12.63 Neurophysiological data (left image) and lisTELOS simulation (right image) showing how microstimulation biases saccadic performance order but not the positions to which the saccades will be directed. See the text for details.
    || Saccade trajectories converge to a single location in space. Microstimulation biased selection so saccade trajectories converged toward a single location in space. [Data, model] contra <-> Ipsi (msec)
  • image p467fig12.64 Some of the auditory cortical regions that respond to sustained or transient sounds. See text for details.
    || Some auditory cortical regions. Core <-> belt <-> parabelt. [Belt, Core, ls, PAi, Parabelt, PGa, TAs, TE, TP, TPO, sts].
  • image p468fig12.65 Linguistic properties of the PHONET model and some of the data that it simulates. The upper left image summarizes the asymmetric transient-to-sustained gain control that helps to create invariant intraword ratios during variable-rate speech. The lower left image summarizes the rate-dependent gain control of the ARTPHONE model that creates rate-invariant working memory representations in response to sequences of variable-rate speech. The right image summarizes the kind of paradoxical VC-CV category boundary data of (Repp 1980) that ARTPHONE simulates. See the text for details.
    || (left upper) [transient, sustained] [working memory, filter, category]. (left lower) phone inputs-> [input rate estimate, features], Features w <- habituative transmitter gates -> categories-> rate invariant phonetic output, input rate estimate-> gain control-> [features, categories] rate-dependent integration of categories and features. (right) % 2-stop vs VC-CV silent interval (msec): [ib-ga, ib-ba, iga, iba].
  • image p469fig12.66 (left column) A schematic of how preserving relative duration, as in the first and third images, of consonant and vowel pairs can preserve a percept, in this case of /ba/, but not doing so, as in the first and second images, can cause a change in percept, as from /ba/ to /wa/, as in the data of (Miller, Liberman 1979) that PHONET simulates. (right column) Changing frequency extent can also cause a /ba/ - /wa/ transition, as shown in data of (Schwab, Sawusch, Nusbaum 1981) that PHONET also simulates.
    || (left image) Maintaining relative duration as speech speeds up preserves percept (Miller, Liberman 1979). frequency vs time- [/ba/, /wa/, /ba/] (right image) Changing frequency extent causes /ba/-/wa/ transition (Schwab, Sawusch, Nusbaum 1981). frequency vs time- [/ba/, /wa/] Dt extent.
  • image p469fig12.67 PHONET contains transient and sustained cells that respond to different kinds of sounds, notably the transients of certain consonants and the sustained sounds of certain vowels. It then uses the transient working memory to gain-control the integration rate of the sustained working memory to which these different detectors input.
    || Phonetic model summary. (left) Acoustic tokens [consonant, vowel]. (middle) Acoustic detectors [transient (sensitive to rate), Sustained (sensitive to duration)]. (right) Working memory, Spatially stored transient pattern (extent) + gain control-> spatially stored sustained pattern.
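    A minimal sketch of the gain-control idea above, under assumed numbers (the published PHONET equations differ): if the transient channel's rate estimate multiplies the sustained channel's integration rate, then the stored consonant/vowel pattern is identical at different speech rates, preserving intraword ratios.

      # rate-invariant storage: integrate each segment at a speed scaled by
      # the transient channel's rate estimate (assumed values, illustration only)
      def stored_pattern(durations_s, rate_estimate):
          return [round(d * rate_estimate, 3) for d in durations_s]

      slow = stored_pattern([0.04, 0.20], rate_estimate=1.0)   # consonant, vowel
      fast = stored_pattern([0.02, 0.10], rate_estimate=2.0)   # same word, spoken 2x faster
      print(slow, fast, slow == fast)                          # identical stored patterns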
  • image p471fig12.68 A mismatch reset of /b/ in response to the /g/ in [ib]-[ga] can rapidly shut off the [ib] percept, leading to the percept of [ga] after an interval of silence. In contrast, resonant fusion of the two occurrences of /b/ in [ib]-[ba] can cause a continuous percept of sound [iba] to occur during times at which silence is heard in response to [ib]-[ga].
    || Mismatch vs resonant fusion
  • image p473fig12.69 Error rate and mean reaction time (RT) data from the lexical decision experiments of (Schvaneveldt, McDonald 1981). ART Matching Rule properties explain these data in (Grossberg, Stone 1986).
    || (left) Error rate vs type of prime [R, N, U], [non,] word. (right) Mean RT (msec) vs type of prime [R, N, U], [non,] word.
  • image p474fig12.70 The kind of model macrocircuit that was used in (Grossberg, Stone 1986) to explain lexical decision task data.
    || inputs-> A1 <-> A2 iconic sensory features <-> A3 item and order in sensory STM <-> A4 list parsing in STM (masking field) <-> A5 semantic network (self-feedback). [A4, A5] <-> V* visual object recognition system. M1-> [outputs, A1]. M1 <-> M2 iconic motor features <-> M3 item and order in motor STM. A2-> M2. A3-> M3.
  • image p476fig12.71 Word frequency data of (Underwood, Freund 1970) that were explained in (Grossberg, Stone 1986).
    || percent errors vs frequency of old words [L-H to H-H, L-L to H-L].
  • image p481fig13.01 Macrocircuit of the functional stages and anatomical interpretations of the Cognitive-Emotional-Motor, or CogEM, model.
    || Drive-> hypothalamus value categories <-> amygdala incentive motivational learning-> Orbitofrontal cortex- object-value categories <-> sensory cortex- invariant object categories- conditioned reinforcer learning-> amygdala-> hypothalamus.
  • image p483fig13.02 The object-value categories in the orbitofrontal cortex require converging specific inputs from the sensory cortex and nonspecific incentive motivational inputs from the amygdala in order to fire. When the orbitofrontal cortex fires, it can deliver top-down ART Matching Rule priming signals to the sensory cortical area by which it was activated, thereby helping to choose the active recognition categories there that have the most emotional support, while suppressing others, leading to attentional blocking of irrelevant cues.
    || Cognitive-Emotional-Motor (CogEM) model. Drive-> amygdala incentive motivational learning-> orbitofrontal cortex- needs converging cue and incentive inputs to fire <-> sensory cortex- conditioned reinforcer learning-> amygdala. CS-> sensory cortex. Motivated attention closes the cognitive-emotional feedback loop, focuses on relevant cues, and causes blocking of irrelevant cues.
  • image p483fig13.03 The predicted processing stages of CogEM have been supported by anatomical studies of connections between sensory cortices, amygdala, and orbitofrontal cortex.
    || Adapted from (Barbas 1995). sensory cortices = [visual, somatosensory, auditory, gustatory, olfactory]. sensory cortices-> amygdala-> orbital prefrontal cortex. sensory cortices-> orbital prefrontal cortex. [visual cortex, amygdala]-> lateral prefrontal cortex.
  • image p484fig13.04 The top-down feedback from the orbitofrontal cortex closes a feedback loop that supports a cognitive-emotional resonance. If this resonance can be sustained long enough, it enables us to have feelings at the same time that we experience the categories that caused them.
    || Cognitive-Emotional resonance. Basis of "core consciousness" and "the feeling of what happens". (Damasio 1999) derives heuristic version of CogEM model from his clinical data. Drive-> amygdala-> prefrontal cortex-> sensory cortex, resonance around the latter 3. How is this resonance maintained long enough to become conscious?
  • image p484fig13.05 Classical conditioning is perhaps the simplest kind of associative learning.
    || Classical conditioning (nonstationary prediction). Bell (CS)-> (CR), Shock (US)-> Fear (UR), associative learning.
  • image p485fig13.06 (left column) An inverted-U occurs in conditioned reinforcer strength as a function of the ISI between the CS and the US. Why is learning attenuated at 0 ISI? (right column) Some classical conditioning data that illustrate the inverted-U in conditioning as a function of the ISI.
    || InterStimulus Interval (ISI) effect. Data from (Smith etal 1969; Schneiderman, Gormezano 1964).
  • image p485fig13.07 The paradigm of secondary conditioning. See the text for details.
    || Secondary conditioning (Advertising!). [CS1, CS2] become conditioned reinforcers.
  • image p486fig13.08 The blocking paradigm illustrates how cues that do not predict different consequences may fail to be attended.
    || Blocking- minimal adaptive prediction. Phase [I, II] - CS2 is irrelevant.
  • image p486fig13.09 Equally salient cues can be conditioned in parallel to an emotional consequence.
    || Parallel processing of equally salient cues vs overshadowing (Pavlov).
  • image p486fig13.10 Blocking follows if both secondary conditioning and attenuation of conditioning at a zero ISI occur.
    || Blocking = ISI + secondary conditioning.
  • image p487fig13.11 The three main properties of CogEM that help to explain how attentional blocking occurs.
    || CogEM explanation of attentional blocking. Internal drive input <-> Conditioned reinforcer learning (self-recurrent) <-> Competition for STM <- Motor learning. 1. Sensory representations compete for limited capacity STM. 2. Previously reinforced cues amplify their STM via positive feedback. 3. Other cues lose STM via competition.
  • image p488fig13.12 (left column) How incentive motivational feedback amplifies activity of a sensory cortical cell population. (right column) A sensory cortical cell population whose activity is amplified by incentive motivational feedback can suppress the activities of less activated populations via self-normalizing recurrent competitive interactions.
    || Motivational feedback and blocking. (left) sensory input CS, STM activity without motivational feedback, STM activity with motivational feedback. (right) STM suppressed by competition, STM amplified by (+) feedback.
  • image p489fig13.13 (top row) If a positive ISI separates onset of a CS and US, then the CS can sample the consequences of the US during the time interval before it is inhibited by it. (bottom row) A CogEM simulation of the inverted-U in conditioning as a function of the ISI between CS and US.
    || Positive ISI and conditioning.
  • image p490fig13.14 In order for conditioning to work properly, the sensory representation needs to have at least two successive processing stages. See the text for why.
    || Model of Cognitive-Emotional circuit. Drive-> Drive representation-> ??? <-> Sensory STM <-CS
  • image p490fig13.15 The CogEM circuit is an ancient design that is found even in mollusks like Aplysia. See the text for details.
    || Aplysia (Buonomano, Baxter, Byrne, Neural Networks 1990; Grossberg, Behavioral and Brain Sciences 1983). Facilitator neuron ~ drive representation.
  • image p492fig13.16 (left column) In order to satisfy all four postulates, there needs to be UCS-activated arousal of polyvalent CS-activated sampling neuron. (right column) The arousal needs to be nonspecific in order to activate any of the CSs that could be paired with the UCS.
    || Polyvalent CS sampling and US-activated nonspecific arousal.
  • image p493fig13.17 (top row) Overcoming the ostensible contradiction that seems to occur when attempting to simultaneously realize hypotheses (3) and (4). (bottom row) The problem is overcome by assuming the existence of US-activated drive representation to which CSs can be associated, and that activate nonspecific incentive motivational feedback to sensory representations.
    || Learning nonspecific arousal and CR read-out. (top) Learning to control nonspecific arousal, Learning to read-out the CR (bottom) Drive representation, Incentive motivation.
  • image p494fig13.18 Realizing the above constraints favors one particular circuit. Circuits (a) and (b) are impossible. Circuit (d) allows previously occurring sensory cues to be stored in STM. Circuit (e) in addition enables a CS to be stored in STM without initiating conditioning in the absence of a US.
    || Learning to control nonspecific arousal and read-out of the CR: two stages of CS. (d) & (e) polyvalent cells.
  • image p494fig13.19 (left column, top row) Secondary conditioning of both arousal and a specific response are now possible. (bottom row) The CogEM circuit may be naturally extended to include multiple drive representations and inputs. (right column, top row) The incentive motivational pathway is also conditionable in order to enable motivational sets to be learned.
    || Secondary conditioning. Homology: conditionable incentive motivation. Multiple drive representations and inputs.
  • image p496fig13.20 (top image) A single avalanche sampling cell can learn an arbitrary space-time pattern by sampling it as a temporally ordered series of spatial patterns using a series of outstars. Once an avalanche
  • image p497fig13.21 (left column) An early embodiment of nonspecific arousal was a command cell in such primitive animals as crayfish. (right column) The songbird pattern generator is also an avalanche. This kind of circuit raises the question of how the connections self-organize through developmental learning.
    || Nonspecific arousal as a command cell. Crayfish swimmerets (Stein 1971). Songbird pattern generator (Fee etal 2002)+. Motor-> RA-> HVC(RA).
  • image p498fig13.22 (left column, top row) Adaptive filtering and conditioned arousal are both needed to regulate what cues can learn to activate particular space-time patterns. These developments lead inexorably to basic cognitive abilities, as embodied in the 3D LAMINART models for 3D vision and figure-ground perception (Chapter 11) and the 3D ARTSCAN SEARCH model for invariant object learning, recognition, and 3D search (Chapter 6). (right column, top row) Conditioned arousal enables only emotionally important cues to activate a motivationally relevant space-time pattern. (bottom row) Conditioned arousal and drive representations arise naturally from the unlumping of avalanche circuits to make them selective to motivationally important cues. The MOTIVATOR model is a natural outcome of this unlumping process (this chapter).
    || (top) Adaptive filtering and Conditioned arousal. Towards Cognition: need to filter inputs to the command cell. Towards Emotion: important signals turn arousal ON and OFF. (bottom) Conditioned arousal and Drive representations. Competition between conditioned arousal sources at drive representations, eg amygdala.
  • image p499fig13.23 (left column) Self-organization in avalanches includes adaptive filtering by instars, serial learning of temporal order, and learned read-out of spatial patterns by outstars. (right column) Serial learning of temporal order occurs in recurrent associative networks.
    || (left) Self-organizing avalanches [instars, serial learning, outstars]. (right) Serial list learning.
  • image p500fig13.24 Both primary excitatory and inhibitory conditioning can occur using opponent processes and their antagonistic rebounds.
    || Opponent processing. Cognitive drive associations. Primary associations: excitatory [CS, US, Fear], inhibitory [CS, US, Fear, Relief rebound].
  • image p501fig13.25 When an unbiased transducer is embodied by a finite rate physical process, mass action by a chemical transmitter is the result.
    || Unbiased transducer (Grossberg 1968). S = input, T = output, T = S*B, where B is the gain. Suppose T is due to release of chemical transmitter y at a synapse: release rate T = S*y (mass action); Accumulation y ~= B.
  • image p501fig13.26 A simple differential equation describes the processes of transmitter accumulation and release that do their best, at a finite rate, to carry out unbiased transduction.
    || Transmitter accumulation and release. Transmitter y cannot be restored at an infinite rate: T = S*y, y ~= B. Differential equation: d[dt: y] = A*(B - y) - S*y = accumulate - release. Transmitter y tries to recover to ensure unbiased transduction. What if it falls behind? Evolution has exploited the good properties that happen then.
  • image p502fig13.27 Despite the fact that less transmitter y is available after persistent activation by a larger input signal S, the gated output signal S*y is larger due to the mass action gating of S by y.
    || Minor mathematical miracle. At equilibrium: 0 = d[dt: y] = A*(B - y) - S*y. Transmitter y decreases when input S increases: y = A*B/(A + S). However, output S*y increases with S!: S*y = S*A*B/(A + S) (gate, mass action).
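    A two-line numeric check of this "minor mathematical miracle" in Python (A and B values arbitrary): equilibrium y = A*B/(A+S) falls as S grows, yet the gated output S*y = S*A*B/(A+S) rises toward the asymptote A*B.

      A, B = 1.0, 1.0
      for S in [0.5, 1.0, 2.0, 4.0]:
          y = A * B / (A + S)                     # transmitter decreases with S
          print(S, round(y, 3), round(S * y, 3))  # gated output increases with S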
  • image p502fig13.28 Fast increments and decrements in an input S lead to slow habituation of the habituative gate, or medium-term memory, transmitter y. The output T is a product of these fast and slow variables, and consequently exhibits overshoots, habituation, and undershoots in its response.
    || Habituative transmitter gate: Input; Habituative gate d[dt: y] = A*(B - y) - S*y; Output [overshoot, habituation, undershoot]s Weber Law.
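    A short simulation sketch of these overshoot/habituation/undershoot responses (step sizes and constants are assumptions): integrate the gate equation against a fast step input and watch the slowly gated output T = S*y.

      # d/dt y = A*(B - y) - S*y, with a fast step in S and a slow gate y
      A, B, dt = 0.1, 1.0, 0.01
      S_base, S_high = 0.5, 2.0
      y = A * B / (A + S_base)                # start at baseline equilibrium
      trace = []
      for step in range(40000):
          t = step * dt
          S = S_high if 100 <= t < 250 else S_base
          trace.append((t, S * y))            # output = fast input gated by slow y
          y += dt * (A * (B - y) - S * y)
      print(max(trace, key=lambda p: p[1]))   # overshoot right after the step up
      print(min(trace, key=lambda p: p[1]))   # undershoot right after the step down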
  • image p503fig13.29 The ON response to a phasic ON input has Weber Law properties due to the divisive terms in its equilibrium response, which are due to the habituative transmitter.
    || ON-response to phasic ON-input. S1 = f(I+J): y1 = A*B/(A+S1), T1 = S1*y1 = A*B*S1/(A+S1); S2 = f(I): y2 = A*B/(A+S2), T2 = S2*y2 = A*B*S2/(A+S2);. ON = T1 - T2 = A^2*B*(f(I+J) - f(I)) / (A+f(I)) / (A+f(I+J)). Note Weber Law. When f has a threshold, small I requires larger J to fire due to numerator, but makes suprathreshold ON bigger due to denominator. When I is large, quadratic in denominator and upper bound of f make ON small.
  • image p504fig13.30 OFF rebound occurs when the ON-input shuts off due to the imbalance that is caused by the ON input in the habituation of the transmitters in the ON and OFF channels. The relative sizes of ON responses and OFF rebounds is determined by the arousal level I.
    || OFF-rebound due to phasic input offset. Shut off J (Not I!). Then: S1 = f(I), S2 = f(I); y1 ~= A*B/(A+f(I+J)) < y2 ~= A*B/(A+f(I)) y1 and y2 are SLOW; T1 = S1*y1, T2 = S2*y2, T1 < T2;. OFF = T2 - T1 = A*B*f(I)*(f(I+J) - f(I)) / (A+f(I)) / (A + f(I+J)), Note Weber Law due to remembered previous input. Arousal sets sensitivity of rebound: OFF/ON = f(I)/A. Why is the rebound transient? Note equal f(I) inputs.
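    A quick numeric check of the formulas in the last two figures, taking a linear signal f(w) = w for transparency (an assumption): the slow gates keep their habituated values just after J shuts off, so the OFF channel transiently wins, with sensitivity OFF/ON = f(I)/A.

      A, B = 1.0, 1.0
      f = lambda w: w
      I, J = 0.5, 1.0
      y1 = A * B / (A + f(I + J))           # habituated ON-channel gate
      y2 = A * B / (A + f(I))               # fresher OFF-channel gate
      ON  = f(I + J) * y1 - f(I) * y2       # while J is on
      OFF = f(I) * y2 - f(I) * y1           # just after J shuts off, gates still slow
      print(round(ON, 4), round(OFF, 4), round(OFF / ON, 4), f(I) / A)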
  • image p504fig13.31 Behavioral contrast can occur during reinforcement learning due to decreases in either positive or negative reinforcers. See Figure 13.32 for illustrative operant conditioning data.
    || Behavioral contrast: rebounds! Shock level vs trials. 1. A sudden decrease in frequency or amount of food can act as a negative reinforcer: Frustration. 2. A sudden decrease in frequency or amount of shock can act as a positive reinforcer: Relief.
  • image p505fig13.32 Response suppression and the subsequent antagonist rebounds are both calibrated by the inducing shock levels.
    || Behavioral contrast (Reynolds 1968). Responses per minute (VI schedule) vs Trial shock level.
  • image p505fig13.33 An unexpected event can disconfirm ongoing processing by triggering a burst of nonspecific arousal that causes antagonistic rebounds in currently active gated dipoles, whether cognitive or affective.
    || Novelty reset: rebound to arousal onset. 1. Equilibrate to I and J: S1 = f(I+J); y1 = A*B/(A+S1); S2 = f(I); y2 = A*B/(A+S2);. 2. Keep phasic input J fixed; increase arousal I to I* = I + ∆I: (a) OFF reaction if T1 < T2; OFF = T2 - T1 = f(I*)*y2 - f(I*+J)*y1 = { A^2*B*(f(I*) - f(I*+J)) + A*B*(f(I*)*f(I+J) - f(I)*f(I*+J)) } / (A+f(I)) / (A+f(I+J)). 3. How to interpret this complicated equation?
  • image p506fig13.34 With a linear signal function, one can prove that the rebound increases with both the previous phasic input intensity J and the unexpectedness of the disconfirming event that caused the burst of nonspecific arousal.
    || Novelty reset: rebound to arousal onset.
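    A Python sketch of this claim (all constants assumed): with a linear signal f(w) = w, the OFF rebound reduces to A*B*J*(∆I - A) / ((A+I)*(A+I+J)), so it is zero until the arousal burst ∆I exceeds A and then grows with both the prior phasic input J and the burst size ∆I.

      A, B = 1.0, 1.0
      def rebound(I, J, dI):
          y1 = A * B / (A + I + J)          # ON gate equilibrated with J present
          y2 = A * B / (A + I)              # OFF gate equilibrated to I alone
          return max(0.0, (I + dI) * y2 - (I + dI + J) * y1)
      for J in [0.5, 1.0]:
          for dI in [0.5, 1.5, 3.0]:        # rebound needs dI > A, then grows
              print(J, dI, round(rebound(0.5, J, dI), 3))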
  • image p506fig13.35 A shock, or other reinforcing event, can have multiple cognitive and emotional effects on different brain processes.
    || Multiple functional roles of shock. 1. Reinforcement sign reversal: An isolated shock is a negative reinforcer; In certain contexts, a shock can be a positive reinforcer. 2. STM-LTM interaction: Prior shock levels need to be remembered (LTM) and used to calibrate the effect of the present shock (STM). 3. Discriminative and situational cues: The present shock level is unexpected (novel) with respect to the shock levels that have previously been contingent upon experimental cues: shock as a [1.reinforcer, 2. sensory cue, 3. expectancy].
  • image p509fig13.36 How can life-long learning occur without passive forgetting or associative saturation?
    || Associative learning. 1. Forgetting (eg remember childhood experiences): forgetting [is NOT passive, is Selective]; 2. Selective: larger memory capacity; 3. Problem: why doesn't associative memory saturate?
  • image p510fig13.37 A disconfirmed expectation can cause an antagonistic rebound that inhibits prior incentive motivational feedback, but by itself is insufficient to prevent associative saturation.
    || Learn on-response. 1. CS-> ON, disconfirmed expectation-> antagonistic rebound, OFF-channel is conditioned 2. CS-> [ON, OFF]-> net, zero net output. What about associative saturation?
  • image p510fig13.38 Dissociation of the read-out of previously learned adaptive weights, or LTM traces, and of the read-in of new weight values enables back-propagating dendritic action potentials to teach the new adaptive weight values.
    || Dissociation of LTM read-out and read-in. Backpropagating dendritic action potentials as teaching signals. 1. LTM: Dendritic spines (Rall 1960 ...
  • image p510fig13.39 Shunting competition and informational noise suppression in affective gated dipoles, plus back-propagating action potentials for teaching signals, enable the net normalized adaptive weights to be learned. They never saturate!
    || Learn net dipole output pattern. Opponent "decision" controls learning. Cf. competitive learning. Learning signal, opponent extinction.
  • image p512fig13.40 A conditioning paradigm that illustrates what it means for conditioned excitators to extinguish.
    || Conditioned excitor extinguishes. 1. Learning phase: CS1 bell-> US, CS1-> Fear(-). 2. Forgetting phase: CS1 bell-> Forgetting. 3. The expectation of shock is disconfirmed.
  • image p513fig13.41 A conditioning paradigm that illustrates what it means for conditioned inhibitors not to extinguish.
    || Conditioned inhibitor does not extinguish. 1. Learning phase: CS1 light-> shock, CS1-> Fear(-); Forgetting phase: n/a;. 2. Learning phase: CS1 light + CS2 bell-> no shock; CS2-> relief;. Forgetting phase: CS2 bell-> no forgetting. SAME CS could be used! SAME "teacher" in forgetting phase! Something else must be going on, or else causality would be violated!
  • image p513fig13.42 A conditioned excitor extinguishes because the expectation that was learned of a shock during the learning phase is disconfirmed during the forgetting phase.
    || Conditioned excitor extinguishes. Learning phase: CS1 bell-> US; CS1-> Fear(-); CS1-> shock; CS1 is conditioned to an expectation of shock. Forgetting phase: CS1 bell-> forgetting;. The expectation of shock is disconfirmed.
  • image p513fig13.43 A conditioned inhibitor does not extinguish because the expectation that was learned of no shock during the learning phase is not disconfirmed during the forgetting phase.
    || Conditioned inhibitor does not extinguish. 1. Learning phase: CS1 light-> Shock; CS1-> Fear(-);. Forgetting phase: n/a;. 2. Learning phase: CS1 light + CS2 bell-> NO shock; CS2-> relief(+); CS2-> no shock;. Forgetting phase: CS2 bell-> no forgetting;. The expectation that "no shock" follows CS2 is NOT disconfirmed!
  • image p514fig13.44 Analog of the CogEM model in Figure 6.1 of (Damasio 1999).
    || (a) map of object X-> map of proto-self at inaugural instant-> [, map of proto-self modified]-> assembly of second-order map. (b) map of object X enhanced-> second-order map imaged.
  • image p519fig14.01 Coronal sections of prefrontal cortex. Note particularly the areas 11, 13, 14, and 12o.
    ||
  • image p520fig14.02 Macrocircuit of the main brain regions, and connections between them, that are modelled in the unified predictive Adaptive Resonance Theory (pART) of cognitive-emotional and working memory dynamics. Abbreviations in red denote brain regions used in cognitive-emotional dynamics. Those in green denote brain regions used in working memory dynamics. Black abbreviations denote brain regions that carry out visual perception, learning and recognition of visual object categories, and motion perception, spatial representation and target tracking. Arrows denote non-adaptive excitatory synapses. Hemidiscs denote adaptive excitatory synapses. Many adaptive synapses are bidirectional, thereby supporting synchronous resonant dynamics among multiple cortical regions. The output signals from the basal ganglia that regulate reinforcement learning and gating of multiple cortical areas are not shown. Also not shown are output signals from cortical areas to motor responses. V1: striate, or primary, visual cortex; V2 and V4: areas of prestriate visual cortex; MT: Middle Temporal cortex; MST: Medial Superior Temporal area; ITp: posterior InferoTemporal cortex; ITa: anterior InferoTemporal cortex; PPC: Posterior Parietal Cortex; LIP: Lateral InterParietal area; VPA: Ventral PreArcuate gyrus; FEF: Frontal Eye Fields; PHC: ParaHippocampal Cortex; DLPFC: DorsoLateral PreFrontal Cortex; HIPPO: hippocampus; LH: Lateral Hypothalamus; BG: Basal Ganglia; AMYG: AMYGdala; OFC: OrbitoFrontal Cortex; PRC: PeriRhinal Cortex; VPS: Ventral bank of the Principal Sulcus; VLPFC: VentroLateral PreFrontal Cortex. See the text for further details.
    ||
  • image p523fig14.03 (a) The MOTIVATOR neural model generalizes CogEM by also including the basal ganglia. It can hereby explain and simulate complementary functions of the amygdala and basal ganglia (SNc) during conditioning and learned performance. The basal ganglia generate Now Print signals in response to unexpected rewards. These signals modulate learning of new associations in many brain regions. The amygdala supports motivated attention to trigger actions that are expected to occur in response to conditioned or unconditioned stimuli. Object Categories represent visual or gustatory inputs in anterior inferotemporal (ITA) and rhinal (RHIN) cortices, respectively. Value Categories represent the value of anticipated outcomes on the basis of hunger and satiety inputs, in amygdala (AMYG) and lateral hypothalamus (LH). Object-Value Categories resolve the value of competing perceptual stimuli in medial (MORB) and lateral (ORB) orbitofrontal cortex. The Reward Expectation Filter detects the omission or delivery of rewards using a circuit that spans ventral striatum (VS), ventral pallidum (VP), striosomal delay (SD) cells in the ventral striatum, the pedunculopontine nucleus (PPTN) and midbrain dopaminergic neurons of the substantia nigra pars compacta/ventral tegmental area (SNc/VTA). The circuit that processes CS-related visual information (ITA, AMYG, ORB) operates in parallel with a circuit that processes US-related visual and gustatory information (RHIN, AMYG, MORB). (b) Reciprocal adaptive connections between hypothalamus and amygdala enable amygdala cells to become learned value categories. The bottom region represents hypothalamic cells, which receive converging taste and metabolite inputs whereby they become taste-drive cells. Bottom-up signals from activity patterns across these cells activate competing value categories, or US Value Representations, in the amygdala. A winning value category learns to respond selectively to specific combinations of taste-drive activity patterns and sends adaptive top-down priming signals back to the taste-drive cells that activated it. CS-activated conditioned reinforcer signals are also associatively linked to value categories. Adaptive connections end in (approximately) hemidiscs. See the text for details.
    ||
  • image p524fig14.04 (a) Model basal ganglia circuit for the control of dopaminergic Now Print signals from the substantia nigra pars compacta, or SNc, in response to unexpected rewards. Cortical inputs (Ii), activated by conditioned stimuli, learn to excite the SNc via a multi-stage pathway from the ventral striatum (S) to the ventral pallidum and then on to the PPTN (P) and the SNc (D). The inputs Ii excite the ventral striatum via adaptive weights W_IS, and the PPTN excites the SNc with strength W_PD. The striosomes, which contain an adaptive spectral timing mechanism [xij, Gij, Yij, Zij], learn to generate adaptively timed signals that inhibit reward-related activation of the SNc. Primary reward signals (I_R) from the lateral hypothalamus both excite the PPTN directly (with strength W_RP) and act as training signals to the ventral striatum S (with strength W_RS) that trains the weights W_IS. Arrowheads denote excitatory pathways, circles denote inhibitory pathways, and hemidiscs denote synapses at which learning occurs. Thick pathways denote dopaminergic signals.
    ||
  • image p530fig14.05 Displays used by (Buschman, Miller 2007) in their visual search experiments. See the text for details.
    || Fixation 500 ms-> Sample 1000 ms-> Delay 500 ms-> Visual [pop-out, search]- reaction time.
  • image p531fig14.06 Classification of scenic properties as texture categories by the ARTSCENE model. See the text for details.
    || Image-> Feature extraction (texture principal component rankings)-> Learning feature-to-scene mapping (texture category principal component rankings) <- scene class. Large-to-small attentional shrouds as principal component rank increases.
  • image p531fig14.07 Voting in the ARTSCENE model achieves even better prediction of scene type. See the text for details.
    || Image-> Feature extraction (texture principal component rankings)-> Learning feature-to-scene mapping (texture category principal component rankings)-> evidence accumulation (sum)-> scene class winner-take-all inference. Large-to-small attentional shrouds as principal component rank increases.
  • image p532fig14.08 Macrocircuit of the ARTSCENE Search neural model for learning to search for desired objects by using the sequences of already experienced objects and their locations to predict what and where the desired object is. V1 = First visual area or primary visual cortex; V2 = Second visual area; V4 = Fourth visual area; PPC = Posterior Parietal Cortex; ITp = posterior InferoTemporal cortex; ITa = anterior InferoTemporal cortex; MTL = Medial Temporal Lobe; PHC = ParaHippoCampal cortex; PRC = PeriRhinal Cortex; PFC = PreFrontal Cortex; DLPFC = DorsoLateral PreFrontal Cortex; VPFC = Ventral PFC; SC = Superior Colliculus.
    ||
  • image p533fig14.09 Search data and ARTSCENE Search simulations of them in each pair of images from (A) to (F). See the text for details.
    || 6*[data vs simulation], [Response time (ms) versus epoch].
  • image p540fig15.01 The timing of CS and US inputs in the delay and trace conditioning paradigms.
    || Delay and trace conditioning paradigms. [CS, US] vs [Delay, Trace]. To perform an adaptively timed CR, trace conditioning requires a CS memory trace over the Inter-Stimulus Interval (ISI).
  • image p541fig15.02 The neurotrophic Spectrally Timed Adaptive Resonance Theory, or nSTART, model of (Franklin, Grossberg 2017) includes hippocampus to enable adaptively timed learning that can bridge a trace conditioning gap, or other temporal gap between CS and US.
    || Hippocampus can sustain a Cognitive-Emotional resonance: that can support "the feeling of what happens" and knowing what event caused that feeling. [CS, US]-> Sensory Cortex (SC) <- motivational attention <-> category learning-> Prefrontal Cortex (PFC). SC conditioned reinforcement learning-> Amygdala (cannot bridge the temporal gap) incentive motivational learning-> PFC. SC adaptively timed learning and BDNF-> Hippocampus (can bridge the temporal gap) BDNF-> PFC. PFC adaptively timed motor learning-> cerebellum.
  • image p541fig15.03 Stages in the processing of adaptively timed conditioning, leading to timed responses in (d) that exhibit both individual Weber laws and an inverted U in conditioning as a function of ISI. See the text for details.
    || Curves of [Response vs ISI].
  • image p542fig15.04 Conditioning data from (Smith 1968; Millenson etal 1977). The former shows the kind of Weber Law and inverted U that were simulated in Figure 15.3. The latter shows that, if there are two ISIs during an experiment, then the animals learn to adaptively time their responses with two properly scaled Weber laws.
    || (left) One ISI (Smith 1968) [mean membrane extension (mm) versus time after CS onset (msec)]. (right) Two ISIs (Millenson etal 1977) [200, 100] msec CS test trials, [mean momentary CR amplitude (mm) vs time after CS onset (msec)]. (bottom) Conditioned eye blinks, made with nictitating membrane and/or eyelid, are adaptively timed: peak closure occurs at expected time(s) of arrival of the US following the CS and obeys a Weber Law.
  • image p543fig15.05 Simulation of conditioning with two ISIs that generate their own Weber Laws, as in the data shown in Figure 15.4.
    || Learning with two ISIs: simulation: R = sum[i: f(xi)*yi*zi] vs msec. Each peak obeys Weber Law! Strong evidence for spectral learning.
  • image p543fig15.06 The circuit between dentate granule cells and CA1 hippocampal pyramid cells seems to compute spectrally timed responses. See the text for details.
    || Hippocampal interpretation. 1. Dentate granule cells (Berger, Berry, Thompson 1986): "increasing firing...in the CS period...the latency...was constant". 2. Pyramidal cells: "Temporal model" Dentate granule cells-> CA3 pyramids. 3. Convergence (Squire etal 1989): 1e6 granule cells, 1.6e5 CA3 pyramids. 80-to-1 (ri).
  • image p544fig15.07 In response to a step CS and sustained storage by I_CS of that input, a spectrum of responses xi at different rates ri develops through time.
    || Spectral timing: activation. CS-> I_CS-> All xi. STM sensory representation. Spectral activation d[dt: xi] = ri*[-A*xi + (1 - B*xi)*I_CS].
  • image p544fig15.08 The spectral activities xi generate sigmoid signals f(xi) before the signals are, in turn, gated by habituative transmitters yi.
    || Habituative transmitter gate.
  • image p544fig15.09 As always, the habituative transmitter gate yi increases in response to accumulation and decreases due to gated inactivation, leading to the kinds of transmitter and output responses in the right hand column.
    || Habituative transmitter gate (Grossberg 1968). 1. d[dt: yi] = C*(1-yi) - D*f(xi)*yi, C-term: accumulation, D-term: gated inactivation. 2. Sigmoid signal f(xi) = xi^n / (B^n + xi^n). 3. Gated output signal f(xi)*yi.
  • image p545fig15.10 When the activity spectrum xi generates a spectrum of sigmoidal signals f(xi), the corresponding transmitters habituate at different rates. The output signals f(xi)*yi therefore generate a series of unimodal activity profiles that peak at different times, as in Figure 15.3a.
    || A timed spectrum of sampling intervals. [f(xi) activation, yi habituation, f(xi)*yi gated sampling] spectra. gated = sampling intervals.
  • image p545fig15.11 The adaptive weight, or LTM trace , zi learns from the US input I_US at times when the sampling signal f(xi)*yi is on. It then gates the habituative sampling signal f(xi)*yi to generate a doubly gated response f(xi)*yi*zi.
    || Associative learning, gated steepest descent learning (Grossberg 1969). d[dt: zi] = E*f(xi)*yi*[-zi + I_US], E-term read-out of CS gated signal, []-term read-out of US. Output from each population: f(xi)*yi*zi doubly gated signal.
  • image p546fig15.12 The adaptive weights zi in the spectrum that learn fastest are those whose sampling signals are large when the US occurs, as illustrated by the green region in this simulation of (Grossberg, Schmajuk 1989).
    || Computer simulation of spectral learning. (left) fast (right) slow. Constant ISI: 6 cells fast to slow, 4 learning trials, 1 test trial.
  • image p546fig15.13 The total learned response is a sum R of all the doubly gated signals in the spectrum.
    || Adaptive timing is a population property. Total output signal: R = sum[i: f(xi)*yi*zi]. Adaptive timing is a collective property of the circuit. "Random" spectrum of rates achieves good collective timing.
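    A runnable Python sketch of the spectral-timing equations collected in the last few figures (all rates, thresholds, learning constants, and the US window are assumed values chosen for illustration, not published fits): a spectrum of rates ri, habituative gates yi, and gated steepest-descent weights zi yields a summed read-out R(t) that is small at CS onset and peaks hundreds of msec later, near the trained ISI.

      # spectrum of rates (per msec); each cell has activation x, gate y, weight z
      rates = [0.0002 * (i + 1) for i in range(20)]
      f = lambda x: x * x / (0.25 ** 2 + x * x)        # sigmoid signal f(xi)

      def trial(z, learn, ISI=400, T=1000):
          x = [0.0] * len(rates)
          y = [1.0] * len(rates)
          R = []
          for t in range(T):                           # dt = 1 msec
              I_US = 1.0 if (learn and ISI <= t < ISI + 50) else 0.0
              gated = [f(xi) * yi for xi, yi in zip(x, y)]
              R.append(sum(g * zi for g, zi in zip(gated, z)))
              for i, r in enumerate(rates):
                  x[i] += r * (-x[i] + (1.0 - x[i]))               # I_CS = 1 stored
                  if learn:
                      z[i] += 0.1 * gated[i] * (I_US - z[i])       # sample US when gate open
                  y[i] += 0.0005 * (1.0 - y[i]) - 0.01 * f(x[i]) * y[i]
          return R

      z = [0.0] * len(rates)
      trial(z, learn=True)                  # one conditioning trial at ISI = 400 msec
      R = trial(z, learn=False)             # test trial: read-out only
      print(R.index(max(R)))                # collective response peaks near the ISI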
  • image p547fig15.14 An individual
  • image p547fig15.15 Expected non-occurrences do not prevent the processing of sensory events and their expectations. Rather, they prevent mismatches of those expectations from triggering orienting reactions.
    || Expected non-occurrence of goal. Some rewards are reliable but delayed in time. Does not lead to orienting reactions: How? Both expected and unexpected nonoccurrences are due to mismatch of a sensory event with learned expectations. Expected non-occurrences do not inhibit sensory matching: eg a pigeon can see an earlier-than-usual food pellet. Hypothesis: Expected non-occurrences inhibit the process whereby sensory mismatch activates orienting reactions. Mismatch not-> orient.
  • image p548fig15.16 Homologous recognition learning and reinforcement learning macrocircuits enable adaptively timed conditioning in the reinforcement learning circuit to increase inhibition of the orienting system at times when a mismatch in the recognition system would have reduced inhibition of it.
    || Homology between ART and CogEM model, complementary systems. [Recognition, Reinforcement] learning vs [Attentional, Orienting] system. Reinforcement: timing, drive representation.
  • image p548fig15.17 The timing paradox asks how inhibition of an orienting response (-) can be spread throughout the ISI, yet accurately timed responding can be excited (+) at the end of the ISI.
    || Timing paradox. [CS light, US shock] vs t. ISI = InterStimulus Interval = expected delay of reinforcer. Want timing to be accurate. Want to inhibit exploratory behaviour throughout the ISI.
  • image p549fig15.18 The Weber Law solves the timing paradox by creating an adaptively timed response throughout the ISI that peaks at the ISI. Within the reinforcement learning circuit, this response can maintain inhibition of the orienting system A at the same time as it generates adaptively timed incentive motivation to the orbitofrontal cortex.
    || Weber Law: reconciling accurate and distributed timing. Resolution: Output can inhibit orienting, peak response probability. What about different ISIs? Standard deviation = peak time. Weber law rule.
  • image p549fig15.19 How the adaptively timed hippocampal spectrum T inhibits (red arrow) the orienting system A as motivated attention in orbitofrontal cortex Si(2) peaks at the ISI.
    || Conditioning, Attention, and Timing circuit. Hippocampus spectrum-> Amygdala orienting system-> neocortex motivational attention. Adaptive timing inhibits orienting system and maintains adaptively timed Motivated Attention on the CS.
  • image p550fig15.20 Adaptively timed conditioning of Long Term Depression, or LTD, occurs in the cerebellum at synapses between parallel fibres and Purkinje cells, thereby reducing inhibition of subcortical nucleus cells and enabling them to express their learned movement gains within the learned time interval. Also see Figure 15.21.
    || [CS-Activated input pathways parallel fibres, US-Activated climbing fibres]-> [Subcortical nucleus (gain control), Cerebellar cortex- Purkinje cells (timing)].
  • image p551fig15.21 The most important cell types and circuitry of the cerebellum: Purkinje cells (PC) receive excitatory inputs from the climbing fibres (CF) that originate in the inferior olive (IO) and from parallel fibres (PF), which are the axons of granule cells (GC). GCs, in turn, receive inputs from the mossy fibres (MF) coming from the precerebellar nuclei (PCN). The PF also inhibit PC via basket cells (BC), thereby helping to select the most highly activated PC. The PC generate inhibitory outputs from the cerebellar cortex to the deep cerebellar nuclei (DCN), as in Figure 15.20. Excitatory signals are denoted by (+) and inhibitory signals by (-). Other notations: GL- granular layer; GoC- Golgi cells; ML- molecular layer; PCL- Purkinje cell layer; SC- stellate cell; WM- white matter.
    ||
  • image p551fig15.22 Responses of a retinal cone in the turtle retina to brief flashes of light of increasing intensity.
    || response vs msec.
  • image p552fig15.23 Cerebellar biochemistry that supports the hypothesis of how mGluR supports adaptively timed conditioning at cerebellar Purkinje cells. AMPA, amino-3-hydroxy-5-methyl-4-isoxazole propionic acid-sensitive glutamate receptor; cGMP, cyclic guanosine monophosphate; DAG, diacylglycerol; glu, glutamate; GC, guanylyl cyclase; gK, Ca2+-dependent K+ channel protein; GTP, guanosine triphosphate; IP3, inositol trisphosphate ...
  • image p556fig15.24 (a) Data showing normally timed responding (solid curve) and short latency responses after lesioning cerebellar cortex (dashed curve). (b) computer simulation of short latency response after ablation of model cerebellar cortex.
    ||
  • image p557fig15.25 Computer simulations of (a) adaptively timed long term depression at Purkinje cells, and (b) adaptively timed activation of cereballar nuclear cells.
    || response vs time (msec)
  • image p557fig15.26 Brain regions and processes that contribute to autistic behavioral symptoms when they become imbalanced in prescribed ways.
    || Basal Ganglia prolonged gate opening <-> { Amygdala emotionally depressed-> [hippocampus- hyperspecific learning; Cerebellum- adaptive timing fails; hypofrontal blocking fails, no Theory of Mind]-> Neocortex; Neocortex- rewards not received-> Amygdala}.
  • image p559fig15.27 Brain regions and processes that contribute to the release of dopaminergic Now Print signals by the substantia nigra pars compacta, or SNc, in response to unexpected reinforcing events. See the text for details.
    || Model of spectrally timed SNc learning (Brown, Bullock, Grossberg 1999). Delayed inhibitory expectations of reward. Dopamine cells signal an error in reward prediction timing or magnitude. Immediate excitatory predictions of reward. Lateral hypothalamus (Primary Reward Input)-> [(+)ventral striatum <-> ventral pallidum (+)-> PPTN(+)-> SNc]. SNc-> [dopamine signal-> ventral striatum, Striosomal cells]. Conditioned Stimuli (CS)(+)-> [ventral striatum, striosomal cells]. Striosomal cells(-)-> SNc.
  • image p559fig15.28 Neurophysiological data (left column) and model simulations (right column) of SNc responses. See the text for details.
    || membrane potential vs time
  • image p560fig15.29 Excitatory pathways that support activation of the SNc by a US and the conditioning of a CS to the US.
    || Excitatory pathway. Primary reward (apple juice) briefly excites lateral hypothalamus. Hypothalamic-PPTN excitation causes SNc dopamine burst. Hypothalamic activity excites ventral striatum for training. Active CS working memory signals learn to excite ventral striatum. Lateral hypothalamus (Primary Reward Input)-> [(+)ventral striatum <-> ventral pallidum(+)-> PPTN(+)-> SNc]. SNc-> [dopamine signal-> ventral striatum]. Conditioned Stimuli working memory trace (CS)(+)-> ventral striatum.
  • image p560fig15.30 The inhibitory pathway from striosomal cells to the SNc is able to inhibit the SNc when a reward occurs with expected timing and magnitude.
    || Inhibitory pathway. Learning: CS-striosomal LTP occurs due to a three-way coincidence [An active CS working memory input, a Ca2+ spike, a dopamine burst]; Signaling: The delayed Ca2+ spike facilitates striosomal-SNc inhibition;. Striosomal cells learn to predict both timing and magnitude of reward signal to cancel it: reward expectation;. Conditioned stimuli (CS) LTP-> Striosomal cells <- dopamine | (-)-> SNc->.
  • image p561fig15.31 The CS activates a population of striosomal cells that respond with different delays in order to enable adaptively timed inhibition of the SNc.
    || Expectation timing (Fiala, Grossberg, Bullock 1996; Grossberg, Merrill 1992, 1996; Grossberg, Schmajuk 1989). How do cells bridge hundreds of milliseconds? Timing spectrum (msec). 1. CS activates a population of cells with delayed transient signals: mGluR. 2. Each has a different delay, so that the range of delays covers the entire interval. 3. Delayed transients gate both learning and read-out of expectations.
  • image p561fig15.32 The SNc can generate both dopamine bursts and dips in response to rewards whose amplitude is unexpectedly large or small.
    || Inhibitory pathway: expectation magnitude. 1. If reward is greater than expected, a dopamine burst causes striosomal expectation to increase. 2. If reward is less than expected, a dopamine dip causes striosomal expectation to decrease. 3. This is a negative feedback control system for learning. Conditioned stimuli (CS)-> Striosomal cells <- dopamine | (-)-> SNc->.
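    A deliberately scalar Python caricature of this negative-feedback logic (not the spectrally timed circuit; learning rate and reward values are assumptions): the striosomal expectation w is trained by the dopamine residual, producing bursts when reward exceeds expectation and dips when it falls short.

      def trials(rewards, lr=0.3):
          w = 0.0                        # striosomal expectation of reward magnitude
          for r in rewards:
              dopamine = r - w           # burst if r > w, dip if r < w
              w += lr * dopamine         # expectation tracks reward: negative feedback
              print(round(r, 2), round(dopamine, 2), round(w, 2))

      trials([1.0] * 5 + [2.0] * 3 + [0.5] * 3)   # larger-than-expected, then smaller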
  • image p563fig15.33 The basal ganglia gate neural processing in many parts of the brain. The feedback loop through the lateral orbitofrontal cortex (blue arrow, lateral orbitofrontal) is the one that MOTIVATOR models.
    || MOTIVATOR models one of several thalamocortical loops through basal ganglia (Adapted from Fundamental Neuroscience. 2002 Copyright Elsevier). [cortex-> striatum-> pallidum S. nigra-> thalamus] vs [motor, oculomotor, dorsolateral prefrontal, lateral orbitofrontal, anterior cingulate]. thalamus-> [striatum, cortex].
  • image p563fig15.34 The colored regions are distinct parts of the basal ganglia in the loops depicted in Figure 15.33.
    || Distinct basal ganglia zones for each loop (Adapted from Fundamental Neuroscience. 2002 Copyright Elsevier).
  • image p564fig15.35 (a) A pair of recurrent shunting on-center off-surround networks for control of the fore limbs and hind limbs. (b) Varying the GO signal to these networks can trigger changes in movement gaits. See the text for details.
    ||
  • image p565fig15.36 (a) The FOVEATE model circuit for the control of saccadic eye movements within the peri-pontine reticular formation. (b) A simulated saccade staircase. See the text for details.
    || [left, right] eye FOVEATE model. [vertical vs horizontal] position (deg).
  • image p566fig15.37 Steps in the FOVEATE model
  • image p567fig15.38 (a) The Gated Pacemaker model for the control of circadian rhythms is a recurrent shunting on-center off-surround network whose excitatory feedback signals are gated by habituative transmitters. Tonic arousal signals energize the pacemaker. Diurnal (left) and nocturnal (right) pacemakers are determined by whether phasic light signals turn the pacemaker on or off. An activity-dependent fatigue signal prevents the pacemaker from becoming overly active for too long. (b) Two simulations of circadian activity cycles during different schedules of light (L) and dark (D). See the text for details.
    || sourceOn-> on-cells (recurrent) <-(-) (-)> off-cells (recurrent) <-sourceOff. on-cells-> activity-> off-cells. off-cells-> fatigue. Diurnal: sourceOn=[light, arousal]; sourceOff=arousal;. Nocturnal: sourceOn=arousal; sourceOff=[arousal, light];.
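    A relaxation-oscillator caricature of this pacemaker in Python (an assumption-level sketch, not the published equations): the active cell's habituative transmitter slowly runs down while the inactive cell's recovers, and control flips when the gates cross a hysteresis margin, giving sustained alternation like an activity/rest cycle.

      y = {"on": 1.0, "off": 0.5}       # transmitter gates of on- and off-cells
      active, switches = "on", []
      for t in range(20000):            # arbitrary time steps
          other = "off" if active == "on" else "on"
          y[active] += 0.002 * (1 - y[active]) - 0.02 * y[active]   # rundown while active
          y[other]  += 0.002 * (1 - y[other])                       # recovery while silent
          if y[active] < y[other] - 0.2:                            # hysteresis: winner flips
              switches.append(t)
              active = other
      print(switches[:8])               # roughly periodic alternation of on/off cells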
  • image p568fig15.39 Circuits of the MOTIVATOR model that show hypothalamic gated dipoles.
    || inputs-> [object, value] categories-> object-value categories-> [reward expectation filter, [FEF, EAT] outputs]. reward expectation filter [DA dip, arousal burst]-> alpha1 non-specific arousal-> value categories. Msi drive inputs-> value categories.
  • image p569fig15.40 The direct and indirect basal ganglia circuits that control GO and STOP movement signals. See the text for details.
    || [Direct path GO(+), Indirect path STOP(+), dopamine from SNc(+-)]-> striatum. GO-> GPi/SNr-> Thalamus (VA/Vlo) <-> frontal cortex. STOP-> GPe <-> STN-> GPi/SNr. NAc-> GPi/SNr.
  • image p573fig16.01 The experimental chamber (A) and neurophysiological recordings from a rat hippocampus (B) that led to the discovery of place cells. See the text for details.
    ||
  • image p574fig16.02 Neurophysiological recordings of 18 different place cell receptive fields. See the text for details.
    ||
  • image p575fig16.03 As a rat navigates in its experimental chamber (black curves), neurophysiological recordings disclose the firing patterns (in red) of (a) a hippocampal place cell and (b) an entorhinal grid cell.
    ||
  • image p578fig16.04 Cross-sections of the hippocampal regions and the inputs to them. See the text for details.
    || EC-> CA1-> CA3-> DG. Layers [V/VI, II, III].
  • image p580fig16.05 Macrocircuit of the GridPlaceMap model, which can learn both 2D grid cells and place cells in response to realistic trajectories of navigating rats using a hierarchy of SOMs with identical equations.
    || GridPlaceMap model: rate-based and spiking (Pilly, Grossberg 2012). Pre-wired 1D stripe cells, learns both 2D grid and place cells! Same laws for both; both select most frequent and energetic inputs. Place cells emerge gradually in response to developing grid cells. [place-> grid-> stripe] cells-> path integration-> vestibular signals
  • image p581fig16.06 The learning of hexagonal grid cell receptive fields as an animal navigates an open field is a natural consequence of simple trigonometric properties of the positions at which the firing of stripe cells that are tuned to different directions will co-occur.
    || The Trigonometry of spatial navigation. Coactivation of stripe cells.
  • image p582fig16.07 Stripe cells were predicted in (Mhatre, Gorchetchnikov, Grossberg 2012) to convert linear velocity signals into the distances travelled in particular directions. They are modeled by directionally-sensitive ring attractors, which help to explain their periodic activation as an animal continues to move in a given direction. See the text for details.
    || Stripe cells. Stripe cells are predicted to exist in (or no later than) EC layer (III, V/VI). Linear path integrators: represent distance traveled using linear velocity modulated with head direction signal. Ring attractor circuit: the activity bump represents distance traveled, stripe cells with same spatial period and directional preference fire with different spatial phases at different ring positions. Distance is computed directly, it does not require decoding by oscillatory interference. Periodic stripe cell activation due to ring anatomy: periodic boundary conditions. Stripe firing fields with multiple orientations, phases and scales.
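    A toy illustration of the ring-attractor idea above (Python), assuming the bump position simply integrates the velocity component along the cell's preferred direction, as a stand-in for the full attractor dynamics:
        import numpy as np
        n_ring, period = 60, 40.0              # ring cells; cm of travel per lap
        pref = np.array([1.0, 0.0])            # preferred direction (unit vector)
        dt, vel = 0.1, np.array([10.0, 0.0])   # s; rat runs 10 cm/s along x
        phase, fired_at = 0.0, []
        for step in range(1, 1001):
            phase += (vel @ pref) * dt / period          # fraction of a lap travelled
            bump = int(round(phase * n_ring)) % n_ring   # bump position on the ring
            if bump == 0:                                # read out one ring cell
                fired_at.append(step * dt * np.linalg.norm(vel))
        print("firing distances (cm):", fired_at[:5])    # 40, 80, 120, ...
        # periodic ring anatomy -> evenly spaced, stripe-like firing fields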
  • image p582fig16.08 Some experimental evidence for stripe-like cell receptive fields has been reported. The band cells posited by Neil Burgess also exhibit the one-dimensional firing symmetry of stripe cells, but are modeled by oscillatory interference. See the text for details.
    || Evidence for stripe-like cells. Entorhinal cortex data (Sargolini, Fyhn, Hafting, McNaughton, Witter, Moser, Moser 2006; Krupic, Burgess, O'Keefe 2012).
  • image p583fig16.09 The GRIDSmap model used algorithmically defined stripe cells to process realistic rat trajectories. The stripe cell outputs then formed inputs to the adaptive filter of a self-organizing map which learned hexagonal grid cell receptive fields.
    || GRIDSmap. Self-organizing map receives inputs from stripe cells and learns to respond to most frequent co-activation patterns. Stripe cells combine speed and head direction to create a periodic 1D position code. Virtual rat navigated using live rat trajectories from Moser Lab. Speed and head direction drives stripe cells.
  • image p583fig16.10 The GRIDSmap model is embedded into a more complete representation of the processing stages from receipt of angular head velocity and linear velocity signals to this learning of place cells.
    || GRIDSmap. Pre-wired 2D stripe cells, learns 2D grid cells. vestibular cells [angular head velocity-> head direction cells, linear velocity]-> stripe cells- small scale 1D periodic spatial code (ECIII)-> SOM grid cells entorhinal cortex- small scale 2D periodic spatial scale-> SOM place cells hippocampal cortex- large scale 2D spatial code (dentate/CA3). Unified hierarchy of SOMs.
  • image p584fig16.11 GRIDSmap simulation of the learning of hexagonal grid fields. See the text for details.
    || Simulation results. Multiple phases per scale. response vs length scale (0.5m+).
  • image p584fig16.12 Temporal development of grid cell receptive fields on successive learning trials (1,3,5,7,25,50,75,100).
    || Temporal development of grid fields. Cells begin to exhibit grid structure by 3rd trial. Orientations of the emergent grid rotate to align with each other over trials.
  • image p585fig16.13 Hexagonal grid cell receptive fields develop if their stripe cell directional preferences are separated by 7, 10, 15, 20, or random numbers of degrees. The number and directional selectivities of stripe cells can thus be chosen within broad limits without undermining grid cell development.
    ||
  • image p585fig16.14 Superimposing firing of stripe cells whose directional preferences differ by 60 degrees supports learning hexagonal grid cell receptive fields in GRIDSmap.
    || GRIDSmap: from stripe cells to grid cells. Grid-cell Regularity from Integrated Distance through Self-organizing map. Superimposing firing of stripe cells oriented at intervals of 60 degrees. Hexagonal grid!
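    The 60-degree coactivation geometry can be checked directly (Python): summing three periodic stripe fields whose preferred directions differ by 60 degrees produces coactivation peaks on a hexagonal lattice. The stripe spacing and arena size below are arbitrary choices, not GRIDSmap parameters:
        import numpy as np
        spacing = 30.0                            # stripe period (cm), assumed
        angles = np.deg2rad([0, 60, 120])         # directions 60 degrees apart
        xs = np.linspace(0, 120, 241)             # 120 cm x 120 cm arena
        X, Y = np.meshgrid(xs, xs)
        total = np.zeros_like(X)
        for a in angles:
            proj = X * np.cos(a) + Y * np.sin(a)          # position along direction a
            total += np.cos(2 * np.pi * proj / spacing)   # periodic stripe firing
        rows, cols = np.nonzero(total > 2.7)              # near-maximal coactivation
        print("sample coactivation peaks (cm):", list(zip(xs[cols], xs[rows]))[:4])
        # the peaks form a hexagonal (triangular) lattice - the favored grid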
  • image p586fig16.15 Superimposing stripe cells oriented by 45 degrees does not lead to learning of rectangular grids in GRIDSmap, but it does in an oscillatory interference model.
    || Why is a hexagonal grid favored? Superimposing firing of stripe cells oriented at intervals of 45 degrees. Rectangular grid. This and many other possibilities do not happen in vivo. They do happen in the oscillatory interference model. How are they prevented in GRIDSmap?
  • image p586fig16.16 In the place cell learning model of (Gorchetchnikov, Grossberg 2007), three populations of five cells each of entorhinal grid cells (only two are shown) with different spatial periods input to the model
  • image p587fig16.17 A finer analysis of the 2D trigonometry of spatial navigation showed that both the frequency and amplitude of coactivations by stripe cells determine the learning of hexagonal grid fields.
    || A refined analysis: SOM amplifies most frequent and energetic coactivations (Pilly, Grossberg 2012). [linear track, 2D environment]. (left) Stripe fields separated by 90°. 25 coactivations by 2 inputs. (right) Stripe fields separated by 60°. 23 coactivations by 3 inputs.
  • image p588fig16.18 Simulations of coordinated learning of grid cell receptive fields (second row) and unimodal place cell receptive fields (third row) by the hierarchy of SOMs in the GridPlaceMap model. Note the exquisite regularity of the hexagonal grid cell firing fields.
    || [stripe, grid, place] cells vs [spikes on trajectory, unsmoothed rate map, smoothed rate map].
  • image p589fig16.19 Neurophysiological data showing the smaller dorsal grid cell scales and the larger ventral grid cell scales.
    || Spatial scale of grid cells increases along the MEC dorsoventral axis (Hafting etal 2005; Sargolini etal 2006; Brun etal 2008). [dorsal (left), ventral (right)] cart [rate map, autocorrelogram]. How does the spatial scale increase along the MEC dorsoventral axis?
  • image p590fig16.20 Integration rate of grid cells decreases along the dorsoventral gradient of the Medial Entorhinal Cortex, or MEC.
    || Dorsoventral gradient in the rate of synaptic integration of MEC layer II stellate cells (Garden etal 2008). Cross-section of [Hp, CC, LEC, MEC]. (A left column) [dorsal, ventral] mV? vs msec. (B center column) [half width (ms), rise time (ms), amplitude (mV)] vs location (μm). (C right upper) responses (D right lower) width (ms) vs location (μm).
  • image p590fig16.21 Frequency of membrane potential oscillations in grid cells decreases along the dorsoventral gradient of the MEC.
    || Dorsoventral gradient in the frequency of membrane potential oscillations of MEC layer II stellate cells (Giocomo etal 2007). (C left column) Oscillation (Hz) vs distance from dorsal surface (mm). (D right upper) [dorsal, ventral] oscillations 5mV-500ms. (E right lower) [dorsal, ventral] oscillations 100ms. Both membrane potential oscillation frequency and resonance frequency decrease from the dorsal to ventral end of MEC.
  • image p591fig16.22 Time constants and duration of afterhyperpolarization currents of grid cells increase along the dorsoventral gradient of the MEC.
    || Dorsoventral gradient in afterhyperpolarization (AHP) kinetics of MEC layer II stellate cells (Navratilova etal 2012). [mAHP time constant (ms), Half-width (mm)] vs distance from the dorsal surface (mm), at [-55, -50, -45] mV. Time constants and duration of AHP increase from the dorsal to the ventral end of MEC layer II. Effectively, the relative refractory period is longer for ventral stellate cells in MEC layer II.
  • image p591fig16.23 The Spectral Spacing Model uses a rate gradient to learn a spatial gradient of grid cell receptive field sizes along the dorsoventral gradient of the MEC.
    || Spectral spacing model. Map cells responding to stripe cell inputs of multiple scales. Grid cells: MEC layer II (small scale 2D spatial code). Stripe cells: PaS / MEC deep layer (small scale 1D spatial code). Path Integration. Vestibular signals- linear velocity and angular head velocity. SOM. How do entorhinal cells solve the scale selection problem?
  • image p592fig16.24 Parameter settings in the Spectral Spacing Model that were used in simulations.
    || Simulation settings. Activity vs distance (cm). Learning trials: 40.
  • image p593fig16.25 Spectral Spacing Model STM, MTM, and LTM equations. The rate spectrum that determines the dorsoventral gradient of multiple grid cell properties is defined by μm.
    || Spectral Spacing Model equations. [STM, MTM, LTM]. μm = rate spectrum.
  • image p593fig16.26 Data (left column) and simulations (right column) of the gradient of increasing grid cell spacing along the dorsoventral axis of MEC.
    || Gradient of grid spacing along dorsoventral axis of MEC (Brun etal 2008). data-[Distance (m?), Median grid spacing (m?)] simulations-[Grid spacing (cm), Grid spacing (cm)] vs response rate.
  • image p594fig16.27 Data (left column) and simulations (right column) of the gradient of increasing grid cell field width along the dorsoventral axis of MEC.
    || Gradient of field width along dorsoventral axis of MEC (Brun etal 2008). data-[Distance (m?), Width autocorr peak (m?)] simulations-[Grid field width (cm), Width autocorr peak (cm)] vs response rate.
  • image p595fig16.28 Data (left column) and simulations (right column) about peak and mean grid cell response rates along the dorsoventral axis of MEC.
    || Peak and mean rates at different locations along DV axis of MEC (Brun etal 2008). Peak rate (Hz) vs [data- DV quarter, simulations- Response rate].
  • image p596fig16.29 Data (top row) and simulations (bottom row) showing decreasing frequency of subthreshold membrane potential oscillations along the DV axis of MEC.
    || Subthreshold membrane potential oscillations at different locations along DV axis of MEC (Giocomo etal 2007; Yoshida etal 2011). Data [oscillations (Hz) vs distance from dorsal surface (mm) @[-50, -45] mV, Frequency (Hz) vs [-58, -54, -50] mV]. Simulations MPO frequency (Hz) vs [response, habituation] rate.
  • image p596fig16.30 Data (top row) and simulations (bottom row) of spatial phases of learned grid and place cells.
    || Spatial phases of learned grid and place cells (Hafting etal 2005). Data: Cross-correlogram of rate maps of two grid cells; Distribution of phase difference: distance from origin to nearest peak in cross-correlogram. Simulations: Grid cell histogram of spatial correlation coefficients; Place cell histogram of spatial correlation coefficients.
  • image p597fig16.31 Data (a) and simulations (b-d) about multimodal place cell receptive fields in large spaces. The simulations are the result of learned place fields.
    || Multimodal place cell firing in large spaces (Fenton etal 2008; Henriksen etal 2010; Park etal 2011). Number of cells (%) vs Number of place fields. [2, 3] place fields, 100*100 cm space.
  • image p597fig16.32 Data (top row) and simulations (bottom row) about grid cell development in juvenile rats. Grid score increases (a-b and d), whereas grid spacing remains fairly flat (c and e).
    || Model fits data about grid cell development (Wills etal 2010; Langston etal 2010). Data: [Gridness, grid score, inter-field distance (cm)]. Simulations: [Gridness score, Grid spacing (cm)] vs trial.
  • image p598fig16.33 Data (top row) and simulations (bottom row) of changes in place cell properties in juvenile rats, notably about spatial information (a,c) and inter-trial stability (b,d).
    || Model fits data about grid cell development (Wills etal 2010). [Data, Simulation] vs [spatial information, inter-trial stability]. x-axis [age (postnatal day), trial].
  • image p598fig16.34 The spiking GridPlaceMap model generates theta-modulated place and grid cell firing, unlike the rate-based model.
    || Theta-modulated cells in spiking model. [place, grid] cell vs [membrane potential (mV vs time), frequency vs inter-spike intervals (s), power spectra (normalized power vs frequency (Hz))].
  • image p599fig16.35 Data (a) and simulations (b-d) about anatomically overlapping grid cell modules. (a) shows the anatomical distribution of grid cells belonging to different modules in one animal. DV location (mm) vs postrhinal border. (b) shows the simulated distribution of learned grid cell spacings from two stripe cell scales. frequency (%) vs grid spacing (cm). mu = [1, 0.6]. (c) shows what happens when half the cells respond with one rate and half another rate. (d) shows the same with three rates. (e-g) show spatial maps and autocorrelograms of grid cells that arise from the different rates in (d). [rate map, autocorrelogram] vs [score [1.07, 0.5, 0.67], spacing (cm) [23.58, 41, 63.64]].
    ||
  • image p600fig16.36 The entorhinal-hippocampal system has properties of an ART spatial category learning system, with hippocampal place cells as the spatial categories. See the text for details.
    || Entorhinal-hippocampal interactions as an ART system. Hippocampal place cells as spatial categories. Angular head velocity-> head direction cells-> stripe cells- small scale 1D periodic code (ECIII) SOM-> grid cells- small scale 2D periodic code (ECII) SOM-> place cells- larger scale spatial map (DG/CA3)-> place cells (CA1)-> conjunctive-coding cells (EC V/VI)-> top-down feedback back to stripe cells- small scale 1D periodic code (ECIII). stripe cells- small scale 1D periodic code (ECIII)-> place cells (CA1).
  • image p602fig16.37 Data showing the effect of hippocampal inactivation by muscimol on grid cell firing before, during, and six hours after the muscimol, reading from left to right.
    || Hippocampal inactivation disrupts grid cells (Bonnevie etal 2013). muscimol inactivation. spikes on trajectory: [before, after min [6-20, 20-40, 40-60, 6h]]. rate map (Hz) [18.6, 11.4, 9.5, 6.7, 10.8]. spatial autocorrelogram g=[1.12, 0.05, -0.34, 0.09, 1.27].
  • image p603fig16.38 Role of hippocampal feedback in maintaining grid fields. (a) Data showing the effect of hippocampal inactivation before and during muscimol inhibition of hippocampal cells, as in Figure 16.37. (b) Model simulation with normal grid fields. (c) Model simulation that emulates the effect of hippocampal inhibition on grid fields.
    || (a) Data: hippocampal inactivation [before, after] cart [spikes on trajectory (p: [18.6, 6.7] Hz), spatial autocorrelogram (g= [1.12, 0.09])]. (b) Model: noise-free path integration, [spikes on trajectory (p: 14.56 Hz), rate map, spatial autocorrelogram (g= 1.41), dynamic autocorrelogram (g=0.6)]. (c) Model: noisy path integration + non-specific tonic inhibition, [spikes on trajectory (p: 11.33 Hz), rate map, spatial autocorrelogram (g= 0.05), dynamic autocorrelogram (g=0.047)].
  • image p605fig16.39 Data showing effects of medial septum (MS) inactivation on grid cells and network theta oscillations in medial entorhinal cortex (MEC). (A) Examples of disruption in the spatial expression of the hexagonal grid structure for two grid cells (Brandon etal 2011). (B) Temporary reduction in the power and frequency of network theta oscillations (Koenig etal 2011). (C) Temporary reduction in the gridness score, mean firing rate, and spatial stability of grid cells (Koenig etal 2011).
    || Disruptive effects of Medial Septum inactivation in Medial Entorhinal Cortex (Brandon etal 2011; Koenig etal 2011). (A) Rate map [rate map, spatial autocorrelations, trajectory] vs [baseline, sub-sampled, medial septum inactivation, 3-6 hour recovery, 24 hour recovery], [rate map (Hz- m, p), spatial autocorrelations (gridness)][ 1.2, 7.2, 1.1; 0.25, 1.7, 0.6; 0.25, 2.5, -0.53; 0.7, 5.1, 0.55; 1.0, 5.3, 1.3; 2.1, 15, 0.19; 1.7, 12, 0.71; 1.7, 3.2, -0.22; 1.8, 9.1, 0.68; 2.5, 13, 0.46]. (B) [normalized power at 7-9 Hz, frequency (Hz)] vs 5-minute periods. (C) [mean gridness score (+-SEM), mean firing rate (% of baseline), mean correlation coeff (+-SEM)] vs 10-minute periods.
  • image p607fig16.40 Effects of medial septum (MS) inactivation on grid cells. (a) Each row shows data and different data-derived measures of grid cell responsiveness, starting from the left with the baseline response to the middle column with maximal inhibition. (b) Data showing the temporary reduction in the gridness scores during MS inactivation, followed by recovery. (c) Simulation of the collapse in gridness, achieved by reduction in cell response rates to mimic reduced cholinergic transmission. (d,e) Simulations of the reduction in gridness scores in (d) by reduction of cell response rates, in (e) by changing the leak conductance. See the text for details.
    ||
  • image p611fig16.41 How back-propagating action potentials, supplemented by recurrent inhibitory interneurons, control both learning within the synapses on the apical dendrites of winning pyramidal cells, and regulate a rhythm by which associative read-out is dissociated from read-in. See the text for details.
    ||
  • image p612fig16.42 Macrocircuit of the main SOVEREIGN subsystems.
    || [reward input, drive input, drive representation (DR), visual working memory and planning system (VWMPS), visual form and motion system (VFMS), motor approach and orienting system (MAOS), visual input (VisIn), motor working memory and planning system (MWMPS), motor approach and orienting system (MAOS), motor plant (MotP), Proprioceptive Input (PropIn), Vestibular Input (VesIn), Environmental feedback (EnvFB). DR [incentive motivational learning-> [VWMPS, MWMPS], -> VFMS, -> MAOS], VWMPS [conditioned reinforcer learning-> DR, MAOS], VFMS [visual object categories-> VWMPS, reactive movement commands-> MAOS], MWMPS [conditioned reinforcer learning-> DR, planned movement commands-> MAOS], MAOS [motor map positions-> MWMPS, motor outflow-> MotP], VisIn-> VFMS, VesIn-> MAOS, EnvFB-> [VisIn, MotP, VesIn].
  • image p613fig16.43 The main visual form and motion processing stream mechanisms of SOVEREIGN, many of them described at length in previous chapters.
    || Render 3-D scene (R3DS), figure-ground separation (FGS), log-polar transform (LPT), Gaussian coarse-coding (GCC), Invariant visual target map (IVTM), What Fuzzy ART (WhatFuzz), body spatial coordinates (BSC), where reactive visual TPV storage (WRVTS), Directional transient cell network (DTCN), Motion direction hemifield map (MDHM), Hemifield left/right scoring (HLRS), reactive visual control signal (RVCS), Parvo/Magno/Erg competition (PMEC), Approach and Orient GOp (AOGp), GOm (GOm). R3DS [parvo-> FGS, magno-> DTCN], FGS-> [LPT, WRVTS], LPT-> GCC-> IVTM-> WhatFuzz, BSC-> [RVTS, PMEC], PMEC-> [gateRVTS-> RVTS, gateRVCS-> RVCS], DTCN-> MDHM-> HLRS, HLRS-> [PMEC, RVCS], AOGp-> gateRVTS, GOm-> gateRVCS.
  • image p613fig16.44 The main target position vector (TPV), difference vector (DV), and volitional GO computations in SOVEREIGN that bring together reactive and planned signals to control decision-making and action. See the text for details.
    || Reactive visual TPV (RVT), NETs (NETs), S-MV mismatch (SMVM), NETmv (NETmv), reactive visual TPV storage (RVTS), reactive DV1 (RD1), NET (NET), motivated what and where decisions (MWWD), Planned DV1 (PD1), tonic (Tonic), top-down readout mismatch (TDRM), Parvo gate (tonic) (PG), Orienting GOp offset (OGpO). RVT-> [NETs, RVTS], NETs-> [SMVM, NET], SMVM-> NET, NETmv-> SMVM, RVTS-> [NETs, RD1], NET-> [RD1, PD1, TDRM], MWWD-> PD1, PD1-> Tonic-> TDRM, PG-> NETs, OGpO-> [NETmv, PD1].
  • image p614fig16.45 The main distance (d) and angle (a) computations that bring together and learn dimensionally-consistent visual and motor information whereby to make the currently best decisions and actions. See the text for details.
    || Reactive Visual TPV [m storage], NETm S-MV mismatch, MV mismatch, NETmv, PPVv, PPVm, Vestibular feedback, motor copy.
  • image p615fig16.46 SOVEREIGN uses homologous processing stages to model the (a) What cortical stream and the (b) Where cortical stream, including their cognitive working memories and chunking networks, and their modulation by motivational mechanisms. See the text for details.
    ||
  • image p615fig16.47 SOVEREIGN models how multiple READ circuits, operating in parallel in response to multiple internal drive sources, can be coordinated to realize a sensory-drive heterarchy that can maximally amplify the currently most motivationally favored option.
    ||
  • image p616fig16.48 SOVEREIGN was tested using a virtual reality 3D rendering of a cross maze (a) with different visual cues at the end of each corridor.
    ||
  • image p616fig16.49 The animat learned to convert (a) inefficient exploration of the maze into (b) an efficient direct learned path to the goal.
    ||
  • image p617fig16.50 The perirhinal and parahippocampal cortices enable adaptively timed reinforcement learning and spatial navigational processes that are modeled by Spectral Spacing models in the What and Where cortical streams, respectively, to be fused in the hippocampus.
    || What and Where inputs to the hippocampus (Diana, Yonelinas, Ranganath 2007). Adaptively timed conditioning and spatial navigation. Hippocampus <-> Entorhinal Cortex <-> [Perirhinal Cortex <-> what, Parahippocampal Cortex <-> where].
  • image p627tbl17.01 Homologs between reaction-diffusion and recurrent shunting cellular network models of development.
    || byRows: (reaction-diffusion, recurrent shunting net) (activator, excitatory activity) (inhibitor, inhibitory activity) (morphogenic source density, inputs) (firing of morphogen gradient, contrast enhancement) (maintenance of morphogen gradient, short-term memory) (power or sigmoidal signal functions, power or sigmoidal signal functions) (on-center off-surround interactions via diffusion, on-center off-surround interactions via signals) (self-stabilizing distributions of morphogens if inhibitors equilibrate rapidly, short-term memory pattern if inhibitors equilibrate rapidly) (periodic pulses if inhibitors equilibrate slowly, periodic pulses if inhibitors equilibrate slowly) (regulation, adaptation).
  • image p628fig17.01 A hydra
    ||
  • image p628fig17.02 Schematics of how different cuts and grafts of the normal Hydra in (a) may (*) or may not lead to the growth of a new head. See the text for details.
    ||
  • image p629fig17.03 How an initial morphogenetic gradient may be contrast enhanced to exceed the threshold for head formation in its most active region.
    || head formation threshold, final gradient, initial gradient.
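    A sketch of how such contrast enhancement can arise (Python), recast as a recurrent shunting on-center off-surround network with the faster-than-linear signal f(w) = w^2 (Grossberg 1973); the parameters are illustrative, not fitted to hydra data:
        import numpy as np
        A, B, dt = 0.01, 1.0, 0.01
        x = np.array([0.2, 0.3, 0.4, 0.5])   # shallow initial morphogen-like gradient
        f = lambda w: w ** 2                 # faster-than-linear feedback signal
        for _ in range(200_000):
            F = f(x).sum()
            x += dt * (-A * x + (B - x) * f(x) - x * (F - f(x)))
        print(np.round(x, 3))   # ~[0, 0, 0, 0.99]: only the peak survives
    With faster-than-linear feedback the ratios xi/xj diverge in favor of the largest initial activity, which is amplified toward the upper bound B while the rest are quenched - winner-take-all contrast enhancement above the head-formation threshold.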
  • image p630fig17.04 Morphogenesis: more ratios (Wolpert 1969). Shape preserved as size increases. French flag problem. Use cellular models! (Grossberg 1976, 1978) vs chemical or fluid reaction-diffusion models (Turing 1952; Gierer, Meinhardt 1972).
    ||
  • image p631fig17.05 How a blastula develops into a gastrula. See the text for details.
    || 1. The vegetal pole of the blastula flattens, [Animal, vegetal] hemisphere, blastocoel. 2. Some cells change shape and move inward to form the archenteron, Blastopore. 3. Other cells break free, becoming mesenchyme. 4. Then extensions of mesenchyme cells attach to the overlying ectoderm, Archenteron. 5. The archenteron elongates, assisted by the contraction of mesenchyme cells. 6. The mouth will form, where the archenteron meets ectoderm. 7. The blastopore will form the anus of the mature animal. [Mesenchyme, Ectoderm, Endoderm, Blastocoel, Archenteron, Mesenchyme]. Concept 38.3, www.macmillanhighered.com
  • image p634fig17.06 Summing over a population of cells with binary output signals whose firing thresholds are Gaussianly distributed (left image) generates a total output signal that grows in a sigmoidal fashion with increasing input size (dashed vertical line).
    || How binary cells with a Gaussian distribution of output thresholds generate a sigmoidal population signal. [# of binary cells with threshold T, Total output signal] vs Cell firing thresholds T. Cell population with firing thresholds Gaussianly distributed around a mean value. As input increases (dashed line), more cells in the population fire with binary signals. Total population output obeys a sigmoid signal function f.
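    This population argument is easy to verify numerically; a minimal sketch (Python), assuming a mean threshold of 1.0 and a standard deviation of 0.3:
        import numpy as np
        rng = np.random.default_rng(1)
        thresholds = rng.normal(1.0, 0.3, size=10_000)   # Gaussian firing thresholds
        for I in [0.25, 0.5, 1.0, 1.5, 2.0]:
            f = (thresholds < I).mean()      # fraction of binary cells that fire
            print(f"input {I:4.2f} -> total output {f:.3f}")
        # slow rise, steep slope near the mean threshold, then saturation:
        # the population output traces a sigmoid signal function f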
  • Introduction webPage: the questions driving this "webSite" (a collection of webPages, defined by the menu above) are :
  • How difficult would it be to augment "Transformer Neural Networks" (TrNNs) with Grossberg's concepts? This section is repeated in the Introduction webPage.
  • Grossberg 2021 p229c2h0.60 SMART computer simulations demonstrate that a good enough match of a top-down expectation with a bottom-up feature pattern generates an attentive resonance during which the spikes of active cells synchronize in the gamma frequency range of 20-70 Hz (Figure 5.40). Many labs have reported a link between attention and gamma oscillations in the brain, including two articles published in 2001, one from the laboratory of Robert Desimone when he was at the National Institute of Mental Health in Bethesda (Fries, Reynolds, Rorie, Desimone 2001), and the other from the laboratory of Wolf Singer in Frankfurt (Engel, Fries, Singer 2001). You ...
  • Grossberg 2021 p081c2h0.66 sub-section: How does one evolve a computational brain?
    The above discussion illustrates that no single step of theoretical derivation can derive a whole brain. One needs a method for deriving a brain in stages, or cycles, much as evolution has incrementally discovered ever more complex brains over many thousands of years. The following theoretical method has been successfully applied many times since I first used it in 1957. It embodies a kind of conceptual evolutionary process for deriving a brain.

    Because "brain evolution needs to achieve behavioural success", we need to start with data that embodiey indices of behavioral success. That is why, as illustrated in Figure 2.37 Modelling method and cycle, one starts with Behavioral Data from scores or hundreds of psychological experiments. These data are analyszed as the result of an individual adapting autonomously in real time to a changing world. This is the Arty of Modeling. It requires that one be able to infer from static data curves the dynamical processes that control individual behaviors occuring in real time. One of the hardest things that I teach to my students to do is "how to think in real time" to be able to carry out this speculative leap.

    Properly carried out, this analysis leads to the discovery of new Design Principles that are embodied by these behavioral processes. The Design Principles highlight the functional meaning of the data, and clarify how individual behaviors occurring in real time give rise to these static data curves.

    These principles are then converted into the simplest Mathematical Model using a method of minimal anatomies, which is a form of Occam's Razor.
  • see incorporate reader questions into theme webPage
    see Navigation: [menu, link, directory]s
  • p153 Howell: grepStr
  • p190 Howell: [neural microcircuits, modal architectures] used in ART -
    bottom-up filters | top-down expectations | purpose
    instar learning | outstar learning | p200fig05.13 Expectations focus attention: [in, out]star learning is often used to learn the adaptive weights; top-down expectations select, amplify, and synchronize expected patterns of critical features, while suppressing unexpected features
    LGN Lateral Geniculate Nucleus | V1 cortical area | p192fig05.06 focus attention on expected critical features
    EC-III entorhinal stripe cells | CA1 hippocampal place cells | p600fig16.36 the entorhinal-hippocampal system has properties of an ART spatial category learning system, with hippocampal place cells as the spatial categories
    auditory orienting arousal | auditory category | p215fig05.28 How a mismatch between bottom-up and top-down patterns can trigger activation of the orienting system A and, with it, a burst of nonspecific arousal to the category level. ART Matching Rule: TD mismatch can suppress a part of the F1 STM pattern; F2 is reset if degree of match < vigilance
    auditory stream with/without [silence, noise] gaps | perceived sound continues? | p419fig12.17 The auditory continuity illusion: backwards in time - how does a future sound let a past sound continue through noise? Resonance!
    visual perception, learning and recognition of visual object categories | motion perception, spatial representation and target tracking | p520fig14.02 pART predictive Adaptive Resonance Theory. Many adaptive synapses are bidirectional, thereby supporting synchronous resonant dynamics among multiple cortical regions. The output signals from the basal ganglia that regulate reinforcement learning and gating of multiple cortical areas are not shown.
    EEG with mismatch, arousal, and STM reset events | expected [P120, N200, P300] event-related potentials (ERPs) | p211fig05.21 Sequences of P120, N200, and P300 event-related potentials (ERPs) occur during oddball learning EEG experiments under conditions that ART predicted should occur during sequences of mismatch, arousal, and STM reset events, respectively.
    Cognitive | Emotional | p541fig15.02 nSTART neurotrophic Spectrally Timed Adaptive Resonance Theory: the hippocampus can sustain a Cognitive-Emotional resonance that can support "the feeling of what happens" and knowing what event caused that feeling. The hippocampus enables adaptively timed learning that can bridge a trace conditioning gap, or other temporal gap, between the Conditioned Stimulus (CS) and Unconditioned Stimulus (US).

    background colours in the table signify :
    red - cognitive-emotional dynamics
    green - working memory dynamics
    black - see [bottom-up, top-down] lists
    Note that a separate webPage lists a very small portion of Stephen Grossberg's ...
  • J.E. Kaal, A. Otte, J.A. Sorensen, J.G. Emming 2021 "The nature of the atom" www.Curtis-Press.com, 268pp ISBN 978-1-8381280-2-9 https://StructuredAtom.org/
  • rationalwiki.org "Quantum consciousness" (last update 07Nov2022, viewed 16Jul2023)
    also critiques of the article above
  • Terrence J. Sejnowski 21Aug2023 "Large Language Models and the Reverse Turing Test", Neural Computation (2023) 35 (3): 309–342 (33 pages) https://direct.mit.edu/neco/issue (also copy in case original link fails)
  • Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin 12Jun2017 "Attention Is All You Need" [v5] Wed, 6 Dec 2017 03:30:32 UTC https://arxiv.org/abs/1706.03762
  • Wikipedia Consciousness
  • Menu
  • Grossbergs list of [chapter, section]s.html - Note that the links on this webPage can be used to individually view all captioned images.
  • directory of captioned images - users can easily view all of the captioned images, especially if they are downloaded onto their computer. Many image viewers have [forward, backward] arrows to go through these sequentially, or right-click to open a link in a window.
  • core bash script for extracting captions from the webPage listing, converting them to images, then vertically appending them to the figures.
  • my bash utility to [position, move] windows. This is normally used to start up 6 workspaces on my computer (Linux Mint Debian Edition), each with 5-10 apps in separate windows.
  • Prepared themes with links to the captioned images - there are a huge number of themes from the book to focus on. I have prepared a few as examples.
  • What is consciousness? - video example not ready as of 30Aug2023. I save videos as "ogv/ogg" files, an open standard format. The "VLC media viewer" is the program that I use to view them. I have found that although some of the standard video viewers complain, when pushed they can also view ogv files.
  • Navigation: [menu, link, directory]s
  • Theme webPage generation by bash script
  • Notation for [chapter, section, figure, table, index, note]s
  • incorporate reader questions into theme webPages
  • A very primitive bash script is used to generate the search results for ALL themes in the Themes webPage. Many readers will already have far better tools for this from the Computational Intelligence area etc.
    Because the theme webPage is automatically generated, and frequently re-generated as I update the list of themes and sources, I do NOT edit the file directly. The output format can be confusing, due to the special formatted [chapter, section] headings, and large tables which will keep the readers guessing whether they are still within the theme they want to peruse (as per the Table of Contents). Perhaps I can upgrade the searches in time to reduce the confusion, and to split themes in a better way.
  • list of [chapter, section]s
  • list of [figure, table]s
  • selected index items - I have NO intention of re-typing the entire index!
  • Grossberg quotes
  • reader Howell notes - this is an example of building your own webPage of [note, comment, thought]s when reading the book, which can then be added to the bash script for searches. These notes are in addition to [figure, table] captions, and are mostly comprised of text within the image, but also include quotes of text in the book. Rarely, they include comments by Howell preceded by "Howell".
    The latter are distinct from "readers' notes" (see, for example : Grossberg quotes). The reader may want to create their own file of comments based on this example, or augment this list with their [own, others'] notes. More importantly, and as an easy first adaptation of the Grossbergs [core, fun, strange] concepts.html thematic listings, you probably want to get rid of Howell's notes.
  • download the entire webDirectories below to some directory on your filesystem, say {yourDir} : TrNNs_ART , bin (hopefully I ...)
  • adapt the bash script: thematic [search, collect]s.sh to your own system, and run. This will require re-defining several environmental variables for your system, such as :
  • thematic sub-lists appear in the webPage "Grossbergs [core, fun, strange] concepts.html".
  • 29Sep2023 Here is a list of various problems with the captioned images and their links on the webPage Grossbergs list of [figure, table]s.html :
    10Aug2023 I haven't ...
  • How difficult would it be to augment "Transformer Neural Networks" (TrNNs) with Grossberg's concepts? 10Aug2023 This webPage has not yet been worked on. It will touch on one of three questions of this webSite as mentioned in the Introduction :
  • How difficult would it be to augment "Transformer Neural Networks" (TrNNs) with Grossberg's concepts? 10Aug2023 I haven't ...
  • directory status & updates copyrights Celebrating 20 years of neural networks!
    directory status & updates copyrights
  • Probably the most important section of this webPage is "Computations with multiple RNA strands". Most other sections provide context.
  • extracellular - callerID-SNNs (Spiking Neural Networks), as introduced in another webPage
  • 4-value logic (Colin James III)
  • bitShifts (like hexadecimal microprocessor machine code) for time series following. This is considered in the callerID-SNNs project. 13Dec2023 (https://en.wikipedia.org/wiki/Transfer_RNA)
  • 2006 MindCode WCCI2006 Vancouver "Howell 060215 Genetic specification of neural networks" :
  • 2015 - 2020 MindCode
  • voice musings of [scattered, random] thoughts - This really is just me, "arm waving and yapping", trying to identify [missing, lost] items. Does biology have "[relative, absolute] addressing" (relative - local proximity on the same DNA or RNA strand; absolute - the address may even be on a different chromosome)? I don't know. While I have long been a fan of the work of Stephen Grossberg and his colleagues, I was very surprised by his 2021 book "Conscious Mind, Resonant Brain". (This link shows a menu that lists details of many themes from his book, plus commentary on well-known concepts of consciousness.) His book went far beyond my awareness of his work (obviously I was horribly out of date). [Right, wrong, true, false], it also goes far beyond any other [concept, work] that I am aware of in explaining how [neurons, the brain] work. The results are not simple, nor are they amenable to the normal "yap, wave your arms" that we all like so much. Maybe that's ... In any case, I will work on Grossberg's concepts. There are only two concepts of consciousness with which I am comfortable: biologically based concepts from Grossberg and colleagues, and the late John Taylor's ...
  • Glenn Borchardt - Of possible interest to geologists: Puetz, Borchardt 150925 Quasi-periodic fractal patterns in geomagnetic reversals, geological activity, and astronomical events.pdf
  • Howell 2006 "Genetic specification of recurrent neural networks" (draft version of my WCCI2006 conference paper)
  • MindCode 2023 description
  • MindCode 2023 program coding (QNial programming language) this is a simple one-line listing of each operator for each file
  • callerID-SNNs Introduction (this webPage)
  • callerID-SNNs program coding (QNial programming language)
  • bash library: file operations used extensively, sometimes hybridized with the QNial programming language. All of these are very incomplete, but the lists are a handy back-reference so that I don't ...
  • Introduction - Conceptual pseudo-basis for MindCode (2020 old description of MindCode)
  • MindCode components
  • Historical [DNA, Protein, Evolutionary Computing, ANN] hybrid basis for epiDNA-NNs
  • MindCode - arbitrary selections from Multiple Conflicting Hypothesis
  • Assumed rules of the game
  • Questions, not answers
  • Static epiDNA-NN
  • Dynamic epiDNA-NN coding
  • [Neurological, biological] basis for epiDNA coding
  • Ontogeny
  • Specialized epiDNA-NNs for MindCode
  • Hybrids of [algorithms, conventional computing, ANNs, MindCode]
  • directory status & updates copyrights
    directory status & updates copyrights
    directory status & updates copyrights
    directory status & updates copyrights
    directory status & updates copyrights
    directory status & updates copyrights
    directory status & updates copyrights
    directory status & updates copyrights
    directory status & updates copyrights
    directory status & updates copyrights
    directory status & updates copyrights
    directory status & updates copyrights
  • directory status & updates copyrights
    directory status & updates copyrights
  • "... Consciousness, at its simplest, is sentience and awareness of internal and external existence.[1] However, its nature has led to millennia of analyses, explanations and debates by philosophers, theologians, linguists, and scientists. Opinions differ about what exactly needs to be studied or even considered consciousness. ..."(Wiki2023)
  • Only a very small number of theories of consciousness are listed on this webPage, compared to the vast number of [paper, book]s on the subject coming out all of the time. "Popular theories", as listed on Wikipedia, are shown, assuming that this will be important for non-experts. But the only ones that really count for this webSite are the "Priority model of consciousness".
    Readers will have completely different [interest, priority]s than I, so they would normally have a different "Priority model of consciousness", and rankings of the consciousness theories. To understand my selections and rankings, see the Introduction to this webSite.
  • this webSite: I like the description in Wikipedia (Wiki2023):
    The following additional definitions are also quoted from (Wiki2023) :
    ..." (Wiki2023)
    ..." (Wiki2023)
    ..." (Wiki2023)
    Grossberg 16Jul2023 I am currently lacking a coherent overall webPage for Grossberg's ... The following listing is taken from "What is consciousness: from historical to Grossberg", and repeats some of the points in this section above : conscious ART (cART), etc
  • A surprisingly small number of neural architectures can simulate [extensive, diverse] [neuro, psycho]logical data at BOTH the [sub, ]conscious levels, and for [perception, action] of [sight, auditory, touch, language, cognition, emotion, etc]. This is similar to what we see in physics.
  • simple grepStr search results : ..."(Wiki2023)
    Byoung-Kyong Min 2010 "A Thalamic reticular networking model of consciousness"
    (Wiki2023)
    Wikipedia: Models of consciousness, retrieved Apr2023 (Wiki2023)
    ..." (Wiki2023)
    ..." (Wiki2023)
    "... The Neural correlates of consciousness (NCC) formalism is used as a major step towards explaining consciousness. The NCC are defined to constitute the minimal set of neuronal events and mechanisms sufficient for a specific conscious percept, and consequently sufficient for consciousness. In this formalism, consciousness is viewed as a state-dependent property of some undefined complex, adaptive, and highly interconnected biological system.[3][4][5] ..." (Wiki2023, full article: Wiki2023 - Neural_correlates_of_consciousness, also cited by Grossberg 2021)
    Another idea that has drawn attention for several decades is that consciousness is associated with high-frequency (gamma band) oscillations in brain activity. This idea arose from proposals in the 1980s, by Christof von der Malsburg and Wolf Singer, that gamma oscillations could solve the so-called binding problem, by linking information represented in different parts of the brain into a unified experience.[80] Rodolfo Llinás, for example, proposed that consciousness results from recurrent thalamo-cortical resonance where the specific thalamocortical systems (content) and the non-specific (centromedial thalamus) thalamocortical systems (context) interact in the gamma band frequency via synchronous oscillations.[81] ..." (Wiki2023 - Consciousness#Neural_correlates)
    Howell 19Jul2023 Note that Grossberg's ... "... Integrated Information Theory (IIT) offers an explanation for the nature and source of consciousness. Initially proposed by Giulio Tononi in 2004, it claims that consciousness is identical to a certain kind of information, the realization of which requires physical, not merely functional, integration, and which can be measured mathematically according to the phi metric. ..." (UTM - Integrated information theory)
    "... Integrated information theory (IIT) attempts to provide a framework capable of explaining why some physical systems (such as human brains) are conscious,[1] why they feel the particular way they do in particular states (e.g. why our visual field appears extended when we gaze out at the night sky),[2] and what it would take for other physical systems to be conscious (Are other animals conscious? Might the whole Universe be?).[3] ... In IIT, a system Wikipedia lists numerous criticisms of IIT, but I have not yet quoted from that, other than to mention the authors : Wikipedia: Models of consciousness
    "... Sociology of human consciousness uses the theories and methodology of sociology to explain human consciousness. The theory and its models emphasize the importance of language, collective representations, self-conceptions, and self-reflectivity. It argues that the shape and feel of human consciousness is heavily social. ..."(Wiki2023, full webPage Wiki2023
    "... Daniel Dennett proposed a physicalist, information processing based multiple drafts model of consciousness described more fully in his 1991 book, Consciousness Explained. ..." (Wiki2023, full webPage Wiki2023)
    ..." (Wiki2023)
    "... Functionalism is a view in the theory of the mind. It states that mental states (beliefs, desires, being in pain, etc.) are constituted solely by their functional role – that is, they have causal relations to other mental states, numerous sensory inputs, and behavioral outputs. ..." (Wiki2023, full webPage Wiki2023)
    "... Electromagnetic theories of consciousness propose that consciousness can be understood as an electromagnetic phenomenon that occurs when a brain produces an electromagnetic field with specific characteristics.[7][8] Some electromagnetic theories are also quantum mind theories of consciousness.[9] ..." (Wiki2023)
    "... "No serious researcher I know believes in an electromagnetic theory of consciousness,"[16] Bernard Baars wrote in an e-mail.[better source needed] Baars is a neurobiologist and co-editor of Consciousness and Cognition, another scientific journal in the field. "It Stuart Hameroff separately worked in cancer research and anesthesia, which gave him an interest in brain processes. Hameroff read Penrose rationalwiki.org presents a hard-nosed critique of various "quantum consciousness" theories, from which the following quote is taken :
  • "... Large Language Models (LLMs) have been transformative. They are pre-trained foundational models that are self-supervised and can be adapted with fine-tuning to a wide range of natural language tasks, each of which previously would have required a separate network model. This is one step closer to the extraordinary versatility of human language. GPT-3 and more recently LaMDA can carry on dialogs with humans on many topics after minimal priming with a few examples. However, there has been a wide range of reactions and debate on whether these LLMs understand what they are saying or exhibit signs of intelligence. This high variance is exhibited in three interviews with LLMs reaching wildly different conclusions. A new possibility was uncovered that could explain this divergence. What appears to be intelligence in LLMs may in fact be a mirror that reflects the intelligence of the interviewer, a remarkable twist that could be considered a Reverse Turing Test. If so, then by studying interviews we may be learning more about the intelligence and beliefs of the interviewer than the intelligence of the LLMs. As LLMs become more capable they may transform the way we interact with machines and how they interact with each other. Increasingly, LLMs are being coupled with sensorimotor devices. LLMs can talk the talk, but can they walk the walk? A road map for achieving artificial general autonomy is outlined with seven major improvements inspired by brain systems and how LLMs could in turn be used to uncover new insights into brain function. ..." (Sejnowski 2022)
    Sejnowski's ...
  • image p059fig02.05 Bowed serial position curve. This kind of data emphasizes the importance of modelling how our brains give rise to our minds using nonlinear systems of differential equations.
    || Effects of [inter, intra]trial intervals (Hovland 1938). # of errors vs list position. [w (sec), W (sec)] = (2 6) (4 6) (2 126) (4 126). Nonoccurrence of future items reduces the number of errors in response to past items. These data require a real-time theory for their explanation! that is, DIFFERENTIAL equations.
  • image p064fig02.10 The Shunting Model includes upper and lower bounds on neuronal activities. These bounds have the effect of multiplying additive terms by excitatory and inhibitory automatic gain terms that enable such models to preserve their sensitivity to inputs whose size may vary greatly through time, while also approximately normalizing their total activities.
    || STM: Shunting Model (Grossberg, PNAS 1967, 1968). Mass action in membrane equations. Bi/Ci -> xi(t) -> O -> -Fi/Ei. Bounded activations, automatic gain control. d[dt: xi(t)] = - Ai*xi + (Bi - Ci*xi)sum[j=1 to n: fj(xj(t))*Dji*yji*zji + Ii] - (Ei*Xi + Fi)*sum[j=1 to n: gj(xj)*Gji*Yji*Zji + Ji]. Includes the Additive Model.
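    A sketch of the bounded, gain-controlled behavior that this caption describes (Python), using the simple single-cell feedforward special case d[dt: x] = -A*x + (B - x)*E - x*Iinh with constant inputs and illustrative parameters:
        A, B, dt = 1.0, 1.0, 0.001
        for E, Iinh in [(1.0, 0.5), (10.0, 5.0), (100.0, 50.0)]:   # scaled-up inputs
            x = 0.0
            for _ in range(20_000):
                x += dt * (-A * x + (B - x) * E - x * Iinh)
            print(f"E={E:6.1f} Iinh={Iinh:6.1f} -> x={x:.4f}")
        # x stays in [0, B] and approaches B*E/(A + E + Iinh): as both inputs
        # scale together, x tracks the ratio E/(E + Iinh) - automatic gain control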
  • image p064fig02.11 Medium-Term Memory (MTM) and Long-Term Memory (LTM) equations complement the Additive and Shunting Models of STM. MTM is typically defined by a chemical transmitter that is released from the synaptic knobs of a neuron (Figure 2.03). Its release or inactivation in an activity-dependent way is also called habituation. LTM defines how associative learning occurs between a pair of neurons whose activities are approximately correlated through time. See the text for details.
    || Medium and Long Term memory.
    MTM habituative transmitter gate: d[dt: yki(t)] = H*(K - yki) - L*fk(xk)*yki
    LTM gated steepest descent learning: d[dt: zki(t)] = Mk*fk(xk)*(hi(xi) - zki)
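    A minimal scalar sketch of these two laws (Python) with assumed rate constants, not values from the book:
        import numpy as np
        H, K, L, M = 0.1, 1.0, 0.5, 0.2      # illustrative rate constants
        dt, y, z = 0.01, 1.0, 0.0
        for t in np.arange(0.0, 50.0, dt):
            f = 1.0 if (t % 20.0) < 10.0 else 0.0   # presynaptic signal: on/off epochs
            h = 0.8                                  # postsynaptic activity to learn
            y += dt * (H * (K - y) - L * f * y)      # MTM: gate depletes while f is on,
                                                     #      recovers toward K when off
            z += dt * (M * f * (h - z))              # LTM: gated steepest descent,
                                                     #      learns only while f > 0
        print(f"gate y = {y:.3f} (depleted by habituation), weight z = {z:.3f} -> h = 0.8")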
  • image p068fig02.14 Hodgkin and Huxley developed a model to explain how spikes travel down the squid giant axon.
    || Neurophysiology (single cell): spike potentials in squid giant axon (Hodgkin, Huxley 1952, Nobel Prize). time -> (dendrites -> cell body -> axon).
    C*dp[dt: V] = α*dp^2[dX^2: V] + (V(+) - V)*g(+) + (V(-) - V)*g(-) + (V^p - V)*g^p
    g(+) = G(+)(m,h), g(-) = G(-)(n), G^p = const, [m, h, n] - ionic processes, V - voltage
    Precursor of Shunting network model (Rall 1962). (Howell: see p075fig02.24 Membrane equations of neurophysiology, Shunting equation.)
  • image p074fig02.23 The equations for a shunting on-center off-surround network. Shunting terms lead to many beautiful and important properties of these networks, which are found ubiquitously, in one form or another, in all cellular tissues.
    || Shunting on-center off-surround network.
    Mass action: d[dt: xi] = -A*xi +(B - xi)*Ii -xi*sum[k≠i: Ik]
    Turn on unexcited sites; turn off excited sites.
    At equilibrium:
    0 = d[dt: xi] = -(A + Ii + sum[k≠i: Ik])*xi + B*Ii = -(A + I)*xi + B*Ii
    xi = B*Ii/(A + I) = B*θi*I/(A + I) = θi*B*I/(A + I) -> No saturation!
    Infinite dynamical range
    Automatic gain control
    Compute ratio scale
    Weber law
    x = sum[k=1 to n: xk] = B*I/(A + I) ≤ B Conserve total activity
    NORMALIZATION
    Limited capacity
    Real-time probability
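    The equilibrium formula above is easy to check (Python); note the ratio-scale and normalization properties as the total input intensity grows:
        import numpy as np
        A, B = 1.0, 1.0
        for scale in [1.0, 10.0, 100.0]:
            Ii = scale * np.array([0.1, 0.3, 0.6])   # same ratios, rising intensity
            I = Ii.sum()
            x = B * Ii / (A + I)                     # equilibrium activities
            print(f"I={I:6.1f} x={np.round(x, 3)} sum={x.sum():.3f} "
                  f"ratios={np.round(x / x.sum(), 2)}")
        # activity ratios stay (0.1, 0.3, 0.6) at every intensity (no saturation,
        # Weber law ratio scale); total activity B*I/(A + I) <= B (normalization)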
  • image p075fig02.24 The membrane equations of neurophysiology describe how cell voltages change in response to excitatory, inhibitory, and passive input channels. Each channel is described by a potential difference multiplied by a conductance. With the special choices shown in the lower right-hand corner, this equation defines a feedforward shunting on-center off-surround network.
    || Membrane equations of neurophysiology.
    C*d[dt: V] = (V(+) - V)*g(+) + (V(-) - V)*g(-) + (V(p) - V)*g(p)
    Shunting equation (not additive)
    V Voltage
    V(+), V(-), V(p) Saturating voltages
    g(+), g(-), g(p) Conductances
    V(+) = B, C = 1; V(-) = V(p) = 0; g(+) = Ii; g(-) = sum[k≠i: Ik];
    lower bound of V: V(-) = V(p), silent inhibition; upper bound of V: V(+). (Howell: see p068fig02.14, Grossberg's ...)
  • image p079fig02.32 Matching amplifies the matched pattern due to automatic gain control. See terms I and J in the equation.
    || Substrate of resonance. Match (in phase) of BU and TD input patterns AMPLIFIES matched pattern due to automatic gain control by shunting terms. J = sum[i: Ji], I = sum[i: Ii], θi = (Ii + Ji)/(I + J)
    xi = (B + C)*(I + J)/(A + I + J)*[θi -C/(B + C)]
    Need top-down expectations to be MODULATORY.
  • image p202fig05.17 This figure summarizes the simplest equations whereby the adaptive weights of a winning category learn the input pattern that drove it to win, or more generally a time-average of all the input patterns that succeeded in doing so.
    || Geometry of choice and learning, learning trains the closest LTM vector
  • image p359fig10.07 The two bottom-up pathways from LGN to layer 4 of V1 can together activate layer 4 and contrast-normalize layer 4 responses.
    || Bottom-up contrast normalization (Grossberg 1968, 1973; Sperling, Sondhi 1968; Heeger 1992; Douglas etal 1995; Shapley etal 2004). Together, direct LGN-to-4 path and 6-to-4 on-center off-surround provide contrast normalization if cells obey shunting or membrane equation dynamics.
  • image p501fig13.26 A simple differential equation describes the processes of transmitter accumulation and release that do their best, at a finite rate, to carry out unbiased transduction.
    || Transmitter accumulation and release. Transmitter y cannot be restored at an infinite rate: T = S*y, y ~= B. Differential equations: d[dt: y] = A*(B - y) - S*y = accumulate - release. Transmitter y tries to recover to ensure unbiased transduction. What if it falls behind? Evolution has exploited the good properties that happen then.
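    Setting the derivative to zero makes the caption's point concrete (straightforward algebra on the equation shown; my commentary): at equilibrium, 0 = d[dt: y] = A*(B - y) - S*y, so y = A*B/(A + S) and the gated signal is T = S*y = A*B*S/(A + S). For small S, T ~= B*S - nearly unbiased transduction; for large S, T saturates at A*B, so y "falls behind" - the habituative property whose good side effects evolution has exploited.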
  • image p505fig13.33 An unexpected event can disconfirm ongoing processing by triggering a burst of nonspecific arousal that causes antagonistic rebounds in currently active gated dipoles, whether cognitive or affective.
    || Novelty reset: rebound to arousal onset. 1. Equilibrate to I and J: S1 = f(I+J); y1 = A*B/(A+S1); S2 = f(I); y2 = A*B/(A+S2);. 2. Keep phasic input J fixed; increase arousal I to I* = I + ∆I: (a) OFF reaction if T1 < T2; OFF = T2 - T1 = f(I*)*y2 - f(I*+J)*y1 = { A*B*(f(I*) - f(I*+J)) + B*(f(I*)*f(I+J) - f(I)*f(I*+J)) } / (A+f(I)) / (A + f(I+J)). 3. How to interpret this complicated equation?
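    One way to interpret it (my simplification; the book treats general signal functions f): with the linear signal f(w) = w, the expression collapses to OFF = B*J*(∆I - A) / ((A + I)*(A + I + J)), where I* = I + ∆I. An antagonistic rebound (OFF > 0) therefore occurs exactly when the arousal increment ∆I exceeds the dipole decay rate A: small novelty bursts are ignored, while sufficiently large ones reset the dipole.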
  • image p580fig16.05 Macrocircuit of the GridPlaceMap model, which can learn both 2D grid cells and place cells in response to realistic trajectories of navigating rats using a hierarchy of SOMs with identical equations.
    || GridPlaceMap model: rate-based and spiking (Pilly, Grossberg 2012). Pre-wired 1D stripe cells, learns both 2D grid and place cells! Same laws for both; both select most frequent and energetic inputs. Place cells emerge gradually in response to developing grid cells. [place-> grid-> stripe] cells-> path integration-> vestibular signals
  • image p586fig16.16 In the place cell learning model of (Gorchetchnikov, Grossberg 2007), three populations of five cells each of entorhinal grid cells (only two are shown) with different spatial periods input to the model
  • image p593fig16.25 Spectral Spacing Model STM, MTM, and LTM equations. The rate spectrum that determines the dorsoventral gradient of multiple grid cell properties is defined by μm.
    || Spectral Spacing Model equations. [STM, MTM, LTM]. μm = rate spectrum.
  • Grossberg 2021 p081c2h0.66 sub-section: How does one evolve a computational brain?
    The above discussion illustrates that no single step of theoretical derivation can derive a whole brain. One needs a method for deriving a brain in stages, or cycles, much as evolution has incrementally discovered ever more complex brains over many thousands of years. The following theoretical method has been successfully applied many times since I first used it in 1957. It embodies a kind of conceptual evolutionary process for deriving a brain.

    Because "brain evolution needs to achieve behavioural success", we need to start with data that embodiey indices of behavioral success. That is why, as illustrated in Figure 2.37 Modelling method and cycle, one starts with Behavioral Data from scores or hundreds of psychological experiments. These data are analyszed as the result of an individual adapting autonomously in real time to a changing world. This is the Arty of Modeling. It requires that one be able to infer from static data curves the dynamical processes that control individual behaviors occuring in real time. One of the hardest things that I teach to my students to do is "how to think in real time" to be able to carry out this speculative leap.

    Properly carried out, this analysis leads to the discovery of new Design Principles that are embodied by these behavioral processes. The Design Principles highlight the functional meaning of the data, and clarify how individual behaviors occurring in real time give rise to these static data curves.

    These principles are then converted into the simplest Mathematical Model using a method of minimal anatomies, which is a form of Occam's Razor.
  • image pxvifig00.01 Macrocircuit of the visual system
  • image p087fig03.01 A macrocircuit of key visual processes (in green) and the cortical areas in which they primarily occur (in red), from the retina to the Prefrontal Cortex (PFC), including both the What and Where cortical streams. The [bottom-up, horizontal, and top-down] interactions help each of these processes to overcome computationally complementary processing deficiencies that they would experience without them, and also to read out top-down expectations that help to stabilize learning while they focus attention on salient objects and positions.
    || Emerging unified theory of visual intelligence. [What, Where] streams. Bottom-up and top-down interactions overcome COMPLEMENTARY processing deficiencies.
  • image p168fig04.44 Macrocircuit of the main boundary and surface formation stages that take place from the lateral geniculate nucleus, or LGN, through cortical areas [V1, V2, V4]. See the text for details.
  • image p228fig05.37 A macrocircuit of the neurotrophic Spectrally Timed ART, or nSTART, model. I developed nSTART with my PhD student Daniel Franklin. It proposes how adaptively timed learning in the hippocampus, bolstered by Brain Derived Neurotrophic Factor, or BDNF, helps to ensure normal memory consolidation.
    || habituative gates, CS, US, Thalamus (sensory cortex, category learning, conditioned reinforcer learning, adaptively timed learning and BDNF), Amygdala (incentive motivation learning), Hippocampus (BDNF), Prefrontal Cortex (attention), Pontine nuclei, Cerebellum (adaptively timed motor learning)
  • image p246fig05.48 Microcircuits of the LAMINART model that I developed with Rajeev Raizada. See the text for details of how they integrate bottom-up adaptive filtering, horizontal bipole grouping, and top-down attentional matching that satisfies the ART Matching Rule.
    ||
  • image p346fig09.16 A macrocircuit of some of the main brain regions that are used to move the eyes. Black boxes denote areas belonging to the saccadic eye movement system (SAC), white boxes the smooth pursuit eye movement system (SPEM), and gray boxes, both systems. The abbreviations for the different brain regions are: LIP - Lateral Intra-Parietal area; FPA - Frontal Pursuit Area; MST - Middle Superior Temporal area; MT - Middle Temporal area; FEF - Frontal Eye Fields; NRPT - Nucleus Reticularis Tegmenti Pontis; DLPN - Dorso-Lateral Pontine Nuclei; SC - Superior Colliculus; CBM - CereBelluM; MVN/rLVN - Medial and Rostro-Lateral Vestibular Nuclei; PPRF - Peri-Pontine Reticular Formation; TN - Tonic Neurons
    ||
  • image p436fig12.30 The conscious ARTWORD, or cARTWORD, laminar cortical speech model simulates how future context can disambiguate noisy past speech sounds in such a way that the completed percept is consciously heard to proceed from past to future as a feature-item-list resonant wave propagates through time.
    || cARTWORD: Laminar cortical model macrocircuit (Grossberg, Kazerounian 2011). Simulates PHONEMIC RESTORATION. Cognitive Working Memory (processed item sequences) <- [excitatory, inhibitory, habituative, adaptive filter with depletable synapses] connections <- acoustic [item, feature] layers.
  • image p474fig12.70 The kind of model macrocircuit that was used in (Grossberg, Stone 1986) to explain lexical decision task data.
    || inputs-> A1 <-> A2 iconic sensory features <-> A3 item and order in sensory STM <-> A4 list parsing in STM (masking field) <-> A5 semantic network (self-feedback). [A4, A5] <-> V* visual object recognition system. M1-> [outputs, A1]. M1 <-> M2 iconic motor features <-> M3 item and order in motor STM. A2-> M2. A3-> M3.
  • image p481fig13.01 Macrocircuit of the functional stages and anatomical interpretations of the Cognitive-Emotional-Motor, or CogEM, model.
    || Drive-> hypothalamus value categories <-> amygdala incentive motivational learning-> Orbitofrontal cortex- object-value categories <-> sensory cortex- invariant object categories- conditioned reinforcer learning-> amygdala-> hypothalamus.
  • image p520fig14.02 Macrocircuit of the main brain regions, and connections between them, that are modelled in the unified predictive Adaptive Resonance Theory (pART) of cognitive-emotional and working memory dynamics. Abbreviations in red denote brain regions used in cognitive-emotional dynamics. Those in green denote brain regions used in working memory dynamics. Black abbreviations denote brain regions that carry out visual perception, learning and recognition of visual object categories, and motion perception, spatial representation and target tracking. Arrows denote non-adaptive excitatory synapses. Hemidiscs denote adaptive excitatory synapses. Many adaptive synapses are bidirectional, thereby supporting synchronous resonant dynamics among multiple cortical regions. The output signals from the basal ganglia that regulate reinforcement learning and gating of multiple cortical areas are not shown. Also not shown are output signals from cortical areas to motor responses. V1: striate, or primary, visual cortex; V2 and V4: areas of prestriate visual cortex; MT: Middle Temporal cortex; MST: Medial Superior Temporal area; ITp: posterior InferoTemporal cortex; ITa: anterior InferoTemporal cortex; PPC: Posterior Parietal Cortex; LIP: Lateral IntraParietal area; VPA: Ventral PreArcuate gyrus; FEF: Frontal Eye Fields; PHC: ParaHippocampal Cortex; DLPFC: DorsoLateral PreFrontal Cortex; HIPPO: hippocampus; LH: Lateral Hypothalamus; BG: Basal Ganglia; AMYG: AMYGdala; OFC: OrbitoFrontal Cortex; PRC: PeriRhinal Cortex; VPS: Ventral bank of the Principal Sulcus; VLPFC: VentroLateral PreFrontal Cortex. See the text for further details.
    ||
  • image p532fig14.08 Macrocircuit of the ARTSCENE Search neural model for learning to search for desired objects by using the sequences of already experienced objects and their locations to predict what and where the desired object is. V1 = First visual area or primary visual cortex; V2 = Second visual area; V4 = Fourth visual area; PPC = Posterior Parietal Cortex; ITp = posterior InferoTemporal cortex; ITa = anterior InferoTemporal cortex; MTL = Medial Temporal Lobe; PHC = ParaHippoCampal cortex; PRC = PeriRhinal Cortex; PFC = PreFrontal Cortex; DLPFC = DorsoLateral PreFrontal Cortex; VPFC = Ventral PFC; SC = Superior Colliculus.
    ||
  • image p580fig16.05 Macrocircuit of the GridPlaceMap model, which can learn both 2D grid cells and place cells in response to realistic trajectories of navigating rats using a hierarchy of SOMs with identical equations.
    || GridPlaceMap model: rate-based and spiking (Pilly, Grossberg 2012). Pre-wired 1D stripe cells, learns both 2D grid and place cells! Same laws for both; both select most frequent and energetic inputs. Place cells emerge gradually in response to developing grid cells. [place-> grid-> stripe] cells-> path integration-> vestibular signals
  • image p599fig16.35 Data (a) and simulations (b,c) about anatomically overlapping grid cell modules. (a) shows the anatomical distribution of grid cells belonging to different modules in one animal. DV location (mm) vs postrhinal border. (b) shows the simulated distribution of learned grid cell spacings from two stripe cell scales. frequency (%) vs grid spacing (cm). mu = [1, 0.6]. (c) shows what happens when half the cells respond with one rate and half another rate. (d) shows the same with three rates. (e-g) show spatial maps and autocorrelograms of grid cells that arise from the different rates in (d). [rate map, autocorrelogram] vs [score [1.07, 0.5, 0.67], spacing (cm) [23.58, 41, 63.64]].
    ||
  • image p612fig16.42 Macrocircuit of the main SOVEREIGN subsystems.
    || [reward input, drive input, drive representation (DR), visual working memory and planning system (VWMPS), visual form and motion system (VFMS), motor approach and orienting system (MAOS), visual input (VisIn), motor working memory and planning system (MWMPS), motor plant (MotP), Proprioceptive Input (PropIn), Vestibular Input (VesIn), Environmental Feedback (EnvFB)]. DR [incentive motivational learning-> [VWMPS, MWMPS], -> VFMS, -> MAOS], VWMPS [conditioned reinforcer learning-> DR, MAOS], VFMS [visual object categories-> VWMPS, reactive movement commands-> MAOS], MWMPS [conditioned reinforcer learning-> DR, planned movement commands-> MAOS], MAOS [motor map positions-> MWMPS, motor outflow-> MotP], VisIn-> VFMS, VesIn-> MAOS, EnvFB-> [VisIn, MotP, VesIn].
  • p190 Howell: [neural microcircuits, modal architectures] used in ART -
    bottom-up filters | top-down expectations | purpose
    instar learning | outstar learning | p200fig05.13 Expectations focus attention: [in, out]star often used to learn the adaptive weights. Top-down expectations select, amplify, and synchronize expected patterns of critical features, while suppressing unexpected features
    LGN Lateral Geniculate Nucleus | V1 cortical area | p192fig05.06 focus attention on expected critical features
    EC-III entorhinal stripe cells | CA1 hippocampal place cells | p600fig16.36 entorhinal-hippocampal system has properties of an ART spatial category learning system, with hippocampal place cells as the spatial categories
    auditory orienting arousal | auditory category | p215fig05.28 How a mismatch between bottom-up and top-down patterns can trigger activation of the orienting system A and, with it, a burst of nonspecific arousal to the category level. ART Matching Rule: TD mismatch can suppress a part of the F1 STM pattern; F2 is reset if degree of match < vigilance
    auditory stream with/without [silence, noise] gaps | perceived sound continues? | p419fig12.17 The auditory continuity illusion: backwards in time - how does a future sound let a past sound continue through noise? Resonance!
    visual perception, learning and recognition of visual object categories | motion perception, spatial representation and target tracking | p520fig14.02 pART predictive Adaptive Resonance Theory. Many adaptive synapses are bidirectional, thereby supporting synchronous resonant dynamics among multiple cortical regions. The output signals from the basal ganglia that regulate reinforcement learning and gating of multiple cortical areas are not shown. (red - cognitive-emotional dynamics; green - working memory dynamics; black - see [bottom-up, top-down] lists)
    EEG with mismatch, arousal, and STM reset events | expected [P120, N200, P300] event-related potentials (ERPs) | p211fig05.21 Sequences of P120, N200, and P300 event-related potentials (ERPs) occur during oddball learning EEG experiments under conditions that ART predicted should occur during sequences of mismatch, arousal, and STM reset events, respectively.
    Cognitive | Emotional | p541fig15.02 nSTART neurotrophic Spectrally Timed Adaptive Resonance Theory: hippocampus can sustain a cognitive-emotional resonance that can support "the feeling of what happens" and knowing what event caused that feeling. Hippocampus enables adaptively timed learning that can bridge a trace conditioning gap, or other temporal gap, between Conditioned Stimulus (CS) and Unconditioned Stimulus (US).

    Background colours in the table signify:
  • image p025fig01.16 (left panel) The main processing stages of the Cognitive-Emotional-Motor (CogEM) model have anatomical interpretations in terms of sensory cortex, amygdala, and prefrontal cortex. Chapter 13 will describe in greater detail how CS cues activate invariant object categories in the sensory cortex, value categories in the amygdala, and object-value categories in the prefrontal cortex, notably the orbitofrontal cortex. The amygdala is also modulated by internal drive inputs like hunger and satiety. (right panel) Anatomical data support this circuit, as do many neurophysiological data.
    || drive -> amygdala -> prefrontal cortex <-> sensory cortex -> amygdala. [visual, somatosensory, auditory, gustatory, olfactory] cortex -> [amygdala, Orbital Prefrontal Cortex]. amygdala -> Lateral Prefrontal Cortex
  • image p481fig13.01 Macrocircuit of the functional stages and anatomical interpretations of the Cognitive-Emotional-Motor, or CogEM, model.
    || Drive-> hypothalamus value categories <-> amygdala incentive motivational learning-> Orbitofrontal cortex- object-value categories <-> sensory cortex- invariant object categories- conditioned reinforcer learning-> amygdala-> hypothalamus.
  • image p483fig13.02 The object-value categories in the orbitofrontal cortex require converging specific inputs from the sensory cortex and nonspecific incentive motivational inputs from the amygdala in order to fire. When the orbitofrontal cortex fires, it can deliver top-down ART Matching Rule priming signals to the sensory cortical area by which it was activated, thereby helping to choose the active recognition categories there that have the most emotional support, while suppressing others, leading to attentional blocking of irrelevant cues.
    || Cognitive-Emotional-Motor (CogEM) model. Drive-> amygdala incentive motivational learning-> orbitofrontal cortex- needs converging cue and incentive inputs to fire <-> sensory cortex- conditioned reinforcer learning-> amygdala. CS-> sensory cortex. Motivated attention closes the cognitive-emotional feedback loop, focuses on relevant cues, and causes blocking of irrelevant cues.
  • image p483fig13.03 The predicted processing stages of CogEM have been supported by anatomical studies of connections between sensory cortices, amygdala, and orbitofrontal cortex.
    || Adapted from (Barbas 1995). sensory cortices = [visual, somatosensory, auditory, gustatory, olfactory]. sensory cortices-> amygdala-> orbital prefrontal cortex. sensory cortices-> orbital prefrontal cortex. [visual cortex, amygdala]-> lateral prefrontal cortex.
  • image p484fig13.04 The top-down feedback from the orbitofrontal cortex closes a feedback loop that supports a cognitive-emotional resonance. If this resonance can be sustained long enough, it enables us to have feelings at the same time that we experience the categories that caused them.
    || Cognitive-Emotional resonance. Basis of "core consciousness" and "the feeling of what happens". (Damasio 1999) derives heuristic version of CogEM model from his clinical data. Drive-> amygdala-> prefrontal cortex-> sensory cortex, resonance around the latter 3. How is this resonance maintained long enough to become conscious?
  • image p487fig13.11 The three main properties of CogEM that help to explain how attentional blocking occurs.
    || CogEM explanation of attentional blocking. Internal drive input <-> Conditioned reinforcer learning (self-recurrent) <-> Competition for STM <- Motor learning. 1. Sensory representations compete for limited capacity STM. 2. Previously reinforced cues amplify their STM via positive feedback. 3. Other cues lose STM via competition.
  • image p489fig13.13 (top row) If a positive ISI separates onset of a CS and US, then the CS can sample the consequences of the US during the time interval before it is inhibited by it. (bottom row) A CogEM simulation of the inverted-U in conditioning as a function of the ISI between CS and US.
    || Positive ISI and conditioning.
  • image p490fig13.15 The CogEM circuit is an ancient design that is found even in mollusks like Aplysia. See the text for details.
    || Aplysia (Buonomano, Baxter, Byrne, Neural Networks 1990; Grossberg, Behavioral and Brain Sciences 1983). Facilitator neuron ~ drive representation.
  • image p494fig13.19 (left column, top row) Secondary conditioning of both arousal and a specific response are now possible. (bottom row) The CogEM circuit may be naturally extended to include multiple drive representations and inputs. (right column, top row) The incentive motivational pathway is also conditionable in order to enable motivational sets to be learned.
    || Secondary conditioning. Homology: conditionable incentive motivation. Multiple drive representations and inputs.
  • image p514fig13.44 Analog of the CogEM model in Figure 6.1 of (Damasio 1999).
    || (a) map of object X-> map of proto-self at inaugural instant-> [, map of proto-self modified]-> assembly of second-order map. (b) map of object X enhanced-> second-order map imaged.
  • image p523fig14.03 (a) The MOTIVATOR neural model generalizes CogEM by also including the basal ganglia. It can hereby explain and simulate complementary functions of the amygdala and basal ganglia (SNc) during conditioning and learned performance. The basal ganglia generate Now Print signals in response to unexpected rewards. These signals modulate learning of new associations in many brain regions. The amygdala supports motivated attention to trigger actions that are expected to occur in response to conditioned or unconditioned stimuli. Object Categories represent visual or gustatory inputs in anterior inferotemporal (ITA) and rhinal (RHIN) cortices, respectively. Value Categories represent the value of anticipated outcomes on the basis of hunger and satiety inputs, in amygdala (AMYG) and lateral hypothalamus (LH). Object-Value Categories resolve the value of competing perceptual stimuli in medial (MORB) and lateral (ORB) orbitofrontal cortex. The Reward Expectation Filter detects the omission or delivery of rewards using a circuit that spans ventral striatum (VS), ventral pallidum (VP), striosomal delay (SD) cells in the ventral striatum, the pedunculopontine nucleus (PPTN) and midbrain dopaminergic neurons of the substantia nigra pars compacta/ventral tegmental area (SNc/VTA). The circuit that processes CS-related visual information (ITA, AMYG, ORB) operates in parallel with a circuit that processes US-related visual and gustatory information (RHIN, AMYG, MORB). (b) Reciprocal adaptive connections between hypothalamus and amygdala enable amygdala cells to become learned value categories. The bottom region represents hypothalamic cells, which receive converging taste and metabolite inputs whereby they become taste-drive cells. Bottom-up signals from activity patterns across these cells activate competing value category, or US Value Representations, in the amygdala. A winning value category learns to respond selectively to specific combinations of taste-drive activity patterns and sends adaptive top-down priming signals back to the taste-drive cells that activated it. CS-activated conditioned reinforcer signals are also associatively linked to value categories. Adaptive connections end in (approximately) hemidiscs. See the text for details.
    ||
  • image p548fig15.16 Homologous recognition learning and reinforcement learning macrocircuits enable adaptively timed conditioning in the reinforcement learning circuit to increase inhibition of the orienting system at times when a mismatch in the recognition system would have reduced inhibition of it.
    || Homolog between ART and CogEM model, complementary systems. [Recognition, Reinforcement] learning vs [Attentional, Orienting] system. Reinforcement: timing, drive representation.
  • image p080fig02.33 An opposite-attracts rule during the development of intracellular connections can lead to a mature network that realizes informational noise suppression.
    || How do noise suppression parameters arise? Symmetry-breaking during morphogenesis? Opposites attract rule.
    Intracellular parameters C/B = 1/(n - 1). Intercellular parameters.
    Predicts that intracellular excitatory and inhibitory saturation points can control the growth during development of intercellular excitatory and inhibitory connections.
  • image p012fig01.08 A sigmoidal signal function is a hybrid signal that combines the best properties of [faster, same, slower]-than linear signals. It can suppress noise and store a partially contrast-enhanced activity pattern. slower-than-linear saturates pattern; approximately linear- preserves pattern and normalizes; faster-than-linear- noise suppression and contrast-enhancement.
    || Sigmoidal signal: a hybrid. (upper) saturates pattern- slower-than-linear; (middle) preserves pattern and normalizes- approximately linear. (lower) noise suppression and contrast enhancement- faster-than-linear.
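    A tiny Python check of the hybrid behavior described above (not from the book; the form f(w) = w^2/(K + w^2) and the value of K are illustrative assumptions). Watching the gain ratio f(w)/w shows faster-than-linear, roughly linear, then slower-than-linear regimes:
      K = 0.25
      f = lambda w: w**2 / (K + w**2)
      for w in (0.05, 0.5, 2.0):
          print(w, round(f(w), 4), round(f(w) / w, 4))
      # f(w)/w rises (noise suppression at small activities), peaks near the
      # approximately linear range, then falls (saturation at high activities)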
  • image p078fig02.30 Choosing the adaptation level to achieve informational noise suppression.
    || Noise suppression. Attenuate Zero Spatial frequency patterns: no information. Ii vs i (flat line), xi vs i (flat line at zero)
    B >> C: Try B = (n - 1)*C or C/(B + C) = 1/n
    Choose a uniform input pattern (no distinctive features): All θi = 1/n
    xi = (B + C)*I/(A + I)*[θi -C/(B + C)] = 0 no matter how intense I is.
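    A short Python check of the statement above (not from the book; n, A, C are illustrative assumptions): with B = (n - 1)*C, a uniform input pattern is annihilated at every intensity:
      import numpy as np
      n, A, C = 5, 1.0, 1.0
      B = (n - 1) * C                        # noise suppression condition
      theta = np.full(n, 1.0 / n)            # uniform pattern: no information
      for I in (1.0, 10.0, 100.0):
          x = (B + C) * I / (A + I) * (theta - C / (B + C))
          print(I, x)                        # all zeros, no matter how big I is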
  • image p078fig02.31 How noise suppression enables matching of bottom-up and top-down input patterns.
    || Noise suppression -> pattern matching. mismatch (out of phase) suppressed, match (in phase) amplifies pattern.
  • image p080fig02.33 An opposite-attracts rule during the development of intracellular connections can lead to a mature network that realizes informational noise suppression.
    || How do noise suppression parameters arise? Symmetry-breaking during morphogenesis? Opposites attract rule.
    Intracellular parameters C/B = 1/(n - 1). Intercellular parameters.
    Predicts that intracellular excitatory and inhibitory saturation points can control the growth during development of intercellular excitatory and inhibitory connections.
  • image p080fig02.34 How to achieve informational noise suppression in a network with multiple parallel processing channels.
    || Symmetry-breaking: dynamics and anatomy.
    Dynamics:
    • excitatory range is amplified
    • inhibitory range is compressed
    Anatomy:
    • narrow on-center
    • broad off-surround
    Noise suppression: attenuates uniform patterns
    Contour direction: enhances pattern gradients
  • image p081fig02.36 Informational noise suppression in networks whose Gaussian on-center and off-surround kernels function as contour detectors that are sensitive to ratio-contrast.
    || Noise suppression and contour detection.
    If B*sum[k=1 to n: Cki] <= D*sum[k=1 to n: Eki] then:
    • uniform patterns are suppressed
    • contrasts are selectively enhanced
    • contours are detected
    Ii vs i, xi vs i
    Responses are selective to [REFLECTANCE, SPATIAL SCALE], eg color [feature, surface] contours.
  • image p510fig13.39 Shunting competition and informational noise suppression in affective gated dipoles, plus back-propagating action potentials for teaching signals, enable the net normalized adaptive weights to be learned. They never saturate!
    || Learn net dipole output pattern. Opponent "decision" controls learning. Cf. competitive learning. Learning signal, opponent extinction.
  • image p009fig01.06 Primacy gradient of activity stored in working memory within a recurrent shunting on-center off-surround network. Rehearsal is controlled by a nonspecific rehearsal wave and self-inhibitory feedback of the item that is currently being rehearsed. Green = excitatory, red = inhibitory
    || inputs? -> item and order WM storage -> competitive selection-> rehearsal wave -> outputs
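    A minimal Python sketch of rehearsal from a primacy gradient (not from the book; the stored gradient values are illustrative assumptions). Competitive selection reads out the most active item, which then self-inhibits, so items leave working memory in their stored order:
      import numpy as np
      wm = np.array([0.9, 0.7, 0.5, 0.3])    # primacy gradient over items 0..3
      order = []
      for _ in range(wm.size):
          i = int(np.argmax(wm))             # rehearsal wave selects the max
          order.append(i)
          wm[i] = 0.0                        # self-inhibition of rehearsed item
      print(order)                           # [0, 1, 2, 3]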
  • image p024fig01.15 A REcurrent Associative Dipole, or READ, circuit is a recurrent shunting on-center off-surround network with habituative transmitter gates. Sensory cues sample it with LTM traces and thereby become conditioned reinforcers.
    ||
  • image p038fig01.25 The ART Matching Rule stabilizes real time learning using a [top-down, modulatory on-center, off-surround] network. Object attention is realized by such a network. See text for additional discussion.
    || ART Matching Rule [volition, categories, features]. [one, two] against one.
  • image p073fig02.22 An on-center off-surround network is capable of computing input ratios.
    || Computing with patterns.
    How to compute the pattern-sensitive variable θi = Ii / sum[k=1 to n: Ik]?
    Needs interactions! What type? θi = Ii / (Ii + sum[k ≠ i: Ik])
    Ii↑ ⇒ θi↑ excitation; Ik↑ ⇒ θi↓, k ≠ i inhibition
    On-center off-surround network.
  • image p074fig02.23 The equations for a shunting on-center off-surround network. Shunting terms lead to many beautiful and important properties of these networks, which are found ubiquitously, in one form or another, in all cellular tissues.
    || Shunting on-center off-surround network.
    Mass action: d[dt: xi] = -A*xi +(B - xi)*Ii -xi*sum[k≠i: Ik]
    Turn on unexcited sites. Turn off excited sites.
    At equilibrium:
    0 = d[dt: xi] = -(A + Ii + sum[k≠i: Ik])*xi + B*Ii = -(A + I)*xi + B*Ii
    xi = B*Ii/(A + I) = B*θi*I/(A + I) = θi * B*I/(A + I). No saturation!
    Infinite dynamical range
    Automatic gain control
    Compute ratio scale
    Weber law
    x = sum[k=1 to n: xk] = B*I/(A + I) ≤ B Conserve total activity
    NORMALIZATION
    Limited capacity
    Real-time probability
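    A short Python check of the equilibrium xi = B*Ii/(A + I) above (not from the book; A, B and the input pattern are illustrative assumptions): relative activities are preserved at every intensity while total activity stays below B:
      import numpy as np
      A, B = 1.0, 1.0
      for scale in (1.0, 10.0, 100.0):
          Ii = scale * np.array([0.1, 0.3, 0.6])   # same pattern, more intense
          I = Ii.sum()
          x = B * Ii / (A + I)
          print(scale, np.round(x / x.sum(), 3), round(x.sum(), 3))
      # ratios stay [0.1, 0.3, 0.6] at every scale; total activity saturates
      # at B while individual cells never do: normalization without saturation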
  • image p075fig02.24 The membrane equations of neurophysiology describe how cell voltages change in response to excitatory, inhibitory, and passive input channels. Each channel is described by a potential difference multiplied by a conductance. With the special choices shown in the lower right-hand corner, this equation defines a feedforward shunting on-center off-surround network.
    || Membrane equations of neurophysiology.
    C*d[dt: V] = (V(+) - V)*g(+) +(V(-) - V)*g(-) +(V(p) - V)*g(p)
    Shunting equation (not additive)
    V Voltage
    V(+), V(-), V(p) Saturating voltages
    g(+), g(-), g(p) Conductances
    V(+) = B, C = 1; V(-) = V(p) = 0; g(+) = Ii; g(-) = sum[k≠i: Ik];
    lower bound of V: V(-) = V(p), silent inhibition; upper bound of V: V(+). (Howell: see p068fig02.14 Grossberg)
  • image p076fig02.25 An on-center off-surround network can respond to increasing on-center excitatory inputs without a loss of sensitivity. Instead, as the off-surround input increases, the operating range of a cell shifts to higher input intensities without compression.
  • image p076fig02.26 The mudpuppy retina exhibits the shift property that occurs in the feedforward shunting on-center off-surround network in Figure 2.25. As a result, its sensitivity also shifts in response to different background off-surrounds, and therefore exhibits no compression (dashed purple lines).
    || Mudpuppy retina neurophysiology.
    I center, J background
    a) Relative figure-to-ground
    b) Weber-Fechner I*(A + J)^(-1)
    c) No hyperpolarization, SHUNT: Silent inhibition
    d) Shift property (Werblin 1970): xi(K,J) vs K = ln(I)
    Adaptation- sensitivity shifts for different backgrounds. NO COMPRESSION.
  • image p077fig02.27 A schematic of the on-center off-surround network that occurs in the mudpuppy retina, including three main cell types: receptors, horizontal cells, and bipolar cells.
    || Mechanism: cooperative-competitive dynamics.
    On-center off-surround (Kuffler 1953) cat retina
    Subtractive lateral inhibition (Hartline, Ratliff 1956/7+) limulus retina.
    R receptor -> H horizontal -> B bipolar (Werblin, Dowling, etal 1969+) mudpuppy retina.
  • image p080fig02.34 How to achieve informational noise suppression in a network with multiple parallel processing channels.
    || Symmetry-breaking: dynamics and anatomy.
    Dynamics:
    • excitatory range is amplified
    • inhibitory range is compressed
    Anatomy:
    • narrow on-center
    • broad off-surround
    Noise suppression: attenuates uniform patterns
    Contour direction: enhances pattern gradients
  • image p081fig02.35 The equilibrium activities of a shunting network with Gaussian on-center off-surround kernels are sensitive to the ratio-contrasts of the input patterns that they process. The terms in the denominator of the equilibrium activities accomplish this using the shunting on-center and off-surround terms.
    || Ratio-contrast detector. flat versus [Gaussian Cki, flattened Gaussian? Eki]
    d[dt: xi] = -A*xi +(B - xi)*sum[k≠i: Ik]*Cki -(xi + D)*sum[k=1 to n: Ik*Eki]
    Cki = C*e^(-μ*(k - i)^2), Eki = E*e^(-ν*(k - i)^2) (narrow on-center width μ, broader off-surround width ν < μ)
    At equilibrium: xi = I*sum[k=1 to n: θk*Fki] / (A + I*sum[k=1 to n: θk*Gki])
    Fki = B*Cki -D*Eki (weighted D.O.G)
    Gki = Cki +Eki (sum of Gaussians, S.O.G.)
    • Reflectance processing
    • Contrast normalization
    • Discount illuminant
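    A small Python sketch of the equilibrium above, xi = (F@I)/(A + G@I) with Fki = B*Cki - D*Eki and Gki = Cki + Eki (not from the book; all parameter values, including the distinct kernel widths mu and nu and the near-balanced choice of D, are illustrative assumptions):
      import numpy as np
      n, A, B, C, D, E = 50, 1.0, 2.0, 1.0, 0.64, 1.0
      mu, nu = 0.5, 0.05                     # narrow center, broad surround
      k = np.arange(n)
      d2 = (k[:, None] - k[None, :]) ** 2
      Cki = C * np.exp(-mu * d2)             # on-center kernel
      Eki = E * np.exp(-nu * d2)             # off-surround kernel
      Ii = np.ones(n); Ii[20:30] += 1.0      # step edge on uniform background
      x = ((B * Cki - D * Eki) @ Ii) / (A + (Cki + Eki) @ Ii)
      print("uniform region:", round(x[5], 3), "around edge:", np.round(x[18:23], 3))
      # D is chosen so B*sum(Cki) <~ D*sum(Eki): uniform regions are suppressed
      # toward zero, while responses of opposite sign flank the contour; scaling
      # Ii up leaves this ratio-contrast response nearly unchanged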
  • image p081fig02.36 Informational noise suppression in networks whose Gaussian on-center and off-surround kernels function as contour detectors that are sensitive to ratio-contrast.
    || Noise suppression and contour detection.
    If B*sum[k=1 to n: Cki] <= D*sum[k=1 to n: Eki] then:
    • uniform patterns are suppressed
    • contrasts are selectively enhanced
    • contours are detected
    Ii vs i, xi vs i
    Responses are selective to [REFLECTANCE, SPATIAL SCALE], eg color [feature, surface] contours.
  • image p106fig03.24 In response to the Synthetic Aperture Radar image (upper left corner), a shunting on-center off-surround network "discounts the illuminant" and thereby normalizes cell activities to compute feature contours, without causing saturation (upper right corner). Multiple-scale boundaries form in response to spatially coherent activities in the feature contours (lower left corner) and create the webs, or containers, into which the feature contours fill-in the final surface representations (lower right corner).
    || Do these ideas work on hard problems? SAR!
    input image | feature contours | boundary contours | filled-in surface
    Synthetic Aperture Radar: sees through weather. 5 orders of magnitude of power in radar return. Discounting the illuminant:
    • normalizes the image: preserves RELATIVE activities without SATURATION
    • shows individual PIXELS
    boundaries complete between regions where normalized feature contrasts change | filling-in averages brightnesses within boundary compartments
  • image p176fig04.53 The on-center off-surround network within position and across depth helps to explain why brighter Kanizsa squares look closer.
    || inhibition vs. depth. p176c1h0.25 "... to qualitatively understand how this example of proximity-luminance covariance works. It follows directly from the boundary pruning by surface contour feedback signals (Figure 4.51) that achieves complementary consistency and initiates figure-ground perception. ...". p176c1h0.45 "... these inhibitory signals are part of an off-surround network whose strength decreases as the depth difference increases between the surface that generates the signal and its recipient boundaries. ...". p176c1h0.8 "... Within FACADE theory, the perceived depth of a surface is controlled by the boundaries that act as its filling-in generators and barriers (Figure 3.22), since these boundaries select the depth-selective FIDOs within which filling-in can occur, and thereby achieve surface capture. These boundaries, in turn, are themselves strengthened after surface-to-boundary contour feedback eliminates redundant boundaries that cannot support successful filling-in (Figure 4.51). These surface contour feedback signals have precisely the properties that are needed to explain why brighter Kanizsa squares look closer! ..."
  • image p192fig05.05 ON and OFF cells in the LGN respond differently to the sides and ends of lines.
    || [ON, OFF]-center, [OFF, ON]-surround (respectively). OFF-center cells maximum response at line end (interior), ON-center cells maximum response along sides (exterior)
  • image p253fig06.02 After bottom-up surface inputs activate spatial attentional cells, they send top-down topographic excitatory signals back to the surface representations. This recurrent shunting on-center off-surround network contrast enhances larger attentional activities while approximately normalizing the total spatial attentional activity. A surface-shroud resonance hereby forms that selects an attentional shroud, enhances the perceived contrast of the attended surface (light blue region), and maintains spatial attention on it.
    || Surface-shroud resonance. perceptual surfaces -> competition -> spatial attention. (Carrasco, Penpeci-Talgar, and Eckstein 2000, Reynolds and Desimone 2003)
  • image p300fig08.12 A single flash activates a Gaussian receptive field across space whose maximum is chosen by a winner-take-all recurrent on-center off-surround network.
    || Gaussian receptive fields are sufficient! (Grossberg, Rudd 1992). Single flash. Suppose that a single flash causes a narrow peak of activity at the position where it occurs. It generates output signals through a Gaussian filter that produces a Gaussian activity profile at the next processing stage. A recurrent on-center off-surround network chooses the maximum activity and suppresses smaller activities. Winner-take-all
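    A minimal Python sketch of this figure (not from the book; the array size and sigma are illustrative assumptions): a single flash is blurred by a Gaussian filter, and a winner-take-all stage keeps only the peak:
      import numpy as np
      n, sigma = 21, 3.0
      flash = np.zeros(n); flash[8] = 1.0    # narrow peak at the flash position
      pos = np.arange(n)
      G = np.exp(-0.5 * ((pos[:, None] - pos[None, :]) / sigma) ** 2)
      profile = G @ flash                    # Gaussian activity profile
      winner = int(np.argmax(profile))       # recurrent WTA keeps only the max
      print("winner:", winner)               # = 8, the flash position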
  • image p315fig08.35 The output signals from the directional grouping network obey the ART Matching Rule. They thereby select consistent motion directional signals while suppressing inconsistent ones, and do not distort what the spared cells code. The aperture problem is hereby solved by the same mechanism that dynamically stabilizes the learning of directional grouping cells.
    || How to select correct direction and preserve speed estimates? Prediction: Feedback from MSTv to MT- obeys ART Matching Rule; Top-down, modulatory on-center, off-surround network (Grossberg 1976, 1980; Carpenter, Grossberg 1987, 1991); Explains how directional grouping network can stably develop and how top-down directional attention can work. (Cavanagh 1992; Goner etal 1986; Sekuler, Ball 1977; Stelmach etal 1994). Directional grouping network (MSTv) <-> Directional long-range filter (MT). Modulatory on-center selects chosen direction and preserves speed. Off-surround inhibits incompatible directions.
  • image p340fig09.07 Log polar remapping from the retina to cortical area V1 and beyond converts expansion, translation, and spiral flows on the retina into parallel flows, with different orientations, on the cortical map.
    || Log polar remapping of optic flow. retina -> cortex. Any combination of expansion and circular motion centered on the fovea maps to cortex as a single direction. Retinal Cartesian coordinates (x,y) map to cortical polar coordinates (r,theta). This makes it easy to compute directional receptive fields in the cortex!
  • image p345fig09.15 Double opponent directional receptive fields in MT are capable of detecting the motion of objects relative to each other and their backgrounds.
    || Motion opponency in MT (Born, Tootell 1992). Motion opponent (Grossberg etal), Differential motion (Royden etal), Subtractive motion cells (Neumann etal). ON center directionally selective: [excit, inhibit]ed by motion in [one, opponent] direction. OFF surround directionally selective: [excit, inhibit]ed by motion in [opponent, center] direction.
  • image p354fig10.01 The laminar cortical circuit that realizes how we pay attention to an object sends signals from layer 6 of a higher cortical level to layer 6 of a lower cortical level and then back up to layer 4. This "folded feedback" circuit realizes a top-down, modulatory on-center, off-surround circuit that realizes the ART Matching Rule.
    || Top-down attention and folded feedback. Attentional signals also feed back into 6-to-4 on-center off-surround. 1-to-5-to-6 feedback path: Macaque (Lund, Booth 1975) cat (Gilbert, Wiesel 1979). V2-to-V1 feedback is on-center off-surround and affects layer 6 of V1 the most (Bullier etal 1996; Sandell, Schiller 1982). Attended stimuli enhanced, ignored stimuli suppressed. This circuit supports the predicted ART Matching Rule! [LGN, V[1,2][6->1]]
  • image p359fig10.06 Another, albeit indirect, pathway from LGN exists that can also excite layer 4 of V1. Why are these two pathways not redundant? The answer, ultimately, has to do with how the cortex learns, as well as with how it pays attention. See the text for details.
    || Another bottom-up input to layer 4: Why?? Layer 6-to-4 on-center off-surround (Grieve, Sillito 1991, 1995; Ahmed etal 1994, 1997). LGN projects to layers 6 and 4. Layer 6 excites spiny stellates in column above it. Medium range connections onto inhibitory neurons. 6-to-4 path acts as on-center off-surround.
  • image p359fig10.07 The two bottom-up pathways from LGN to layer 4 of V1 can together activate layer 4 and contrast-normalize layer 4 responses.
    || Bottom-up contrast normalization (Grossberg 1968, 1973; Sperling, Sondhi 1968; Heeger 1992; Douglas etal 1995; Shapley etal 2004). Together, direct LGN-to-4 path and 6-to-4 on-center off-surround provide contrast normalization if cells obey shunting or membrane equation dynamics.
  • image p360fig10.08 The bottom-up on-center off-surround from LGN-to-6-to-4 has a modulatory on-center because of its role in realizing the ART Matching Rule and, with it, the ability of the cortex to dynamically stabilize its learned memories.
    || Modulation of priming by 6-to-4 on-center (Stratford etal 1996; Callaway 1998). On-center 6-to-4 excitation is inhibited down to being modulatory (priming, subthreshold). On-center 6-to-4 excitation cannot activate layer 4 on its own. Clarifies need for direct path. Prediction: plays key role in stable grouping, development and learning. ART Matching Rule!
  • image p362fig10.11 Feedback between layer 2/3 to the layer 6-to-4-to-2/3 feedback loop chooses the strongest grouping in cases where there is more than one. If only one grouping exists, then the circuit can function very quickly in a feedforward manner. When multiple groupings exist, the cortex "runs as fast as it can" to select the one with the most evidence to support it using the self-normalizing inhibition in the layer 6-to-4 off-surround.
    || How is the final grouping selected? Folded feedback LGN-> 6-> 4-> 2/3. 1. Layer 2/3 groupings feed back into 6-to-4 on-center off-surround: a) direct layer 2/3 -to-6 path; b) can also go via layer 5 (Blasdel etal 1985; Kisvarday etal 1989). 2. Strongest grouping enhanced by its on-center. 3. Inputs to weaker groupings suppressed by off-surround. 4. Interlaminar feedback creates functional columns. Activities of conflicting groupings are reduced by self-normalizing inhibition, slowing processing; intracortical feedback selects and contrast-enhances the winning grouping, speeding processing.
  • image p364fig10.13 The bottom-up adaptive filter, intracortical grouping circuit, and intercortical top-down attentional circuit all use the same competitive decision circuit between layers 6 and 4, called the attention-preattention interface, with which to select the featural patterns that will be processed.
    || Bottom-up filters and intracortical grouping feedback use the same 6-to-4 decision circuit, LGN-> Vx[6,4,2/3]. competitive decision circuit, modulatory on-center off-surround network. Top-down intercortical attention also uses the same 6-to-4 decision circuit!
  • image p364fig10.14 This figure emphasizes how preattentive intracortical groupings and top-down intercortical attention share the same modulatory on-center, off-surround layer 4-to-6 decision circuit.
    || Explanation: grouping and attention share the same modulatory decision circuit. Layer 6-6-4-2/3 pathway shown; also a layer 6-1-2/3 path. intercortical attention, both act via a modulatory on-center off-surround decision circuit, intracortical feedback from groupings
  • image p367fig10.15 Data (left column) and simulation (right column) of how attention prevents a masking stimulus from inhibiting the response to the on-center of the cell from which the recording was made.
    || Attention protects target from masking stimulus (Reynolds etal 1999; Grossberg, Raizada 2000).
  • image p375fig11.07 The 3D LAMINART model uses both monocular and binocular simple cells to binocularly fuse like image contrasts. The remainder of the model generates 3D boundary and surface representations of multiple kinds of experiments as well as of natural scenes.
    || 3D LAMINART model (Cao, Grossberg 2005). [left, right] eye cart [LGN, V1 blob [4->3B->2/3A], V2 thin stripe [4->2/3A], V4]. V1 blob [V1-4 monocular, V1 interior binocular] simple cells. [complex, simple, inhibitory] cells, on-center off-surround
  • image p376fig11.10 The 3D LAMINART model shows how the disparity filter can be integrated into the circuit that completes 3D boundary representations using bipole grouping cells. It also explains how surface contours can strengthen boundaries that succeed in generating closed filling-in domains.
    || 3D LAMINART model (Cao, Grossberg 2005). [left, right] eye cart [LGN, V1 blob [4->3B->2/3A] surface contour, V2 thin stripe (monocular surface) [4->2/3A], V2 interior [disynaptic inhibitory interneurons, bipole grouping cells, disparity filter, V4 binocular surface]. [complex, simple, inhibitory] cells, on-center off-surround
  • image p423fig12.20 The Spatial Pitch Network, or SPINET, model shows how a log polar spatial representation of the sound frequency spectrum can be derived from auditory signals occurring in time. The spatial representation allows the ARTSTREAM model to compute spatially distinct auditory streams.
    || SPINET model (Spatial Pitch Network) (Cohen, Grossberg, Wyse 1995). 1. input sound 2. Gamma-tone filter bank 3. Short-term average energy spectrum 4. MAP transfer function 5. On-center off-surround and rectification 6. Harmonic weighting 7. Harmonic summation and competition -> PITCH
  • image p448fig12.46 A Masking Field working memory is a multiple-scale self-similar recurrent shunting on-center off-surround network. It can learn list chunks that respond selectively to lists of item chunks of variable length that are stored in an item working memory at the previous processing stage. Chunks that code for longer lists (eg MY vs MYSELF) are larger, and give rise to stronger recurrent inhibitory neurons (red arrows).
    || How to code variable length lists? MASKING FIELDS code list chunks of variable length (Cohen, Grossberg 1986, 1987; Grossberg, Kazerounian 2011, 2016; Grossberg, Meyers 2000; Grossberg, Pearson 2008). Multiple-scale self-similar WM: Masking field, adaptive filter. Variable length coding - Masking fields select list chunks that are sensitive to WM sequences of variable length; Selectivity - Larger cells selectively code longer lists; Asymmetric competition - Larger cells can inhibit smaller cells more than conversely: Magic Number 7! Temporal order - different list chunks respond to the same items in different orders eg LEFT vs FELT.
  • image p564fig15.35 (a) A pair of recurrent shunting on-center off-surround networks for control of the fore limbs and hind limbs. (b) Varying the GO signal to these networks can trigger changes in movement gaits. See the text for details.
    ||
  • image p567fig15.38 (a) The Gated Pacemaker model for the control of circadian rhythms is a recurrent shunting on-center off-surround network whose excitatory feedback signals are gated by habituative transmitters. Tonic arousal signals energize the pacemaker. Diurnal (left) and nocturnal (right) pacemakers are determined by whether phasic light signals turn the pacemaker on or off. An activity-dependent fatigue signal prevents the pacemaker from becoming overly active for too long. (b) Two simulations of circadian activity cycles during different schedules of light (L) and dark (D). See the text for details.
    || sourceOn-> on-cells (recurrent) <-(-) (-)> off-cells (recurrent) <-sourceOff. on-cells-> activity-> off-cells. off-cells-> fatigue. Diurnal: sourceOn=[light, arousal]; sourceOff=arousal;. Nocturnal: sourceOn=arousal; sourceOff=[arousal, light];.
  • image p586fig16.16 In the place cell learning model of (Gorchetnikov, Grossberg 2007), three populations of entorhinal grid cells (only two are shown), each with five cells and a different spatial period, provide inputs to the model
  • image p627tbl17.01 Homologs between reaction-diffusion and recurrent shunting cellular network models of development.
    || byRows: (reaction-diffusion, recurrent shunting net) (activator, excitatory activity) (inhibitor, inhibitory activity) (morphogenic source density, inputs) (firing of morphogen gradient, contrast enhancement) (maintenance of morphogen gradient, short-term memory) (power or sigmoidal signal functions, power or sigmoidal signal functions) (on-center off-surround interactions via diffusion, on-center off-surround interactions via signals) (self-stabilizing distributions of morphogens if inhibitors equilibrate rapidly, short-term memory pattern if inhibitors equilibrate rapidly) (periodic pulses if inhibitors equilibrate slowly, periodic pulses if inhibitors equilibrate slowly) (regulation, adaptation).
  • image p016fig01.11 A sufficiently big mismatch between a bottom-up input pattern and a top-down expectation can activate the orienting system, which triggers a burst of nonspecific arousal that can reset the recognition category that read out the expectation. In this way, unexpected events can reset short-term memory and initiate a search for a category that better represents the current situation.
    || [category- top-down (TD) expectation; Bottom-up (BU) input pattern] -> Feature pattern -> BU-TD mismatch -> orienting system -> non-specific arousal -> category.
  • image p038fig01.25 The ART Matching Rule stabilizes real time learning using a [top-down, modulatory on-center, off-surround] network. Object attention is realized by such a network. See text for additional discussion.
    || ART Matching Rule [volition, categories, features]. [one, two] against one.
  • image p052fig02.02 Feature-category resonances enable us to rapidly learn how to recognize objects without experiencing catastrophic forgetting. Attentive matching between bottom-up feature pattern inputs and top-down expectations prevents catastrophic forgetting by focussing object attention upon expected patterns of features, while suppressing outlier features that might otherwise have caused catastrophic forgetting if they were learned also.
    || Adaptive Resonance. Attended feature clusters reactivate bottom-up pathways. Activated categories reactivate their top-down pathways. Categories STM, Feature patterns STM. Feature-Category resonance [synchronize, amplify, prolong]s system response. Resonance triggers learning in bottom-up and top-down adaptive weights: adaptive resonance!
  • image p078fig02.31 How noise suppression enables matching of bottom-up and top-down input patterns.
    || Noise suppression -> pattern matching. mismatch (out of phase) suppressed, match (in phase) amplifies pattern.
  • image p079fig02.32 Matching amplifies the matched pattern due to automatic gain control. See terms I and J in the equation.
    || Substrate of resonance. Match (in phase) of BU and TD input patterns AMPLIFIES matched pattern due to automatic gain control by shunting terms. J = sum[i: Ji], I = sum[i: Ii], θi = (Ii + Ji)/(I + J)
    xi = (B + C)*(I + J)/(A + I + J)*[θi -C/(B + C)]
    Need top-down expectations to be MODULATORY.
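    A short Python check of the matching equation above (not from the book; the patterns and parameters are illustrative assumptions, with B = (n - 1)*C): an in-phase top-down pattern J amplifies the matched features, while an out-of-phase J suppresses them:
      import numpy as np
      n, A, C = 4, 1.0, 1.0
      B = (n - 1) * C
      def x(Ii, Ji):
          I, J = Ii.sum(), Ji.sum()
          theta = (Ii + Ji) / (I + J)
          return (B + C) * (I + J) / (A + I + J) * (theta - C / (B + C))
      Ii = np.array([4.0, 0.0, 0.0, 0.0])              # bottom-up input
      print("alone:   ", np.round(x(Ii, 0.0 * Ii), 2))
      print("match:   ", np.round(x(Ii, 2.0 * Ii), 2))
      print("mismatch:", np.round(x(Ii, np.array([0.0, 0.0, 0.0, 8.0])), 2))
      # match raises both the gain (I+J) and theta_1, amplifying the pattern;
      # mismatch flattens theta, so the active feature falls toward the
      # noise-suppression cut C/(B+C)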
  • image p087fig03.01 A macrocircuit of key visual processes (in green) and the cortical areas in which they primarily occur (in red), from the retina to the Prefrontal Cortex (PFC), including both the What and Where cortical streams. The [bottom-up, horizontal, and top-down] interactions help each of these processes to overcome computationally complementary processing deficiencies that they would experience without them, and also to read out top-down expectations that help to stabilize learning while they focus attention on salient objects and positions.
    || Emerging unified theory of visual intelligence. [What, Where] streams. Bottom-up and top-down interactions overcome COMPLEMENTARY processing deficiencies.
  • image p091fig03.04 A cross-section of the eye, and a top-down view of the retina, show how the blind spot and retinal veins can occlude the registration of light signals at their positions on the retina.
    || Eye: [optic nerve, ciliary body, iris, lens, pupil, cornea, sclera, choroid, retina]. Human retina: [fovea, blind spot, optic nerve]. See also cross-section of retinal layer.
  • image p163fig04.39 A schematic of the LAMINART model that explains key aspects of laminar visual cortical anatomy and dynamics. LGN -> V1 [6, 4, 2/3] -> V2 [6, 4, 2/3]
    || p163c1h0.6 "... The first article about laminar computing ... proposed how the laminar cortical model could process 2D pictures using bottom-up filtering and horizontal bipole grouping interactions (Grossberg, Mingolla, Ross 1997). In 1999, I was able to extend the model to also include top-down circuits for expectation and attention (Grossberg 1999) (right panel). Such a synthesis of laminar bottom-up, horizontal, and top-down circuits is characteristic of the cerebral cortex (left panel). I called it LAMINART because it began to show how properties of Adaptive Resonance Theory, or ART, notably the ART prediction about how top-down expectations and attention work, are realized by identified cortical cells and circuits. You can immediately see from the schematic laminar circuit diagram ... (right panel) that circuits in V2 seem to repeat circuits in V1, albeit with a larger spatial scale, despite the fact that V1 and V2 carry out different functions. How this anatomical similarity can coexist with functional diversity will be clarified in subsequent sections and chapters. It enables different kinds of biological intelligence to communicate seamlessly while carrying out their different psychological functions. ..."
  • image p192fig05.06 Bottom-up and top-down circuits between the LGN and cortical area V1. The top-down circuits obey the ART Matching Rule for matching with bottom-up input patterns and focussing attention on expected critical features.
    || Model V1-LGN circuits, version [1, 2]. retina -> LGN relay cells -> interneurons -> cortex [simple, endstopped] cells -> cortex complex cells
  • image p193fig05.08 The patterns of LGN activation and inhibition on the sides and ends of a line without the top-down feedback (A) and with it (C). The top-down distribution of excitation (+) and inhibition (-) are shown in (B).
    ||
  • image p199fig05.11 Instar learning enables a bottom-up adaptive filter to become selectively tuned to particular feature patterns. Such pattern learning needs adaptive weights that can either increase or decrease to match the featural activations that they filter.
    || Instar learning STM->LTM: need both increases and decreases in strength for the LTM pattern to learn the STM pattern
  • image p200fig05.13 Instar and outstar learning are often used to learn the adaptive weights in the bottom-up filters and top-down expectations that occur in ART. The ART Matching Rule for object attention enables top-down expectations to select, amplify, and synchronize expected patterns of critical features, while suppressing unexpected features.
    || Expectations focus attention: feature pattern (STM), Bottom-Up adaptive filter (LTM), Category (STM), competition, Top-Down expectation (LTM); ART Matching Rule: STM before top-down matching, STM after top-down matching (attention!)
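    A minimal Python sketch of instar and outstar learning as used above (not from the book; the gating value, learning rate, and pattern are illustrative assumptions). Both weight vectors track the active feature pattern, so the bottom-up filter and the top-down expectation converge on the same critical features:
      import numpy as np
      lr, yj = 0.5, 1.0                      # learning rate; winner's activity
      x = np.array([0.6, 0.3, 0.1])          # STM feature pattern
      w_in = np.zeros(3)                     # instar: weights into the category
      w_out = np.zeros(3)                    # outstar: weights read out by it
      for _ in range(10):
          w_in += lr * yj * (x - w_in)       # weights can increase or decrease
          w_out += lr * yj * (x - w_out)     # expectation learns the same pattern
      print(np.round(w_in, 3), np.round(w_out, 3))   # both converge toward x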
  • image p211fig05.20 The PN and N200 event-related potentials are computationally complementary events that are computed within the attentional and orienting systems.
    || PN and N200 are complementary waves. PN [top-down, conditionable, specific] match; N200 [bottom-up, unconditionable, nonspecific] mismatch
  • image p214fig05.24 Learning of a top-down expectation must occur during bottom-up learning in the adaptive filter in order to be able to match the previously associated feature pattern with the one that is currently active.
    || Learning top-down expectations. When the code (green right triangle GRT) for X1 was learned at F2, GRT learned to read-out X1 at F1. [Bottom-Up, Top-Down] learning
  • image p214fig05.25 The sequence of events whereby a novel input pattern can activate a category which, in turn, reads out its learned top-down expectation to be matched against the input pattern. Error correction thus requires the use of a Match Detector that has properties of the Processing Negativity ERP.
    || How is an error corrected? During bottom-up learning, top-down learning must also occur so that the pattern that is read out top-down can be compared with the pattern that is activated by bottom-up inputs. Match detector: Processing Negativity ERP. 1. top-down, 2. conditionable, 3. specific, 4. match
  • image p214fig05.26 When a big enough mismatch occurs, the orienting system is activated and sends a burst of nonspecific arousal to the category level. This Mismatch Detector has properties of the N200 ERP.
    || Mismatch triggers nonspecific arousal. Mismatch at F1 elicits a nonspecific event at F2. Call this event nonspecific arousal. N200 ERP Naatanen etal: 1. bottom-up, 2. unconditionable, 3. nonspecific, 4. mismatch
  • image p215fig05.28 How a mismatch between bottom-up and top-down input patterns can trigger activation of the orienting system A and, with it, a burst of nonspecific arousal to the category level.
    || Mismatch -> inhibition -> arousal -> reset. BU input orienting arousal, BU+TD mismatch arousal and reset. ART Matching Rule: TD mismatch can suppress a part of F1 STM pattern, F2 is reset if degree of match < vigilance
  • image p220fig05.29 Vigilance is a gain parameter on inputs to the orienting system that regulates whether net excitation from bottom-up inputs or inhibition from activated categories will dominate the orienting system. If excitation wins, then a memory search for a better matching will occur. If inhibition wins, then the orienting system will remain quiet, thereby enabling resonance and learning to occur.
    || Vigilance control [resonate and learn, reset and search]. ρ is a sensitivity or gain parameter
  • image p221fig05.30 When a predictive disconfirmation occurs, vigilance increases enough to drive a search for a more predictive category. If vigilance increases just enough to exceed the analog match between features that survive top-down matching and the entire bottom-up input pattern, then minimax learning occurs. In this case, the minimum amount of category generalization is given up to correct the predictive error.
    || Match tracking realizes minimax learning principle. Given a predictive error, vigilance increases just enough to trigger search and thus sacrifices the minimum generalization to correct the error ... and enables expert knowledge to be incrementally learned. predictive error -> vigilance increase just enough -> minimax learning
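    A small Python sketch of the vigilance test and match tracking (not from the book; the binary patterns and the fuzzy-ART-style min matching are illustrative assumptions): resonate if |input AND prototype| / |input| >= rho, otherwise reset; after a predictive error, rho is raised just above the current match ratio:
      import numpy as np
      def match_ratio(I, P):
          return np.minimum(I, P).sum() / I.sum()   # features surviving TD match
      I = np.array([1, 1, 1, 0, 0])
      P = np.array([1, 1, 0, 0, 0])
      rho = 0.5
      print(match_ratio(I, P) >= rho)        # True: resonate and learn
      rho = match_ratio(I, P) + 0.01         # match tracking after an error
      print(match_ratio(I, P) >= rho)        # False: reset and search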
  • image p221fig05.31 A system like Fuzzy ARTMAP can learn to associate learned categories in one ART network with learned categories in a second ART network. Because both bottom-up and top-down interactions occur in both networks, a bottom-up input pattern to the first ART network can learn to generate a top-down output pattern from the second ART network.
    || Fuzzy ARTMAP. Match tracking realizes minimax learning principle: vigilance increases to just above the match ratio of prototype / exemplar, thereby triggering search
  • image p226fig05.35 I had shown in 1976 how a competitive learning or self-organizing map model could undergo catastrophic forgetting if the input environment was sufficiently dense and nonstationary, as illustrated by Figure 5.18. Later work with Gail Carpenter showed how, if the ART Matching Rule was shut off, repeating just four input patterns in the correct order could also cause catastrophic forgetting by causing superset recoding, as illustrated in Figure 5.36.
    || Code instability input sequences. D C A; B A; B C = ; |D|<|B|<|C|; where |E| is the number of features in the set E. Any set of input vectors that satisfy the above conditions will lead to unstable coding if they are periodically presented in the order ABCAD and the top-down ART Matching Rule is shut off.
  • image p246fig05.48 Microcircuits of the LAMINART model that I developed with Rajeev Raizada. See the text for details of how they integrate bottom-up adaptive filtering, horizontal bipole grouping, and top-down attentional matching that satisfies the ART Matching Rule.
    ||
  • image p252fig06.01 A surface-shroud resonance begins to form when the surface representations of objects bid for spatial attention. In addition to these topographic excitatory inputs, there is long-range inhibition of the spatial attention cells that determines which inputs will attract spatial attention.
    || Bottom-up spatial attention competition. [more, less] luminous perceptual surfaces -> competition -> spatial attention
  • image p253fig06.02 After bottom-up surface inputs activate spatial attentional cells, they send top-down topographic excitatory signals back to the surface representations. This recurrent shunting on-center off-surround network contrast enhances larger attentional activities while approximately normalizing the total spatial attentional activity. A surface-shroud resonance hereby forms that selects an attentional shroud, enhances the perceived contrast of the attended surface (light blue region), and maintains spatial attention on it.
    || Surface-shroud resonance. perceptual surfaces -> competition -> spatial attention. (Carrasco, Penpeci-Talgar, and Eckstein 2000, Reynolds and Desimone 2003)
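    A toy simulation of such a recurrent shunting on-center off-surround network, with assumed parameters and a faster-than-linear feedback signal f(x) = x^2 (a sketch, not the model's fitted equations):

      import numpy as np

      def run_shroud(I, A=1.0, B=1.0, dt=0.01, steps=5000):
          """Contrast-enhance the largest surface input while total
          activity stays bounded (approximately normalized)."""
          x = np.zeros_like(I, dtype=float)
          for _ in range(steps):
              f = x ** 2                                   # recurrent feedback signal
              on = I + f                                   # on-center: input plus self-excitation
              off = f.sum() - f                            # off-surround: everyone else's feedback
              x += dt * (-A * x + (B - x) * on - x * off)  # shunting membrane equation
          return x

      surfaces = np.array([0.9, 1.0, 0.8])  # more vs less luminous surfaces
      print(run_shroud(surfaces).round(3))  # the most luminous surface wins spatial attention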
  • image p258fig06.07 A top-down spotlight of attention can also be converted into a shroud. This process begins when the spotlight triggers surface filling-in within a region. Figure 6.8 shows how it is completed.
    || Reconciling spotlights and shrouds: top-down attentional spotlight becomes a shroud. spotlight of attention, surface filling-in
  • image p286fig07.04 Illusory contours persist longer than real contours because real contours have more inducers whose rebound at contour offset can cause faster boundary reset. Illusory contours also take longer to form than real contours, which explains the increasing portion of the curve.
    || Persistence data and simulations (Meyer, Ming 1988; Reynolds 1981). Increasing portion of curve is due to formation time of the illusory contour. Longer persistence is due to fewer bottom-up inducers of an illusory contour that has the same length as a real contour: only illuminance-derived edges generate reset signals. When bottom-up inducers are inhibited by OFF cell rebounds, their offset gradually propagates to the center of the illusory contour.
  • image p286fig07.05 This figure shows the propagation through time of illusory contour offset from the rebounded cells that got direct inputs to the center of the contour.
    || Persistence data and simulations. Illusory contours persist longer than real contours (Meyer, Ming 1988; Reynolds 1981). When bottom-up inducers are inhibited by OFF cell rebounds, their offset gradually propagates to the center of the illusory contour.
  • image p315fig08.35 The output signals from the directional grouping network obey the ART Matching Rule. They thereby select consistent motion directional signals while suppressing inconsistent ones, and do not distort what the spared cells code. The aperture problem is hereby solved by the same mechanism that dynamically stabilizes the learning of directional grouping cells.
    || How to select correct direction and preserve speed estimates? Prediction: Feedback from MSTv to MT- obeys ART Matching Rule; Top-down, modulatory on-center, off-surround network (Grossberg 1976, 1980; Carpenter, Grossberg 1987, 1991); Explains how directional grouping network can stably develop and how top-down directional attention can work. (Cavanagh 1992; Goner etal 1986; Sekuler, Ball 1977; Stelmach etal 1994). Directional grouping network (MSTv) <-> Directional long-range filter (MT). Modulatory on-center selects chosen direction and preserves speed. Off-surround inhibits incompatible directions.
  • image p330fig08.52 Direction fields of the object frame (left column) and of the two dot "parts" (right column) show the correct motion directions after the peak shift top-down expectation acts.
    || Simulation of motion vector decomposition. [Larger scale (nearer depth), Small scale (farther depth)] vs [Down, Up]
  • image p331fig08.54 The simulated part directions of the rotating dot through time after the translational motion of the frame does its work via the top-down peak shift mechanism.
    || Cycloid. Motion directions of a single dot moving slowly along a cycloid curve through time.
  • image p354fig10.01 The laminar cortical circuit that realizes how we pay attention to an object sends signals from layer 6 of a higher cortical level to layer 6 of a lower cortical level and then back up to layer 4. This "folded feedback" circuit implements a top-down, modulatory on-center, off-surround network that realizes the ART Matching Rule.
    || Top-down attention and folded feedback. Attentional signals also feed back into 6-to-4 on-center off-surround. 1-to-5-to-6 feedback path: Macaque (Lund, Booth 1975) cat (Gilbert, Wiesel 1979). V2-to-V1 feedback is on-center off-surround and affects layer 6 of V1 the most (Bullier etal 1996; Sandell, Schiller 1982). Attended stimuli enhanced, ignored stimuli suppressed. This circuit supports the predicted ART Matching Rule! [LGN, V[1,2][6->1]]
  • image p356fig10.03 Laminar computing achieves at least three basic properties of visual processing that have analogs in all biologically intelligent behaviors. These properties may be found in all cortical circuits in specialized form.
    || What does Laminar Computing achieve? 1. Self-stabilizing development and learning; 2. Seamless fusion of a) pre-attentive automatic bottom-up processing, b) attentive task-selective top-down processing; 3. Analog coherence: Solution of Binding Problem for perceptual grouping without loss of analog sensitivity. Even the earliest visual cortical stages carry out active adaptive information processing: [learn, group, attention]ing
  • image p359fig10.05 Activation of V1 is initiated, in part, by direct excitatory signals from the LGN to layer 4 of V1.
    || How are layer 2/3 bipole cells activated? Direct bottom-up activation of layer 4. LGN -> V1 layer 4. Strong bottom-up LGN input to layer 4 (Stratford etal 1996; Chung, Ferster 1998). Many details omitted.
  • image p359fig10.06 Another, albeit indirect, pathway from LGN exists that can also excite layer 4 of V1. Why are these two pathways not redundant? The answer, ultimately, has to do with how the cortex learns, as well as with how it pays attention. See the text for details.
    || Another bottom-up input to layer 4: Why?? Layer 6-to-4 on-center off-surround (Grieve, Sillito 1991, 1995; Ahmed etal 1994, 1997). LGN projects to layers 6 and 4. Layer 6 excites spiny stellates in column above it. Medium range connections onto inhibitory neurons. 6-to-4 path acts as on-center off-surround.
  • image p359fig10.07 The two bottom-up pathways from LGN to layer 4 of V1 can together activate layer 4 and contrast-normalize layer 4 responses.
    || Bottom-up contrast normalization (Grossberg 1968, 1973; Sperling, Sondhi 1968; Heeger 1992; Douglas etal 1995; Shapley etal 2004). Together, direct LGN-to-4 path and 6-to-4 on-center off-surround provide contrast normalization if cells obey shunting or membrane equation dynamics.
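    At steady state, a feedforward shunting on-center off-surround network normalizes contrast as x(i) = B*I(i) / (A + sum(I)); the constants below are assumed for illustration:

      import numpy as np

      def normalized(I, A=1.0, B=1.0):
          """Steady-state activities of a shunting on-center off-surround net."""
          return B * I / (A + I.sum())

      I = np.array([2.0, 1.0, 1.0])
      print(normalized(I))       # pattern reflects relative input sizes
      print(normalized(10 * I))  # 10x brighter scene: same ratios, bounded total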
  • image p360fig10.08 The bottom-up on-center off-surround from LGN-to-6-to-4 has a modulatory on-center because of its role in realizing the ART Matching Rule and, with it, the ability of the cortex to dynamically stabilize its learned memories.
    || Modulation of priming by 6-to-4 on-center (Stratford etal 1996; Callaway 1998). On-center 6-to-4 excitation is inhibited down to being modulatory (priming, subthreshold). On-center 6-to-4 excitation cannot activate layer 4 on its own. Clarifies need for direct path. Prediction: plays key role in stable grouping, development and learning. ART Matching Rule!
  • image p364fig10.13 The bottom-up adaptive filter, intracortical grouping circuit, and intercortical top-down attentional circuit all use the same competitive decision circuit between layers 6 and 4, called the attention-preattention interface, with which to select the featural patterns that will be processed.
    || Bottom-up filters and intracortical grouping feedback use the same 6-to-4 decision circuit, LGN-> Vx[6,4,2/3]. competitive decision circuit, modulatory on-center off-surround network. Top-down intercortical attention also uses the same 6-to-4 decision circuit!
  • image p364fig10.14 This figure emphasizes how preattentive intracortical groupings and top-down intercortical attention share the same modulatory on-center, off-surround layer 4-to-6 decision circuit.
    || Explanation: grouping and attention share the same modulatory decision circuit. Layer 6-6-4-2/3 pathway shown; also a layer 6-1-2/3 path. intercortical attention, both act via a modulatory on-center off-surround decision circuit, intracortical feedback from groupings
  • image p420fig12.18 The ARTSTREAM model explains and simulates the auditory continuity illusion as an example of a spectral-pitch resonance. Interactions of ART Matching Rule and asymmetric competition mechanisms in cortical strip maps explain how the tone selects the consistent frequency from the noise in its own stream while separating the rest of the noise into another stream.
    || ARTSTREAM model (Grossberg 1999; Grossberg, Govindarajan, Wyse, Cohen 2004). SPINET. Frequency and pitch strips. Bottom Up (BU) harmonic sieve. Top Down (TD) harmonic ART matching. Exclusive allocation. Learn pitch categories based on early harmonic processing. A stream is a Spectral-Pitch Resonance!
  • image p441fig12.38 The LTM Invariance Principle is realized if the relative sizes of the inputs to the list chunk level stay the same as more items are stored in working memory. This property, in turn, follows from shunting previously stored working memory activities when a new item occurs.
    || LTM Invariance principle. Choose STM activities so that newly stored STM activities may alter the size of old STM activities without recoding their LTM patterns. In particular: New events do not change the relative activities of past event sequences, but may reduce their absolute activities. Why? Bottom-up adaptive filtering uses dot products: T(j) = sum[i=1 to n: x(i)*z(i,j)] = total input to v(j). The relative sizes of inputs to coding nodes v(j) are preserved. x(i) -> w*x(i), 0 < w <= 1, leaves all past ratios T(j)/T(k) unchanged.
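    A numerical check of this invariance claim (illustrative random numbers, not data):

      import numpy as np

      rng = np.random.default_rng(0)
      x = rng.random(5)       # STM activity pattern x(i) in working memory
      Z = rng.random((5, 3))  # adaptive weights z(i,j) to list chunks v(j)

      T = x @ Z               # total inputs T(j) before a new item is stored
      Tw = (0.6 * x) @ Z      # after shunting all activities by a common w = 0.6
      print(T / T[0])         # identical ratios T(j)/T(k) ...
      print(Tw / Tw[0])       # ... so the list chunks are not recoded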
  • image p449fig12.47 This figure illustrates the self-similarity in a Masking Field of both its recurrent inhibitory connections (red arrows) and its top-down excitatory priming signals (green arrows) to the item chunk working memory.
    || Both recurrent inhibition and top-down excitatory priming are self-similar in a masking field. MYSELF <-> [MY, MYSELF]
  • image p453fig12.50 The ARTWORD perception cycle shows how sequences of items activate possible list chunks, which compete among each other and begin to send their top-down expectations back to the item working memory. An item-list resonance develops through time as a result.
    || ARTWORD perception cycle. (a) bottom-up activation (b) list chunk competition (c) item-list resonance (d) chunk reset due to habituative collapse.
  • image p483fig13.02 The object-value categories in the orbitofrontal cortex require converging specific inputs from the sensory cortex and nonspecific incentive motivational inputs from the amygdala in order to fire. When the orbitofrontal cortex fires, it can deliver top-down ART Matching Rule priming signals to the sensory cortical area by which it was activated, thereby helping to choose the active recognition categories there that have the most emotional support, while suppressing others, leading to attentional blocking of irrelevant cues.
    || Cognitive-Emotional-Motor (CogEM) model. Drive-> amygdala incentive motivational learning-> orbitofrontal cortex- needs converging cue and incentive inputs to fire <-> sensory cortex- conditioned reinforcer learning-> amygdala. CS-> sensory cortex. Motivated attention closes the cognitive-emotional feedback loop, focuses on relevant cues, and causes blocking of irrelevant cues.
  • image p484fig13.04 The top-down feedback from the orbitofrontal cortex closes a feedback loop that supports a cognitive-emotional resonance. If this resonance can be sustained long enough, it enables us to have feelings at the same time that we experience the categories that caused them.
    || Cognitive-Emotional resonance. Basis of "core consciousness" and "the feeling of what happens". (Damasio 1999) derives heuristic version of CogEM model from his clinical data. Drive-> amygdala-> prefrontal cortex-> sensory cortex, resonance around the latter 3. How is this resonance maintained long enough to become conscious?
  • image p523fig14.03 (a) The MOTIVATOR neural model generalizes CogEM by also including the basal ganglia. It can hereby explain and simulate complementary functions of the amygdala and basal ganglia (SNc) during conditioning and learned performance. The basal ganglia generate Now Print signals in response to unexpected rewards. These signals modulate learning of new associations in many brain regions. The amygdala supports motivated attention to trigger actions that are expected to occur in response to conditioned or unconditioned stimuli. Object Categories represent visual or gustatory inputs in anterior inferotemporal (ITA) and rhinal (RHIN) cortices, respectively. Value Categories represent the value of anticipated outcomes on the basis of hunger and satiety inputs, in amygdala (AMYG) and lateral hypothalamus (LH). Object-Value Categories resolve the value of competing perceptual stimuli in medial (MORB) and lateral (ORB) orbitofrontal cortex. The Reward Expectation Filter detects the omission or delivery of rewards using a circuit that spans ventral striatum (VS), ventral pallidum (VP), striosomal delay (SD) cells in the ventral striatum, the pedunculopontine nucleus (PPTN) and midbrain dopaminergic neurons of the substantia nigra pars compacta/ventral tegmental area (SNc/VTA). The circuit that processes CS-related visual information (ITA, AMYG, ORB) operates in parallel with a circuit that processes US-related visual and gustatory information (RHIN, AMYG, MORB). (b) Reciprocal adaptive connections between hypothalamus and amygdala enable amygdala cells to become learned value categories. The bottom region represents hypothalamic cells, which receive converging taste and metabolite inputs whereby they become taste-drive cells. Bottom-up signals from activity patterns across these cells activate competing value categories, or US Value Representations, in the amygdala. A winning value category learns to respond selectively to specific combinations of taste-drive activity patterns and sends adaptive top-down priming signals back to the taste-drive cells that activated it. CS-activated conditioned reinforcer signals are also associatively linked to value categories. Adaptive connections end in (approximately) hemidiscs. See the text for details.
    ||
  • image p600fig16.36 The entorhinal-hippocampal system has properties of an ART spatial category learning system, with hippocampal place cells as the spatial categories. See the text for details.
    || Entorhinal-hippocampal interactions as an ART system. Hippocampal place cells as spatial categories. Angular head velocity-> head direction cells-> stripe cells- small scale 1D periodic code (ECIII) SOM-> grid cells- small scale 2D periodic code (ECII) SOM-> place cells- larger scale spatial map (DG/CA3)-> place cells (CA1)-> conjunctive-coding cells (EC V/VI)-> top-down feedback back to stripe cells- small scale 1D periodic code (ECIII). stripe cells- small scale 1D periodic code (ECIII)-> place cells (CA1).
  • image p613fig16.44 The main target position vector (TPV), difference vector (DV), and volitional GO computations in SOVEREIGN that bring together reactive and planned signals to control decision-making and action. See the text for details.
    || Reactive visual TPV (RVT), NETs (NETs), S-MV mismatch (SMVM), NETmv (NETmv), reactive visual TPV storage (RVTS), reactive DV1 (RD1), NET (NET), motivated what and where decisions (MWWD), Planned DV1 (PD1), tonic (Tonic), top-down readout mismatch (TDRM), Parvo gate (tonic) (PG), Orienting GOp offset (OGpO). RVT-> [NETs, RVTS], NETs-> [SMVM, NET], SMVM-> NET, NETmv-> SMVM, RVTS-> [NETs, RD1], NET-> [RD1, PD1, TDRM], MWWD-> PD1, PD1-> Tonic-> TDRMPG-> NETs, OGpO-> [NETmv, PD1].
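    An illustrative difference-vector loop in the spirit of the TPV/DV/GO computations above (a VITE-style kinematic sketch; the names and constants are assumptions, not SOVEREIGN's actual equations):

      import numpy as np

      def reach(tpv, ppv, go=1.0, dt=0.01, steps=1000):
          """Integrate the present position vector toward the target at a
          volitionally gated speed: dPPV/dt = GO * (TPV - PPV)."""
          ppv = ppv.astype(float).copy()
          for _ in range(steps):
              dv = tpv - ppv       # difference vector: movement still to go
              ppv += dt * go * dv  # the GO signal scales speed, not direction
          return ppv

      target = np.array([1.0, 2.0])
      print(reach(target, np.zeros(2), go=2.0))  # converges near [1, 2]; go=0 would freeze the limb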
  • Grossberg 2021 p229c2h0.60 SMART computer simulations demonstrate that a good enough match of a top-down expectation with a bottom-up feature pattern generates an attentive resonance during which the spikes of active cells synchronize in the gamma frequency range of 20-70 Hz (Figure 5.40). Many labs have reported a link between attention and gamma oscillations in the brain, including two articles published in 2001, one from the laboratory of Robert Desimone when he was at the National Institute of Mental Health in Bethesda (Fries, Reynolds, Rorie, Desimone 2001), and the other from the laboratory of Wolf Singer in Frankfurt (Engel, Fries, Singer 2001). ...
  • Grossberg 2021 p081c2h0.66 sub-section: How does one evolve a computational brain?
    The above discussion illustrates that no single step of theoretical derivation can derive a whole brain. One needs a method for deriving a brain in stages, or cycles, much as evolution has incrementally discovered ever more complex brains over many thousands of years. The following theoretical method has been successfully applied many times since I first used it in 1957. It embodies a kind of conceptual evolutionary process for deriving a brain.

    Because "brain evolution needs to achieve behavioural success", we need to start with data that embodiey indices of behavioral success. That is why, as illustrated in Figure 2.37 Modelling method and cycle, one starts with Behavioral Data from scores or hundreds of psychological experiments. These data are analyszed as the result of an individual adapting autonomously in real time to a changing world. This is the Arty of Modeling. It requires that one be able to infer from static data curves the dynamical processes that control individual behaviors occuring in real time. One of the hardest things that I teach to my students to do is "how to think in real time" to be able to carry out this speculative leap.

    Properly carried out, this analysis leads to the discovery of new Design Principles that are embodied by these behavioral processes. The Design Principles highlight the functional meaning of the data, and clarify how individual behaviors occurring in real time give rise to these static data curves.

    These principles are then converted into the simplest Mathematical Model using a method of minimal anatomies, which is a form of Occam's Razor.
  • p190 Howell: [neural microcircuits, modal architectures] used in ART -
    bottom-up filters | top-down expectations | purpose
    instar learning | outstar learning | p200fig05.13 Expectations focus attention: [in, out]star often used to learn the adaptive weights. Top-down expectations select, amplify, and synchronize expected patterns of critical features, while suppressing unexpected features
    LGN Lateral Geniculate Nucleus | V1 cortical area | p192fig05.06 focus attention on expected critical features
    EC-III entorhinal stripe cells | CA1 hippocampal place cells | p600fig16.36 entorhinal-hippocampal system has properties of an ART spatial category learning system, with hippocampal place cells as the spatial categories
    auditory orienting arousal | auditory category | p215fig05.28 How a mismatch between bottom-up and top-down patterns can trigger activation of the orienting system A and, with it, a burst of nonspecific arousal to the category level. ART Matching Rule: TD mismatch can suppress a part of F1 STM pattern; F2 is reset if degree of match < vigilance
    auditory stream with/without [silence, noise] gaps | perceived sound continues? | p419fig12.17 The auditory continuity illusion: backwards in time - how does a future sound let a past sound continue through noise? Resonance!
    visual perception, learning and recognition of visual object categories | motion perception, spatial representation and target tracking | p520fig14.02 pART predictive Adaptive Resonance Theory. Many adaptive synapses are bidirectional, thereby supporting synchronous resonant dynamics among multiple cortical regions. The output signals from the basal ganglia that regulate reinforcement learning and gating of multiple cortical areas are not shown. Colour code: red - cognitive-emotional dynamics; green - working memory dynamics; black - see [bottom-up, top-down] lists
    EEG with mismatch, arousal, and STM reset events | expected [P120, N200, P300] event-related potentials (ERPs) | p211fig05.21 Sequences of P120, N200, and P300 event-related potentials (ERPs) occur during oddball learning EEG experiments under conditions that ART predicted should occur during sequences of mismatch, arousal, and STM reset events, respectively
    Cognitive | Emotional | p541fig15.02 nSTART neurotrophic Spectrally Timed Adaptive Resonance Theory: Hippocampus can sustain a Cognitive-Emotional resonance that can support "the feeling of what happens" and knowing what event caused that feeling. Hippocampus enables adaptively timed learning that can bridge a trace conditioning gap, or other temporal gap between Conditioned Stimulus (CS) and Unconditioned Stimulus (US)

    background colours in the table signify: image p009fig01.06 Primacy gradient of activity stored in working memory within a recurrent shunting on-center off-surround network. Rehearsal is controlled by a nonspecific rehearsal wave and self-inhibitory feedback of the item that is currently being rehearsed. Green = excitatory, red = inhibitory
    || inputs? -> item and order WM storage -> competitive selection -> rehearsal wave -> outputs
  • image p077fig02.27 A schematic of the on-center off-surround network that occurs in the mudpuppy retina, including three main cell types: receptors, horizontal cells, and bipolar cells.
    || Mechanism: cooperative-competitive dynamics.
    On-center off-surround (Kuffler 1953) cat retina
    Subtractive lateral inhibition (Hartline, Ratliff 1956/7+) limulus retina.
    R receptor -> H horizontal -> B bipolar (Werblin, Dowling, etal 1969+) mudpuppy retina.
  • image p100fig03.15 A fuzzy band of possible initial grouping orientations allows grouping to get started. Cooperative-competitive feedback via a hierarchical resolution of uncertainty chooses a sharp final grouping that has the most evidence to support it.
    || before choice: transient; after choice: equilibrium
  • image p108fig03.28 The watercolor illusion of Baingio Pinna 1987 can be explained using spatial competition between like-oriented boundary signals. This occurs at what I have called the First Competitive Stage. This is one stage in the brain ...
  • image p146fig04.25 Networks of simple, complex, and hypercomplex cells can create end cuts as an example of hierarchical resolution of uncertainty. See the text for details.
    || How are end cuts created? (Grossberg 1984) Two stages of short-range competition. 1st stage: Simple cells -> complex cells -> hypercomplex - endstopped complex. First competitive stage- across position, same orientation; Second competitive stage- same position, across orientation. -> cooperation.
  • image p148fig04.26 End cuts are formed during neon color spreading in the same way that they are formed at line ends.
    || End cut during neon color spreading.
  • First competitive stage | Second competitive stage
    within orientation | across orientation
    across position | within position
    to generate end cuts.
  • image p149fig04.27 Bipole cells can form boundaries that interpolate end cuts, and use their cooperative-competitive interactions to choose the boundary groupings that have the most support from them.
    || Bipole cells: boundary completion. long-range cooperation & short-range inhibition: complete winning boundary groupings and suppress weaker boundaries.
  • image p161fig04.37 Kanizsa squares that form either collinearly to their inducers (left panel) or perpendicular to them (right panel) confirm predictions of the BCS boundary completion model.
    || Analog-sensitive boundary completion. contour strength vs Kanizsa square image. Increases with "support ratio" (Shipley, Kellman 1992). Inverted-U (Lesher, Mingolla 1993; cf Soriano, Spillmann, Bach 1994)(shifted gratings). p370h0.6 BCS = Boundary Contour System, FCS = Feature Contour System. p161c1h0.85 "... As predicted by the BCS, they found an Inverted-U in contour strength as a function of line density. ... This effect may be explained by the action of the short-range competition that occurs before the stage of long-range cooperative grouping by bipole cells (Figure 4.32). It is thus another example of the balance between cooperative and competitive mechanisms. ..."
  • image p198fig05.10 A competitive learning circuit learns to transform distributed feature patterns into selective responses of recognition categories.
    || Competitive learning and Self-Organized Maps (SOMs). input patterns -> feature level (F1) -> adaptive filter (T=ZS) ->
  • image p205fig05.18 How catastrophic forgetting can occur in a competitive learning or self-organizing map model due to basic properties of competition and associative learning.
    || Learning from pattern sequences: practicing a sequence of spatial patterns can recode all of them! When is learning stable? Input patterns cannot be too dense relative to the number of categories; either not too many distributed inputs relative to the number of categories, or not too many input clusters
  • image p226fig05.35 I had shown in 1976 how a competitive learning or self-organizing map model could undergo catastrophic forgetting if the input environment was sufficiently dense and nonstationary, as illustrated by Figure 5.18. Later work with Gail Carpenter showed how, if the ART Matching Rule was shut off, repeating just four input patterns in the correct order could also cause catastrophic forgetting by causing superset recoding, as illustrated in Figure 5.36.
    || Code instability input sequences. D C A; B A; B C = ; |D|<|B|<|C|; where |E| is the number of features in the set E. Any set of input vectors that satisfy the above conditions will lead to unstable coding if they are periodically presented in the order ABCAD and the top-down ART Matching Rule is shut off.
  • image p287fig07.07 Persistence increases with distance between a target and a masking stimulus due to weakening of the spatial competition in the first competitive stage of hypercomplex cells.
    || Persistence data and simulations. Persistence increases with distance between a target and a masking stimulus (Farrell, Pavel, Sperling 1990). There is less spatial competition from the masker to the target when they are more distant, hence the target is more persistent.
  • image p364fig10.13 The bottom-up adaptive filter, intracortical grouping circuit, and intercortical top-down attentional circuit all use the same competitive decision circuit between layers 6 and 4, called the attention-preattention interface, with which to select the featural patterns that will be processed.
    || Bottom-up filters and intracortical grouping feedback use the same 6-to-4 decision circuit, LGN-> Vx[6,4,2/3]. competitive decision circuit, modulatory on-center off-surround network. Top-down intercortical attention also uses the same 6-to-4 decision circuit!
  • image p437fig12.33 Item and Order working memory models explain free recall data, as well as many other psychological and neurobiological data, by simulating how temporal series of events are stored as evolving spatial patterns of activity at content-addressable item categories. The categories with the largest activities are rehearsed first, and self-inhibit their activity as they do so in order to prevent them from being rehearsed perseveratively. The laws whereby the items are stored in working memory obey basic design principles that enable list categories, or chunks, of sequences of stored items to be stably remembered.
    || Working memory models: item and order, or competitive queuing (Grossberg 1978; Houghton 1990; Page, Norris 1998). Event sequence in time stored as an evolving spatial pattern of activity. Primacy gradient of working memory activation stores correct temporal order at content-addressable cells. The maximally activated cell population is performed next when a rehearsal wave is turned on. Output signal from chosen cell population inhibits its own activity to prevent perseveration: inhibition of return. Iterate until entire sequence is performed.
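    A competitive-queuing sketch of this readout cycle (illustrative activities standing in for a stored primacy gradient):

      import numpy as np

      def rehearse(gradient):
          """Repeatedly perform the most active item, then self-inhibit it
          (inhibition of return), until the sequence is exhausted."""
          wm = np.array(gradient, dtype=float)
          order = []
          while wm.max() > 0:
              i = int(wm.argmax())  # rehearsal wave selects the maximal item
              order.append(i)
              wm[i] = 0.0           # output self-inhibits to prevent perseveration
          return order

      print(rehearse([0.9, 0.7, 0.5, 0.3]))  # [0, 1, 2, 3]: correct temporal order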
  • image p488fig13.12 (left column) How incentive motivational feedback amplifies activity of a sensory cortical cell population. (right column) A sensory cortical cell population whose activity is amplified by incentive motivational feedback can suppress the activities of less activated populations via self-normalizing recurrent competitive interactions.
    || Motivational feedback and blocking. (left) sensory input CS, STM activity without motivational feedback, STM activity with motivational feedback. (right) STM suppressed by competition, STM amplified by (+) feedback.
  • image p510fig13.39 Shunting competition and informational noise suppression in affective gated dipoles, plus back-propagating action potentials for teaching signals, enable the net normalized adaptive weights to be learned. They never saturate!
    || Learn net dipole output pattern. Opponent "decision" controls learning. Cf. competitive learning. Learning signal, opponent extinction.
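    A toy gated dipole illustrating the opponent rebound mechanism behind such circuits (parameter values are assumptions, not fitted):

      import numpy as np

      def gated_dipole(J=1.0, I=0.5, dt=0.01, steps=4000, off_at=2000):
          """ON channel gets phasic input J plus arousal I; OFF channel gets
          arousal only. Habituative transmitter gates deplete with use, so
          turning J off yields a transient antagonistic OFF rebound."""
          A, B = 0.5, 1.0          # transmitter recovery rate and ceiling
          z_on = z_off = 1.0       # habituative transmitter gates
          out = []
          for t in range(steps):
              on_in = I + (J if t < off_at else 0.0)
              off_in = I
              z_on += dt * (A * (B - z_on) - on_in * z_on)     # gate depletes with use
              z_off += dt * (A * (B - z_off) - off_in * z_off)
              out.append(on_in * z_on - off_in * z_off)        # opponent output
          return out

      out = gated_dipole()
      print(round(out[1999], 3), round(out[2010], 3))  # positive before offset, negative rebound after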
  • p001 Chapter 1 Overview - From Complementary Computing and Adaptive Resonance to conscious awareness
  • p250 Chapter 6 Conscious seeing and invariant recognition - Complementary cortical streams coordinate attention for seeing and recognition
  • p289 Chapter 8 How we see and recognize object motion - Visual form and motion perception obey complementary laws
  • p337 Chapter 9 Target tracking, navigation, and decision-making - Visual tracking and navigation obey complementary laws
  • image p029tbl01.01 Some pairs of complementary processing streams.
    ||
    visual boundary: interblob stream V1-V2-V4 | visual surface: blob stream V1-V2-V4
    visual boundary: interblob stream V1-V2-V4 | visual motion: magno stream V1-MT-MST
    WHAT stream | WHERE stream
    perception & recognition: inferotemporal & prefrontal areas | space & action: parietal & prefrontal areas
    object tracking: MT interbands & MSTv | optic flow navigation: MT+ bands & MSTd
    motor target position: motor & parietal cortex | volitional speed: basal ganglia
  • image p030tbl01.02 The What and Where cortical processing streams obey complementary laws. These laws enable the What stream to rapidly and stably learn invariant object categories without experiencing catastrophic forgetting, while the Where stream learns labile spatial and action representations to control actions that are aimed towards these objects.
    ||
    WHAT | WHERE
    spatially-invariant object learning and recognition | spatially-variant reaching and movement
    fast learning without catastrophic forgetting | continually update sensory-motor maps and gains
    IT InferoTemporal Cortex | PPC Posterior Parietal Cortex

    | What | Where
    matching | excitatory | inhibitory
    learning | match | mismatch
  • image p087fig03.01 A macrocircuit of key visual processes (in green) and the cortical areas in which they primarily occur (in red), from the retina to the Prefrontal Cortex (PFC), including both the What and Where cortical streams. The [bottom-up, horizontal, and top-down] interactions help each of these processes to overcome computationally complementary processing deficiencies that they would experience without them, and also to read out top-down expectations that help to stabilize learning while they focus attention on salient objects and positions.
    || Emerging unified theory of visual intelligence. [What, Where] streams. Bottom-up and top-down interactions overcome COMPLEMENTARY processing deficiencies.
  • image p094fig03.07 The processes of boundary completion and surface filling-in are computationally complementary.
    ||
    Boundary completion | Surface filling-in
    outward | inward
    oriented | unoriented
    insensitive to direction-of-contrast | sensitive to direction-of-contrast
  • image p174fig04.51 The same feedback circuit that ensures complementary consistency between boundaries and surfaces also, automatically, initiates figure-ground separation! See the text for details.
    || before feedback: [V1 -> V2 pale stripe -> V2 thin stripe, "attention pointers" (Cavanagh etal 2010)]; after feedback: [V1 + V2 thin stripe] -> V2 pale stripe via contrast-sensitive [excitation, inhibition] for depths [1, 2] -> object recognition
  • image p176fig04.53 The on-center off-surround network within position and across depth helps to explain why brighter Kanizsa squares look closer.
    || inhibition vs. depth. p176c1h0.25 "... to qualitatively understand how this example of proximity-luminance covariance works. It follows directly from the boundary pruning by surface contour feedback signals (Figure 4.51) that achieves complementary consistency and initiates figure-ground perception. ...". p176c1h0.45 "... these inhibitory signals are part of an off-surround network whose strength decreases as the depth difference increases between the surface that generates the signal and its recipient boundaries. ...". p176c1h0.8 "... Within FACADE theory, the perceived depth of a surface is controlled by the boundaries that act as its filling-in generators and barriers (Figure 3.22), since these boundaries select the depth-selective FIDOs within which filling-in can occur, and thereby achieve surface capture. These boundaries, in turn, are themselves strengthened after surface-to-boundary contour feedback eliminates redundant boundaries that cannot support successful filling-in (Figure 4.51). These surface contour feedback signals have precisely the properties that are needed to explain why brighter Kanizsa squares look closer! ..."
  • image p211fig05.20 The PN and N200 event-related potentials are computationally complementary events that are computed within the attentional and orienting systems.
    || PN and N200 are complementary waves. PN [top-down, conditionable, specific] match; N200 [bottom-up, unconditionable, nonspecific] mismatch
  • image p267fig06.14 Feedback from object surfaces to object boundaries uses surface contours. This feedback assures complementary consistency and enables figure-ground separation. A corollary discharge of the surface contours can be used to compute salient object feature positions.
    || Perceptual consistency and figure-ground separation.
  • image p314fig08.34 The VISTARS model for visually-based spatial navigation. It uses the Motion BCS as a front end and feeds it output signals into two computationally complementary cortical processing streams for computing optic flow and target tracking information.
    || VISTARS navigation model (Browning, Grossberg, Mingolla 2009). Use FORMOTION model as front end for higher level navigational circuits: input natural image sequences -> estimate heading (MT+)-MSTd -> additive processing -> estimate object position (MT-)-MSTv direction and speed subtractive processing -> Complementary Computing. [optic flow navigation, object tracking]
  • image p523fig14.03 (a) The MOTIVATOR neural model generalizes CogEM by also including the basal ganglia. It can hereby explain and simulate complementary functions of the amygdala and basal ganglia (SNc) during conditioning and learned performance. The basal ganglia generate Now Print signals in response to unexpected rewards. These signals modulate learning of new associations in many brain regions. The amygdala supports motivated attention to trigger actions that are expected to occur in response to conditioned or unconditioned stimuli. Object Categories represent visual or gustatory inputs in anterior inferotemporal (ITA) and rhinal (RHIN) cortices, respectively. Value Categories represent the value of anticipated outcomes on the basis of hunger and satiety inputs, in amygdala (AMYG) and lateral hypothalamus (LH). Object-Value Categories resolve the value of competing perceptual stimuli in medial (MORB) and lateral (ORB) orbitofrontal cortex. The Reward Expectation Filter detects the omission or delivery of rewards using a circuit that spans ventral striatum (VS), ventral pallidum (VP), striosomal delay (SD) cells in the ventral striatum, the pedunculopontine nucleus (PPTN) and midbrain dopaminergic neurons of the substantia nigra pars compacta/ventral tegmental area (SNc/VTA). The circuit that processes CS-related visual information (ITA, AMYG, ORB) operates in parallel with a circuit that processes US-related visual and gustatory information (RHIN, AMYG, MORB). (b) Reciprocal adaptive connections between hypothalamus and amygdala enable amygdala cells to become learned value categories. The bottom region represents hypothalamic cells, which receive converging taste and metabolite inputs whereby they become taste-drive cells. Bottom-up signals from activity patterns across these cells activate competing value categories, or US Value Representations, in the amygdala. A winning value category learns to respond selectively to specific combinations of taste-drive activity patterns and sends adaptive top-down priming signals back to the taste-drive cells that activated it. CS-activated conditioned reinforcer signals are also associatively linked to value categories. Adaptive connections end in (approximately) hemidiscs. See the text for details.
    ||
  • image p548fig15.16 Homologous recognition learning and reinforcement learning macrocircuits enable adaptively timed conditioning in the reinforcement learning circuit to increase inhibition of the orienting system at times when a mismatch in the recognition system would have reduced inhibition of it.
    || Homolog between ART and CogEM model, complementary systems. [Recognition, Reinforcement] learning vs [Attentional, Orienting] system. Reinforcement: timing, drive representation.
  • p353 Chapter 10 Laminar computing by cerebral cortex - Towards a unified theory of biological and artificial intelligence
  • image p030fig01.20 A schematic cross-section of a slice of laminar neocortex whose cells are organized in a characteristic way in six layers, which themselves may be organized into distinct sublaminae. The computational paradigm of Laminar Computing attempts to show how different parts of neocortex can represent and control very different kinds of behavior - including vision, speech, and cognition - using specializations of the same canonical laminar cortical design.
    || Projection fibres: Cortico[spinal, bulbar, pontine, striate, reticular, etc]; Thalamocortical fibres; Diffuse cortical afferent fibres: [nonspecific thalamocortical, Cholinergic, Monoaminergic]; Corticocortical efferents; Projection [cell, fibre]; Corticocortical efferent terminals.
  • image p141fig04.19 A laminar cortical circuit for computing binocular disparities in layer 3B of V1 at binocular simple cells. These cells add positionally disparate inputs from like-polarized monocular simple cells (layer 4 of V1). Binocular simple cells at each position that are sensitive to opposite polarities then add their outputs at complex cells in layer 2/3. Chapter 10 will explain how these laminar circuits work in greater detail.
    || Laminar cortical circuit for complex cells. [left, right] eye.
    V1 layer | description
    2/3A | complex cells
    3B | binocular simple cells
    4 | monocular simple cells
  • image p163fig04.39 A schematic of the LAMINART model that explains key aspects of laminar visual cortical anatomy and dynamics. LGN -> V1 [6, 4, 2/3] -> V2 [6, 4, 2/3]
    || p163c1h0.6 "... The first article about laminar computing ... proposed how the laminar cortical model could process 2D pictures using bottom-up filtering and horizontal bipole grouping interactions (Grossberg, Mingolla, Ross 1997). In 1999, I was able to extend the model to also include top-down circuits for expectation and attention (Grossberg 1999)(right panel). Such a synthesis of laminar bottom-up, horizontal, and top-down circuits is characteristic of the cerebral cortex (left panel). I called it LAMINART because it began to show how properties of Adaptive Resonance Theory, or ART, notably the ART prediction about how top-down expectations and attention work, are realized by identical cortical cells and circuits. You can immediately see from the schematic laminar circuit diagram ... (right panel) that circuits in V2 seem to repeat circuits in V1, albeit with a larger spatial scale, despite the fact that V1 and V2 carry out different functions. How this anatomical similarity can coexist with functional diversity will be clarified in subsequent sections and chapters. It enables different kinds of biological intelligence to communicate seamlessly while carrying out their different psychological functions. ..."
  • image p174fig04.52 An example of how the 3D LAMINART model can transform the two monocular images of the random dot stereogram in the top row into the three depth-separated surface representations in the bottom row.
    || Stereogram surface percepts: surface lightnesses are segregated in depth (Fang and Grossberg 2009). [left, right] inputs, [far, fixation, near] planes. Contrast with algorithms that just compute disparity matches and let computer code build the surface, eg (Marr, Poggio etal 1974).
  • image p182fig04.58 LAMINART model processing stages that are sufficient to explain many percepts of transparency, including those summarized in Figure 4.57.
    || [left, right] eye, [LGN, V1 [6, 4, 3B, 2/3 A], V2 [4, 2/3]], [mo, bi]nocular cart [simple, complex] cells, [excita, inhibi]tory cart [connection, cell]s.
  • image p230fig05.38 The Synchronous Matching ART, or SMART, model includes spiking neurons in a laminar cortical hierarchy. I developed SMART with my PhD student Massimiliano Versace. By unlumping LAMINART to include spiking neurons, finer details of neurodynamics, such as the existence of faster gamma oscillations during good enough matches, and slower beta oscillations during bad enough mismatches, could be shown as emergent properties of network interactions.
    || Second order thalamus -> specific thalamic nucleus -> thalamic reticular nucleus -> neocortical laminar circuit [6II, 6I, 5, 2/3, 1] -> Higher order cortex. Similar for First order thalamus -> First order cortex, with interconnection to Second order, nonspecific thalamic nucleus
  • image p231fig05.39 The SMART hypothesis testing and learning cycle predicts that vigilance increases when a mismatch in subcortical regions like the nonspecific thalamus activates the nucleus basalis of Meynert which, in turn, broadcasts a burst of the neurotransmitter acetylcholine, or ACh, to deeper cortical layers. Due to the way in which LAMINART proposes that cortical matching and mismatching occurs, this ACh burst can increase vigilance and thereby trigger a memory search. See the text for details.
    || [BU input, [, non]specific thalamic nucleus, thalamic reticular nucleus, neocortical laminar circuit] cart [Arousal, Reset, Search, Vigilance]
  • image p232fig05.41 (a)-(c). The sequence of interlaminar events that SMART predicts during a mismatch reset. (d) Some of the compatible neurophysiological data.
    || Mismatch causes layer 5 dendritic spikes that trigger reset. (a) Arousal causes an increase in nonspecific thalamic nuclei firing rate and layer 5 dendritic and later somatic spikes (Larkum and Zhu 2002; Williams and Stuart 1999) (b) Layer 5 spikes reach layer 4 via layer 6i and inhibitory neurons (Lund and Boothe 1975; Gilbert and Wiesel 1979) (c) Habituative neurotransmitters in layer 6i shift the balance of active cells in layer 4 (Grossberg 1972, 1976) (d) Dendritic stimulation fires layer 5 (Larkum and Zhu 2002): stimulation of apical dendrites by the nonspecific thalamus
  • image p246fig05.48 Microcircuits of the LAMINART model that I developed with Rajeev Raizada. See the text for details of how they integrate bottom-up adaptive filtering, horizontal bipole grouping, and top-down attentional matching that satisfies the ART Matching Rule.
    ||
  • image p248fig05.49 This circuit of the LAMINART model helps to explain properties of Up and Down states during slow wave sleep, and how disturbances in ACh dynamics can disrupt them.
    ||
  • image p356fig10.03 Laminar computing achieves at least three basic properties of visual processing that have analogs in all biologically intelligent behaviors. These properties may be found in all cortical circuits in specialized form.
    || What does Laminar Computing achieve? 1. Self-stabilizing development and learning; 2. Seamless fusion of a) pre-attentive automatic bottom-up processing, b) attentive task-selective top-down processing; 3. Analog coherence: Solution of Binding Problem for perceptual grouping without loss of analog sensitivity. Even the earliest visual cortical stages carry out active adaptive information processing: [learn, group, attention]ing
  • image p357fig10.04 Laminar Computing achieves its properties by computing in a new way that synthesizes the best properties of feedforward and feedback interactions, analog and digital computations, and preattentive and attentive learning. The property of analog coherence enables coherent groupings and decisions to form without losing sensitivity to the amount of evidence that supports them.
    || Laminar Computing: a new way to compute. 1. Feedforward and feedback: a) Fast feedforward processing when data are unambiguous (eg Thorpe etal), b) slower feedback chooses among ambiguous alternatives [self-normalizing property, real-time probability theory], c) A self-organizing system that trades certainty against speed: Goes beyond Bayesian models! 2. Analog and Digital: Analog Coherence combines the stability of digital with the sensitivity of analog. 3. Preattentive and Attentive Learning: Reconciles the differences of (eg) Helmholtz and Kanizsa: "A preattentive grouping is its own attentional prime!"
  • image p362fig10.11 Feedback between layer 2/3 to the layer 6-to-4-to-2/3 feedback loop chooses the strongest grouping in cases where there is more than one. If only one grouping exists, then the circuit can function very quickly in a feedforward manner. When multiple groupings exist, the cortex "runs as fast as it can" to select the one with the most evidence to support it using the self-normalizing inhibition in the layer 6-to-4 off-surround.
    || How is the final grouping selected? Folded feedback LGN-> 6-> 4-> 2/3. 1. Layer 2/3 groupings feed back into 6-to-4 on-center off-surround: a) direct layer 2/3 -to-6 path; b) can also go via layer 5 (Blasdel etal 1985; Kisvarday etal 1989). 2. Strongest grouping enhanced by its on-center. 3. Inputs to weaker groupings suppressed by off-surround. 4. Interlaminar feedback creates functional columns. Activities of conflicting groupings are reduced by self-normalizing inhibition, slowing processing; intracortical feedback selects and contrast-enhances the winning grouping, speeding processing.
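    A sketch of this "run as fast as you can" selection, with an assumed decision margin standing in for analog coherence (illustrative, not the laminar model's actual dynamics):

      import numpy as np

      def select_grouping(evidence, margin=0.2, max_steps=100):
          """One feedforward pass suffices when a grouping clearly dominates;
          otherwise iterate self-normalizing competition until one wins."""
          x = np.array(evidence, dtype=float)
          x /= x.sum()                        # self-normalizing inhibition
          top2 = lambda v: np.sort(v)[-1] - np.sort(v)[-2]
          steps = 0
          while top2(x) < margin and steps < max_steps:
              x = x ** 2                      # recurrent contrast enhancement
              x /= x.sum()
              steps += 1
          return int(x.argmax()), steps

      print(select_grouping([0.7, 0.1, 0.2]))    # unambiguous: (0, 0) - purely feedforward
      print(select_grouping([0.45, 0.4, 0.15]))  # ambiguous: (0, 2) - feedback cycles needed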
  • image p363fig10.12 The same laminar circuit design repeats in V1 and V2, albeit with specializations that include longer horizontal grouping axons and figure-ground separation interactions.
    || V2 repeats V1 circuitry at larger spatial scale, LGN-> V1[6,4,2/3]-> V2[6,4,2/3]. V2 layer 2/3 horizontal axons longer-range than in V1 (Amir etal 1993). Therefore, longer-range groupings can form in V2 (Von der Heydt etal 1984)
  • image p375fig11.07 The 3D LAMINART model uses both monocular and binocular simple cells to binocularly fuse like image contrasts. The remainder of the model generates 3D boundary and surface representations of multiple kinds of experiments as well as of natural scenes.
    || 3D LAMINART model (Cao, Grossberg 2005). [left, right] eye cart [LGN, V1 blob [4->3B->2/3A], V2 thin stripe [4->2/3A], V4]. V1 blob [V1-4 monocular, V1 interior binocular] simple cells. [complex, simple, inhibitory] cells, on-center off-surround
  • image p376fig11.10 The 3D LAMINART model shows how the disparity filter can be integrated into the circuit that completes 3D boundary representations using bipole grouping cells. It also explains how surface contours can strengthen boundaries that succeed in generating closed filling-in domains.
    || 3D LAMINART model (Cao, Grossberg 2005). [left, right] eye cart [LGN, V1 blob [4->3B->2/3A] surface contour, V2 thin stripe (monocular surface) [4->2/3A], V2 interior [disynaptic inhibitory interneurons, bipole grouping cells, disparity filter, V4 binocular surface]. [complex, simple, inhibitory] cells, on-center off-surround
  • image p378fig11.12 How monocular and binocular information are combined in V1 and V2 in the 3D LAMINART model.
    || Model utilizes monocular information. [left, right] eye cart V1 [4 monocular simple, 3B binocular simple, 2/3A [mo, bi]nocular complex] cells, V2-4 binocular complex cells. black = monocular cells, blue = binocular cells. In V2, monocular inputs add to binocular inputs along the line of sight and contribute to depth perception.
  • image p379fig11.13 How the 3D LAMINART model explains DaVinci stereopsis. All the stages of boundary and surface formation are color coded to clarify their explanation. Although each mechanism is very simple, when all of them act together, the correct depthful surface representation is generated. See the text for details.
    || DaVinci stereopsis (Nakayama, Shimojo 1990). An emergent property of the previous simple mechanisms working together. [V1 binocular boundaries -> V2 initial boundaries -> V2 final boundaries -> V4 surface] pair [Binocular match: boundaries of thick bar -> Add monocular boundaries along lines-of-sight -> Line-of-sight inhibition kills weaker vertical boundaries -> 3D surface percept not just a disparity match!] pair [Binocular match: right edge of thin and thick bars -> Strongest boundaries: binocular and monocular boundaries add -> Vertical boundaries from monocular left edge of thin bar survive -> Filling-in contained by connected boundaries]. cart [very near, near, fixation plane, far, very far]
  • image p380fig11.14 The model explanation of DaVinci stereopsis when the input stimuli have opposite contrast polarities.
    || Polarity-reversed Da Vinci stereopsis (Nakayama, Shimojo 1990). Same explanation! (... as Figure 11.13 ...) [V1 binocular boundaries -> V2 initial boundaries -> V2 final boundaries -> V4 surface] cart [very near, near, fixation plane, far, very far]
  • image p395fig11.34 A comparison of the properties of other rivalry models with those of the 3D LAMINART model (surrounded by red border). Significantly, only 3D LAMINART explains both stable vision and rivalry (green border).
    || Comparison of rivalry models
  • image p401fig11.41 The 3D LAMINART model proposes how angle cells and disparity-gradient interact through learning to generate 3D representations of slanted objects.
    || 3D LAMINART model. [LGN, V1, V2, V4] Four key additions: 1. Angle cells - tuned to various angles; 2. Disparity-gradient cells - tuned to disparity gradients in the image; 3. Weights from angle cells to disparity-gradient cells - learned while viewing 3D images; 4. Collinear grouping between [angle, disparity-gradient] cells - disambiguates ambiguous groupings.
  • image p402fig11.44 3D scenic reconstruction of the image in Figure 11.43 by the 3D LAMINART model.
    || Disparity [5, 6, 8, 10, 11, 14]: images of objects in common depth planes
  • image p436fig12.30 The conscious ARTWORD, or cARTWORD, laminar cortical speech model simulates how future context can disambiguate noisy past speech sounds in such a way that the completed percept is consciously heard to proceed from past to future as a feature-item-list resonant wave propagates through time.
    || cARTWORD: Laminar cortical model macrocircuit (Grossberg, Kazerounian 2011). Simulates PHONEMIC RESTORATION. Acoustic [feature, item] levels feed a Cognitive Working Memory (processed item sequences); connection types include [excitatory, inhibitory, habituative, adaptive filter with depletable synapse].
  • image p444fig12.42 The LIST PARSE laminar cortical model of working memory and list chunking that I published with Lance Pearson in 2008 simulated the Averbeck etal data in Figure 12.41, as in the left column of the figure. It also simulated cognitive data about working memory storage by human subjects. See the text for details.
    || LIST PARSE: Laminar cortical model of working memory and list chunking (Grossberg, Pearson 2008). Simulates data about: [immediate, delayed, continuous] distractor free recall; immediate serial recall; and variable-speed sequential performance of motor acts. [velocity, acceleration] vs time (ms) from recall cue.
  • image p445fig12.43 The LIST PARSE laminar cortical Cognitive Working Memory circuit, that is proposed to occur in ventrolateral prefrontal cortex, is homologous to the LAMINART circuit that models aspects of how visual cortex sees. The Motor Working Memory, VITE Trajectory Generator, and Variable-Rate Volitional Control circuits model how other brain regions, including dorsolateral prefrontal cortex, motor cortex, cerebellum, and basal ganglia, interact with the Cognitive Working Memory to control working memory storage and variable-rate performance of item sequences.
    || List parse circuit diagram. Connectivity convention. sequence chunks [<- BU filter, TD expectation ->] working memory. Working memory and sequence chunking circuit is homologous to visual LAMINART circuit!
  • image p498fig13.22 (left column, top row) Adaptive filtering and conditioned arousal are both needed to regulate what cues can learn to activate particular space-time patterns. These developments lead inexorably to basic cognitive abilities, as embodied in the 3D LAMINART models for 3D vision and figure-ground perception (Chapter 11) and the 3D ARTSCAN SEARCH model for invariant object learning, recognition, and 3D search (Chapter 6). (right column, top row) Conditioned arousal enables only emotionally important cues to activate a motivationally relevant space-time pattern. (bottom row) Conditioned arousal and drive representations arise naturally from the unlumping of avalanche circuits to make them selective to motivationally important cues. The MOTIVATOR model is a natural outcome of this unlumping process (this chapter).
    || (top) Adaptive filtering and Conditioned arousal. Towards Cognition: need to filter inputs to the command cell. Towards Emotion: important signals turn arousal ON and OFF. (bottom) Conditioned arousal and Drive representations. Competition between conditioned arousal sources at drive representations, eg amygdala.
  • image p038fig01.25 The ART Matching Rule stabilizes real time learning using a [top-down, modulatory on-center, off-surround] network. Object attention is realized by such a network. See text for additional discussion.
    || ART Matching Rule [volition, categories, features]. [one, two] against one.
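    A minimal sketch of the "two against one" arithmetic of the ART Matching Rule, with an assumed nonspecific off-surround scaled by top-down activity (illustrative, not the circuit's real equations):

      import numpy as np

      def art_match(bottom_up, top_down):
          """Matched features (bottom-up plus top-down: 'two') survive the
          top-down off-surround ('one'); top-down alone is only modulatory."""
          net = bottom_up + top_down - top_down.max()  # off-surround driven by TD signal
          return np.maximum(net, 0.0)

      bu = np.array([1.0, 1.0, 0.0])
      td = np.array([1.0, 0.0, 1.0])
      print(art_match(bu, td))           # [1, 0, 0]: matched feature selected, unmatched suppressed
      print(art_match(np.zeros(3), td))  # [0, 0, 0]: top-down alone cannot fire features
      print(art_match(bu, np.zeros(3)))  # [1, 1, 0]: bottom-up alone can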
  • image p163fig04.39 A schematic of the LAMINART model that explains key aspects of laminar visual cortical anatomy and dynamics. LGN -> V1 [6, 4, 2/3] -> V2 [6, 4, 2/3]
    || p163c1h0.6 "... The first article about laminar computing ... proposed how the laminar cortical model could process 2D pictures using bottom-up filtering and horizontal bipole grouping interactions (Grossberg, Mingolla, Ross 1997). In 1999, I was able to extend the model to also include top-down circuits for expectation and attention (Grossberg 1999)(right panel). Such a synthesis of laminar bottom-up, horizontal, and top-down circuits is characteristic of the cerebral cortex (left panel). I called it LAMINART because it began to show how properties of Adaptive Resonance Theory, or ART, notably the ART prediction about how top-down expectations and attention work, are realized by identical cortical cells and circuits. You can immediately see from the schematic laminar circuit diagram ... (right panel) that circuits in V2 seem to repeat circuits in V1, albeit with a larger spatial scale, despite the fact that V1 and V2 carry out different functions. How this anatomical similarity can coexist with functional diversity will be clarified in subsequent sections and chapters. It enables different kinds of biological intelligence to communicate seamlessly while carrying out their different psychological functions. ..."
  • image p192fig05.06 Bottom-up and top-down circuits between the LGN and cortical area V1. The top-down circuits obey the ART Matching Rule for matching with bottom-up input patterns and focussing attention on expected critical features.
    || Model V1-LGN circuits, version [1, 2]. retina -> LGN relay cells -> interneurons -> cortex [simple, endstopped] cells -> cortex complex cells
  • image p200fig05.13 Instar and outstar learning are often used to learn the adaptive weights in the bottom-up filters and top-down expectations that occur in ART. The ART Matching Rule for object attention enables top-down expectations to select, amplify, and synchronize expected patterns of critical features, while suppressing unexpected features.
    || Expectations focus attention: feature pattern (STM), Bottom-Up adaptive filter (LTM), Category (STM), competition, Top-Down expectation (LTM); ART Matching Rule: STM before top-down matching, STM after top-down matching (attention!)
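    reader note: instar and outstar obey the same gated "track the sampled pattern" law and differ in signal direction (bottom-up filter vs top-down expectation). A minimal sketch; the learning rate and patterns are illustrative assumptions:

      import numpy as np

      def gated_learning(w, pattern, gate, lr=0.2):
          # Weights move toward 'pattern' only while 'gate' is active.
          # Instar: gate = postsynaptic category activity, pattern = the
          # STM feature input (LTM of the bottom-up adaptive filter).
          # Outstar: gate = presynaptic category activity, pattern = the
          # feature pattern read out as a top-down expectation.
          return w + lr * gate * (pattern - w)

      x = np.array([1.0, 0.5, 0.0, 0.0])   # attended STM feature pattern
      w_bu = np.zeros(4)                   # instar (bottom-up filter) weights
      w_td = np.zeros(4)                   # outstar (top-down expectation) weights
      for _ in range(50):
          w_bu = gated_learning(w_bu, x, gate=1.0)
          w_td = gated_learning(w_td, x, gate=1.0)
      print(w_bu.round(3), w_td.round(3))  # both converge toward x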
  • image p207fig05.19 The ART hypothesis testing and learning cycle. See the text for details about how the attentional system and orienting system interact in order to incorporate learning of novel categories into the corpus of already learned categories without causing catastrophic forgetting.
    ||
  • image p211fig05.21 Sequences of P120, N200, and P300 event-related potentials occur during oddball learning EEG experiments under conditions that ART predicted should occur during sequences of mismatch, arousal, and STM reset events, respectively.
    || ERP support for mismatch-mediated reset: event-related potentials: human scalp potentials. ART predicted correlated sequences of P120-N200-P300 Event Related Potentials during oddball learning. P120 mismatch; N200 arousal/novelty; P300 STM reset. Confirmed in (Banquet and Grossberg 1987)
  • image p215fig05.28 How a mismatch between bottom-up and top-down input patterns can trigger activation of the orienting system A and, with it, a burst of nonspecific arousal to the category level.
    || Mismatch -> inhibition -> arousal -> reset. BU input orienting arousal, BU+TD mismatch arousal and reset. ART Matching Rule: TD mismatch can suppress a part of F1 STM pattern, F2 is reset if degree of match < vigilance
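    reader note: in the simplest binary ART the reset decision reduces to a one-line vigilance test; a toy sketch, with illustrative patterns and vigilance values:

      import numpy as np

      def resonates(bottom_up, prototype, vigilance=0.8):
          # Match = fraction of the bottom-up input preserved by its
          # intersection with the active category's prototype. If it
          # falls below vigilance, the orienting system fires, arousal
          # resets F2, and memory search begins.
          I = np.asarray(bottom_up, dtype=bool)
          w = np.asarray(prototype, dtype=bool)
          match_ratio = (I & w).sum() / max(I.sum(), 1)
          return match_ratio >= vigilance

      print(resonates([1, 1, 1, 1], [1, 1, 1, 0], vigilance=0.8))  # False: reset
      print(resonates([1, 1, 1, 1], [1, 1, 1, 0], vigilance=0.7))  # True: resonance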
  • image p221fig05.31 A system like Fuzzy ARTMAP can learn to associate learned categories in one ART network with learned categories in a second ART network. Because both bottom-up and top-down interactions occur in both networks, a bottom-up input pattern to the first ART network can learn to generate a top-down output pattern from the second ART network.
    || Fuzzy ARTMAP. Match tracking realizes minimax learning principle: vigilance increases to just above the match ratio of prototype / exemplar, thereby triggering search
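    reader note: a toy sketch of match tracking (fuzzy-ART style, with fuzzy AND = componentwise min); the epsilon and baseline values are illustrative assumptions:

      def match_track(input_vec, chosen_prototype, baseline_vigilance, eps=1e-3):
          # After a predictive error at the map field, vigilance rises to
          # just above the current match ratio |I ^ w| / |I|, so the
          # chosen category now fails the vigilance test and search for a
          # better-matching category is triggered.
          fuzzy_and = [min(i, w) for i, w in zip(input_vec, chosen_prototype)]
          match_ratio = sum(fuzzy_and) / max(sum(input_vec), 1e-9)
          return max(baseline_vigilance, match_ratio + eps)

      print(match_track([1, 1, 0, 0], [1, 0.5, 0, 0], baseline_vigilance=0.6))  # -> 0.751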
  • image p226fig05.35 I had shown in 1976 how a competitive learning or self-organizing map model could undergo catastrophic forgetting if the input environment was sufficiently dense and nonstationary, as illustrated by Figure 5.18. Later work with Gail Carpenter showed how, if the ART Matching Rule was shut off, repeating just four input patterns in the correct order could also cause catastrophic forgetting by causing superset recoding, as illustrated in Figure 5.36.
    || Code instability input sequences. D C A; B A; B C = ; |D|<|B|<|C|; where |E| is the number of features in the set E. Any set of input vectors that satisfy the above conditions will lead to unstable coding if they are periodically presented in the order ABCAD and the top-down ART Matching Rule is shut off.
  • image p226fig05.36 Column (a) shows catastrophic forgetting when the ART Matching Rule is not operative. It is due to superset recoding. Column (b) shows how category learning quickly stabilizes when the ART Matching Rule is restored.
    || Stable and unstable learning, superset recoding
  • image p228fig05.37 A macrocircuit of the neurotrophic Spectrally Timed ART, or nSTART, model. I developed nSTART with my PhD student Daniel Franklin. It proposes how adaptively timed learning in the hippocampus, bolstered by Brain Derived Neurotrophic Factor, or BDNF, helps to ensure normal memory consolidation.
    || habituative gates, CS, US, Thalamus (sensory cortex, category learning, conditioned reinforcer learning, adaptively timed learning and BDNF), Amygdala (incentive motivation learning), Hippocampus (BDNF), Prefrontal Cortex (attention), Pontine nuclei, Cerebellum (adaptively timed motor learning)
  • image p230fig05.38 The Synchronous Matching ART, or SMART, model includes spiking neurons in a laminar cortical hierarchy. I developed SMART with my PhD student Massimiliano Versace. By unlumping LAMINART to include spiking neurons, finer details of neurodynamics, such as the existence of faster gamma oscillations during good enough matches, and slower beta oscillations during bad enough mismatches, could be shown as emergent properties of network interactions.
    || Second order thalamus -> specific thalamic nucleus -> Thalamic reticular nucleus -> neocortical laminar circuit [6ll, 6l, 5, 2/3, 1] -> Higher order cortex. Similar for First order thalamus -> First order cortex, with interconnection to Second order, nonspecific thalamic nucleus
  • image p240fig05.44 When an algebraic exemplar model is realized using only local computations, it starts looking like an ART prototype model.
    || How does the model know which exemplars are in category A? BU-TD learning. How does a NOVEL test item access category A?
  • image p241fig05.45 The 5-4 category structure is one example of how an ART network learns the same kinds of categories as human learners. See the text for details.
    || 5-4 Category structure. A1-A5: closer to the (1 1 1 1) prototype; B1-B4: closer to the (0 0 0 0) prototype
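    reader note: with fast learning, a fuzzy ART prototype is the fuzzy AND (componentwise min) of the exemplars its category codes, ie their shared critical features. A minimal sketch; the exemplars below are placeholders, not the actual 5-4 stimuli:

      import numpy as np

      exemplars = np.array([[1, 1, 1, 0],   # hypothetical items coded by
                            [1, 0, 1, 1],   # one category, each close to
                            [1, 1, 0, 1]])  # the (1 1 1 1) prototype
      prototype = np.ones(4)
      for I in exemplars:
          prototype = np.minimum(prototype, I)  # fast-learning fuzzy AND
      print(prototype)  # [1. 0. 0. 0.]: only the shared feature survives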
  • image p246fig05.48 Microcircuits of the LAMINART model that I developed with Rajeev Raizada. See the text for details of how they integrate bottom-up adaptive filtering, horizontal bipole grouping, and top-down attentional matching that satisfies the ART Matching Rule.
    ||
  • image p254fig06.03 These interactions of the ARTSCAN Search model enable it to learn to recognize and name invariant object categories. Interactions between spatial attention in the Where cortical stream, via surface-shroud resonances, and object attention in the What cortical stream, that obeys the ART Matching Rule, coordinate these learning, recognition, and naming processes.
    || Retinal image -> On & OFF cell contrast normalization (retina/LGN) -> polarity [, in]sensitive contrast enhancement (V1) -> object [boundary (V2), surface (V2/V4)] -> surface contour (V2); What stream categories: volition control (BG). object boundary (V2) <-> view (ITp) <-> view integrator (ITp) <-> object (ITa) <-> [object-value (ORB), value (Amyg)] <-> name (PFC)
  • image p315fig08.35 The output signals from the directional grouping network obey the ART Matching Rule. They thereby select consistent motion directional signals while suppressing inconsistent ones, and do not distort the speed estimates that the spared cells code. The aperture problem is hereby solved by the same mechanism that dynamically stabilizes the learning of directional grouping cells.
    || How to select correct direction and preserve speed estimates? Prediction: Feedback from MSTv to MT- obeys ART Matching Rule; Top-down, modulatory on-center, off-surround network (Grossberg 1976, 1980; Carpenter, Grossberg 1987, 1991); Explains how directional grouping network can stably develop and how top-down directional attention can work. (Cavanagh 1992; Goner etal 1986; Sekuler, Ball 1977; Stelmach etal 1994). Directional grouping network (MSTv) <-> Directional long-range filter (MT). Modulatory on-center selects chosen direction and preserves speed. Off-surround inhibits incompatible directions.
  • image p316fig08.36 How the directional grouping network, notably properties of the ART Matching Rule, enables a small set of amplified feature tracking signals at the ends of a line to select consistent directions in the line interior, while suppressing inconsistent directions.
    || Motion capture by directional grouping feedback. Directional grouping network (MSTv) <-> Directional long-range filter (MT). It takes longer to capture ambiguous motion signals in the line interior as the length of the line increases cf (Castet etal 1993)
  • image p354fig10.01 The laminar cortical circuit that realizes how we pay attention to an object sends signals from layer 6 of a higher cortical level to layer 6 of a lower cortical level and then back up to layer 4. This "folded feedback" pathway realizes a top-down, modulatory on-center, off-surround circuit that embodies the ART Matching Rule.
    || Top-down attention and folded feedback. Attentional signals also feed back into 6-to-4 on-center off-surround. 1-to-5-to-6 feedback path: Macaque (Lund, Booth 1975) cat (Gilbert, Wiesel 1979). V2-to-V1 feedback is on-center off-surround and affects layer 6 of V1 the most (Bullier etal 1996; Sandell, Schiller 1982). Attended stimuli enhanced, ignored stimuli suppressed. This circuit supports the predicted ART Matching Rule! [LGN, V[1,2][6->1]]
  • image p360fig10.08 The bottom-up on-center off-surround from LGN-to-6-to-4 has a modulatory on-center because of its role in realizing the ART Matching Rule and, with it, the ability of the cortex to dynamically stabilize its learned memories.
    || Modulation of priming by 6-to-4 on-center (Stratford etal 1996; Callaway 1998). On-center 6-to-4 excitation is inhibited down to being modulatory (priming, subthreshold). On-center 6-to-4 excitation cannot activate layer 4 on its own. Clarifies need for direct path. Prediction: plays key role in stable grouping, development and learning. ART Matching Rule!
  • image p419fig12.17 The auditory continuity illusion illustrates the ART Matching Rule at the level of auditory streaming. Its "backwards in time" effect of future context on past conscious perception is a signature of resonance.
    || Auditory continuity illusion. input, percept. Backwards in time - How does a future sound let past sound continue through noise? Resonance! - It takes a while to kick in. After it starts, a future tone can maintain it much more quickly. Why does this not happen if there is no noise? - ART Matching Rule! TD harmonic filter is modulatory without BU input. It cannot create something out of nothing.
  • image p420fig12.18 The ARTSTREAM model explains and simulates the auditory continuity illusion as an example of a spectral-pitch resonance. Interactions of ART Matching Rule and asymmetric competition mechanisms in cortical strip maps explain how the tone selects the consistent frequency from the noise in its own stream while separating the rest of the noise into another stream.
    || ARTSTREAM model (Grossberg 1999; Grossberg, Govindarajan, Wyse, Cohen 2004). SPINET. Frequency and pitch strips. Bottom Up (BU) harmonic sieve. Top Down (TD) harmonic ART matching. Exclusive allocation. Learn pitch categories based on early harmonic processing. A stream is a Spectral-Pitch Resonance!
  • image p473fig12.69 Error rate and mean reaction time (RT) data from the lexical decision experiments of (Schvaneveldt, McDonald 1981). ART Matching Rule properties explain these data in (Grossberg, Stone 1986).
    || (left) Error rate vs type of prime [R, N, U], [non,] word. (right) Mean RT (msec) vs type of prime [R, N, U], [non,] word.
  • image p483fig13.02 The object-value categories in the orbitofrontal cortex require converging specific inputs from the sensory cortex and nonspecific incentive motivational inputs from the amygdala in order to fire. When the orbitofrontal cortex fires, it can deliver top-down ART Matching Rule priming signals to the sensory cortical area by which it was activated, thereby helping to choose the active recognition categories there that have the most emotional support, while suppressing others, leading to attentional blocking of irrelevant cues.
    || Cognitive-Emotional-Motor (CogEM) model. Drive-> amygdala incentive motivational learning-> orbitofrontal cortex- need converging cue and incentive inputs to fire <-> sensory cortex- conditioned reinforcer learning-> amygdala. CS-> sensory cortex. Motivated attention closes the cognitive-emotional feedback loop, focuses on relevant cues, and causes blocking of irrelevant cues.
  • image p548fig15.16 Homologous recognition learning and reinforcement learning macrocircuits enable adaptively timed conditioning in the reinforcement learning circuit to increase inhibition of the orienting system at times when a mismatch in the recognition system would have reduced inhibition of it.
    || Homolog between ART and CogEM model, complementary systems. [Recognition, Reinforcement] learning vs [Attentional, Orienting] system. Reinforcement: timing, drive representation.
  • image p600fig16.36 The entorhinal-hippocampal system has properties of an ART spatial category learning system, with hippocampal place cells as the spatial categories. See the text for details.
    || Entorhinal-hippocampal interactions as an ART system. Hippocampal place cells as spatial categories. Angular head velocity-> head direction cells-> stripe cells- small scale 1D periodic code (ECIII) SOM-> grid cells- small scale 2D periodic code (ECII) SOM-> place cells- larger scale spatial map (DG/CA3)-> place cells (CA1)-> conjunctive-coding cells (EC V/VI)-> top-down feedback back to stripe cells- small scale 1D periodic code (ECIII). stripe cells- small scale 1D periodic code (ECIII)-> place cells (CA1).
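    reader note: a toy sketch of the stripe-to-grid step; the book's SOM hierarchy learns this combination, whereas this sketch hardwires it as a product of stripe-cell responses (the spacing and preferred directions are illustrative assumptions):

      import numpy as np

      def stripe_cell(pos, direction, spacing, phase=0.0):
          # 1D periodic firing along the cell's preferred direction
          d = np.dot(pos, direction)
          return 0.5 * (1 + np.cos(2 * np.pi * (d - phase) / spacing))

      pos = np.array([0.3, 0.7])
      dirs = [np.array([np.cos(a), np.sin(a)]) for a in (0.0, 2 * np.pi / 3, 4 * np.pi / 3)]
      # a grid-like cell fires where co-tuned stripe cells with different
      # preferred directions are coactive: a 2D periodic (hexagonal) code
      grid_response = np.prod([stripe_cell(pos, d, spacing=0.5) for d in dirs])
      print(grid_response)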
  • image p613fig16.43 The main visual form and motion processing stream mechanisms of SOVEREIGN, many of them described at length in previous chapters.
    || Render 3-D scene (R3DS), figure-ground separation (FGS), log-polar transform (LPT), Gaussian coarse-coding (GCC), Invariant visual target map (IVTM), What Fuzzy ART (WhatFuzz), body spatial coordinates (BSC), where reactive visual TPV storage (WRVTS), Directional transient cell network (DTCN), Motion direction hemifield map (MDHM), Hemifield left/right scoring (HLRS), reactive visual control signal (RVCS), Parvo/Magno/Erg competition (PMEC), Approach and Orient GOp (AOGp), GOm (GOm). R3DS [parvo-> FGS, magno-> DTCN], FGS-> [LPT, WRVTS], LPT-> GCC-> IVTM-> WhatFuzz, BSC-> [RVTS, PMEC], PMEC-> [gateRVTS-> RVTS, gateRVCS-> RVCS], DTCN-> MDHM-> HLRS, HLRS-> [PMEC, RVCS], AOGp-> gateRVTS, GOm-> gateRVCS.
  • p190 Howell: [neural microcircuits, modal architectures] used in ART -
    bottom-up filters | top-down expectations | purpose
    instar learning | outstar learning | p200fig05.13 Expectations focus attention: [in, out]star often used to learn the adaptive weights. top-down expectations to select, amplify, and synchronize expected patterns of critical features, while suppressing unexpected features
    LGN Lateral Geniculate Nucleus | V1 cortical area | p192fig05.06 focus attention on expected critical features
    EC-III entorhinal stripe cells | CA1 hippocampal place cells | p600fig16.36 entorhinal-hippocampal system has properties of an ART spatial category learning system, with hippocampal place cells as the spatial categories
    auditory orienting arousal | auditory category | p215fig05.28 How a mismatch between bottom-up and top-down patterns can trigger activation of the orienting system A and, with it, a burst of nonspecific arousal to the category level. ART Matching Rule: TD mismatch can suppress a part of F1 STM pattern, F2 is reset if degree of match < vigilance
    auditory stream with/without [silence, noise] gaps | perceived sound continues? | p419fig12.17 The auditory continuity illusion: Backwards in time - How does a future sound let past sound continue through noise? Resonance!
    visual perception, learning and recognition of visual object categories | motion perception, spatial representation and target tracking | p520fig14.02 pART predictive Adaptive Resonance Theory. Many adaptive synapses are bidirectional, thereby supporting synchronous resonant dynamics among multiple cortical regions. The output signals from the basal ganglia that regulate reinforcement learning and gating of multiple cortical areas are not shown.
    red - cognitive-emotional dynamics
    green - working memory dynamics
    black - see [bottom-up, top-down] lists
    EEG with mismatch, arousal, and STM reset events | expected [P120, N200, P300] event-related potentials (ERPs) | p211fig05.21 Sequences of P120, N200, and P300 event-related potentials (ERPs) occur during oddball learning EEG experiments under conditions that ART predicted should occur during sequences of mismatch, arousal, and STM reset events, respectively.
    Cognitive | Emotional | p541fig15.02 nSTART neurotrophic Spectrally Timed Adaptive Resonance Theory: Hippocampus can sustain a Cognitive-Emotional resonance: that can support "the feeling of what happens" and knowing what event caused that feeling. Hippocampus enables adaptively timed learning that can bridge a trace conditioning gap, or other temporal gap between Conditioned Stimulus (CS) and Unconditioned Stimulus (US).

    background colours in the table signify :
  • image p163fig04.39 A schematic of the LAMINART model that explains key aspects of laminar visual cortical anatomy and dynamics. LGN -> V1 [6, 4, 2/3] -> V2 [6, 4, 2/3]
    || p163c1h0.6 "... The first article about laminar computing ... proposed how the laminar cortical model could process 2D pictures using bottom-up filtering and horizontal bipole grouping interactions (Grossberg, Mingolla, Ross 1997). In 1999, I was able to extend the model to also include top-down circuits for expectation and attention (Grossberg 1999)(right panel). Such a synthesis of laminar bottom-up, horizontal, and top-down circuits is characteristic of the cerebral cortex (left panel). I called it LAMINART because it began to show how properties of Adaptive Resonance Theory, or ART, notably the ART prediction about how top-down expectations and attention work, are realized by identified cortical cells and circuits. You can immediately see from the schematic laminar circuit diagram ... (right panel) that circuits in V2 seem to repeat circuits in V1, albeit with a larger spatial scale, despite the fact that V1 and V2 carry out different functions. How this anatomical similarity can coexist with functional diversity will be clarified in subsequent sections and chapters. It enables different kinds of biological intelligence to communicate seamlessly while carrying out their different psychological functions. ..."
  • image p174fig04.52 An example of how the 3D LAMINART model can transform the two monocular images of the random dot stereogram in the top row into the three depth-separated surface representations in the bottom row.
    || Stereogram surface percepts: surface lightnesses are segregated in depth (Fang and Grossberg 2009). [left, right] inputs, [far, fixation, near] planes. Contrast with algorithms that just compute disparity matches and let computer code build the surface, eg (Marr, Poggio, etal 1974).
  • image p182fig04.58 LAMINART model processing stages that are sufficient to explain many percepts of transparency, including those summarized in Figure 4.57.
    || [left, right] eye, [LGN, V1 [6, 4, 3B, 2/3 A], V2 [4, 2/3]], [mo, bi]nocular cart [simple, complex] cells, [excita, inhibi]tory cart [connection, cell]s.
  • image p230fig05.38 The Synchronous Matching ART, or SMART, model includes spiking neurons in a laminar cortical hierarchy. I developed SMART with my PhD student Massimiliano Versace. By unlumping LAMINART to include spiking neurons, finer details of neurodynamics, such as the existence of faster gamma oscillations during good enough matches, and slower beta oscillations during bad enough mismatches, could be shown as emergent properties of network interactions.
    || Second order thalamus -> specific thalamic nucleus -> Thalamic reticular nucleus -> neocortical laminar circuit [6ll, 6l, 5, 2/3, 1] -> Higher order cortex. Similar for First order thalamus -> First order cortex, with interconnection to Second order, nonspecific thalamic nucleus
  • image p231fig05.39 The SMART hypothesis testing and learning cycle predicts that vigilance increases when a mismatch in subcortical regions like the nonspecific thalamus activates the nucleus basalis of Meynert which, in turn, broadcasts a burst of the neurotransmitter acetylcholine, or ACh, to deeper cortical layers. Due to the way in which LAMINART proposes that cortical matching and mismatching occurs, this ACh burst can increase vigilance and thereby trigger a memory search. See the text for details.
    || [BU input, [, non]specific thalamic nucleus, thalamic reticular nucleus, neocortical laminar circuit] cart [Arousal, Reset, Search, Vigilance]
  • image p246fig05.48 Microcircuits of the LAMINART model that I developed with Rajeev Raizada. See the text for details of how they integrate bottom-up adaptive filtering, horizontal bipole grouping, and top-down attentional matching that satisfies the ART Matching Rule.
    ||
  • image p248fig05.49 This circuit of the LAMINART model helps to explain properties of Up and Down states during slow wave sleep, and how disturbances in ACh dynamics can disrupt them.
    ||
  • image p375fig11.07 The 3D LAMINART model uses both monocular and binocular simple cells to binocularly fuse like image contrasts. The remainder of the model generates 3D boundary and surface representations of multiple kinds of experiments as well as of natural scenes.
    || 3D LAMINART model (Cao, Grossberg 2005). [left, right] eye cart [LGN, V1 blob [4->3B->2/3A], V2 thin stripe [4->2/3A], V4]. V1 blob [V1-4 monocular, V1 interior binocular] simple cells. [complex, simple, inhibitory] cells, on-center off-surround
  • image p376fig11.10 The 3D LAMINART model shows how the disparity filter can be integrated into the circuit that completes 3D boundary representations using bipole grouping cells. It also explains how surface contours can strengthen boundaries that succeed in generating closed filling-in domains.
    || 3D LAMINART model (Cao, Grossberg 2005). [left, right] eye cart [LGN, V1 blob [4->3B->2/3A] surface contour, V2 thin stripe (monocular surface) [4->2/3A], V2 interior [disynaptic inhibitory interneurons, bipole grouping cells, disparity filter, V4 binocular surface]. [complex, simple, inhibitory] cells, on-center off-surround
  • image p378fig11.12 How monocular and binocular information are combined in V1 and V2 in the 3D LAMINART model.
    || Model utilizes monocular information. [left, right] eye cart V1-[4 monocular simple, 3B binocular simple, complex2/3A [mo,bi]nocular] cells, V2-4 binocular complex cells. black = monocular cells, blue = binocular cells. In V2, monocular inputs add to binocular inputs along the line of sight and contribute to depth perception.
  • image p379fig11.13 How the 3D LAMINART model explains DaVinci stereopsis. All the stages of boundary and surface formation are color coded to clarify their explanation. Although each mechanism is very simple, when all of them act together, the correct depthful surface representation is generated. See the text for details.
    || DaVinci stereopsis (Nakayama, Shimojo 1990). An emergent property of the previous simple mechanisms working together. [V1 binocular boundaries -> V2 initial boundaries -> V2 final boundaries -> V4 surface] pair [Binocular match: boundaries of thick bar -> Add monocular boundaries along lines-of-sight -> Line-of-sight inhibition kills weaker vertical boundaries -> 3D surface percept not just a disparity match!] pair [Binocular match: right edge of thin and thick bars -> Strongest boundaries: binocular and monocular boundaries add -> Vertical boundaries from monocular left edge of thin bar survive -> Filling-in contained by connected boundaries]. cart [very near, near, fixation plane, far, very far]
  • image p380fig11.14 The model explanation of DaVinci stereopsis when the input stimuli have opposite contrast polarities.
    || Polarity-reversed Da Vinci stereopsis (Nakayama, Shimojo 1990). Same explanation! (... as Figure 11.13 ...) [V1 binocular boundaries -> V2 initial boundaries -> V2 final boundaries -> V4 surface] cart [very near, near, fixation plane, far, very far]
  • image p395fig11.34 A comparison of the properties of other rivalry models with those of the 3D LAMINART model (surrounded by red border). Significantly, only 3D LAMINART explains both stable vision and rivalry (green border).
    || Comparison of rivalry models
  • image p401fig11.41 The 3D LAMINART model proposes how angle cells and disparity-gradient cells interact through learning to generate 3D representations of slanted objects.
    || 3D LAMINART model. [LGN, V1, V2, V4] Four key additions: 1. Angle cells - tuned to various angles; 2. Disparity-gradient cells - tuned to disparity gradients in the image; 3. Weights from [angle to disparity-gradient] cells - learned while viewing 3D image; 4. Collinear grouping between [angle to disparity-gradient] cells - disambiguates ambiguous groupings.
  • image p402fig11.44 3D scenic reconstruction of the image in Figure 11.43 by the 3D LAMINART model.
    || Disparity [5, 6, 8, 10, 11, 14]: images of objects in common depth planes
  • image p445fig12.43 The LIST PARSE laminar cortical Cognitive Working Memory circuit, that is proposed to occur in ventrolateral prefrontal cortex, is homologous to the LAMINART circuit that models aspects of how visual cortex sees. The Motor Working Memory, VITE Trajectory Generator, and Variable-Rate Volitional Control circuits model how other brain regions, including dorsolateral prefrontal cortex, motor cortex, cerebellum, and basal ganglia, interact with the Cognitive Working Memory to control working memory storage and variable-rate performance of item sequences.
    || List parse circuit diagram. Connectivity convention. sequence chunks [<- BU filter, TD expectation ->] working memory. Working memory and sequence chunking circuit is homologous to visual LAMINART circuit!
  • image p498fig13.22 (left column, top row) Adaptive filtering and conditioned arousal are both needed to regulate what cues can learn to activate particular space-time patterns. These developments lead inexorably to basic cognitive abilities, as embodied in the 3D LAMINART models for 3D vision and figure-ground perception (Chapter 11) and the 3D ARTSCAN SEARCH model for invariant object learning, recognition, and 3D search (Chapter 6). (right column, top row) Conditioned arousal enables only emotionally important cues to activate a motivationally relevant space-time pattern. (bottom row) Conditioned arousal and drive representations arise naturally from the unlumping of avalanche circuits to make them selective to motivationally important cues. The MOTIVATOR model is a natural outcome of this unlumping process (this chapter).
    || (top) Adaptive filtering and Conditioned arousal. Towards Cognition: need to filter inputs to the command cell. Towards Emotion: important signals turn arousal ON and OFF. (bottom) Conditioned arousal and Drive representations. Competition between conditioned arousal sources at drive representations, eg amygdala.
  • image p228fig05.37 A macrocircuit of the neurotrophic Spectrally Timed ART, or nSTART, model. I developed nSTART with my PhD student Daniel Franklin. It proposes how adaptively timed learning in the hippocampus, bolstered by Brain Derived Neurotrophic Factor, or BDNF, helps to ensure normal memory consolidation.
    || habituative gates, CS, US, Thalamus (sensory cortex, category learning, conditioned reinforcer learning, adaptively timed learning and BDNF), Amygdala (incentive motivation learning), Hippocampus (BDNF), Prefrontal Cortex (attention), Pontine nuclei, Cerebellum (adaptively timed motor learning)
  • image p541fig15.02 The neurotrophic Spectrally Timed Adaptive Resonance Theory, or nSTART, model of (Franklin, Grossberg 2017) includes hippocampus to enable adaptively timed learning that can bridge a trace conditioning gap, or other temporal gap between CS and US.
    || Hippocampus can sustain a Cognitive-Emotional resonance: that can support "the feeling of what happens" and knowing what event caused that feeling. [CS, US] -> Sensory Cortex (SC) <- motivational attention <-> category learning -> Prefrontal Cortex (PFC). SC conditioned reinforcement learning-> Amygdala (cannot bridge the temporal gap) incentive motivational learning-> PFC. SC adaptively timed learning and BDNF-> Hippocampus (can bridge the temporal gap) BDNF-> PFC. PFC adaptively timed motor learning-> cerebellum.

  • image p221fig05.31 A system like Fuzzy ARTMAP can learn to associate learned categories in one ART network with learned categories in a second ART network. Because both bottom-up and top-down interactions occur in both networks, a bottom-up input pattern to the first ART network can learn to generate a top-down output pattern from the second ART network.
    || Fuzzy ARTMAP. Match tracking realizes minimax learning principle: vigilance increases to just above the match ratio of prototype / exemplar, thereby triggering search
  • image p225fig05.33 Some early ARTMAP benchmark studies. These successes led to the use of ARTMAP, and many variants that we and other groups have developed, in many large-scale applications in engineering and technology, a trend that has not abated even today.
    || see Early ARTMAP benchmark studies
  • image p225fig05.34 ARTMAP was successfully used to learn maps of natural terrains with many advantages over those of mapping projects that used AI expert systems. The advantages are so great that many mapping projects started to use this technology.
    || AI expert system - 1 year: field identification of natural regions; derivation of ad hoc rules for each region by expert geographers; correct 80,000 of 250,000 site labels; 230m (site-level) scale. ARTMAP system - 1 day: rapid, automatic, no natural regions or rules; confidence map; 30m (pixel-level) scale can see roads; equal accuracy at test sites
  • image p242fig05.46 Computer simulations of how two variants of Distributed ARTMAP incrementally learn the 5-4 category structure. See the text for details.
    || Distributed ARTMAP with [self-supervised learning, post-training LTM noise]
  • image p436fig12.30 The conscious ARTWORD, or cARTWORD, laminar cortical speech model simulates how future context can disambiguate noisy past speech sounds in such a way that the completed percept is consciously heard to proceed from past to future as a feature-item-list resonant wave propagates through time.
    || cARTWORD: Laminar cortical model macrocircuit (Grossberg, Kazerounian 2011) Simulates PHONEMIC RESTORATION: Cognitive Working Memory (processed item sequences) - [Excitatory-> inhibitory-> habituative-> adaptive filter-> adaptive filter-> adaptive filter with depletable synapse-> Acoustic [item, feature]
  • image p453fig12.49 The ARTWORD model that I published in 2000 with my PhD student Christopher Myers simulates data such as the (Repp etal 1978) data in Figure 12.68. See the text for details.
    || ARTWORD model (Grossberg, Myers 2000). Input phonetic features-> Phonemic item working memory-> Masking Field unitized lists-> Automatic gain control-> Phonemic item working memory. [habituative gate, adaptive filter]s.
  • image p453fig12.50 The ARTWORD perception cycle shows how sequences of items activate possible list chunks, which compete among each other and begin to send their top-down expectations back to the item working memory. An item-list resonance develops through time as a result.
    || ARTWORD perception cycle. (a) bottom-up activation (b) list chunk competition (c) item-list resonance (d) chunk reset due to habituative collapse.
  • image p455fig12.52 Simulation of cARTWORD dynamics in response to the complete list /1/-/2/-/3/. The relevant responses are surrounded by a red box.
    || Presentation of a normal sequence: input /1/-/2/-/3/. |c(i,1)-5| vs time (msec). List chunks select most predictive code. Order stored in WM layers. Resonant activity of /1/-/2/-/3/ in item and feature layers corresponds to conscious speech percept.
  • image p456fig12.53 Simulation of cARTWORD dynamics in response to the partial list /1/-silence-/3/ with /2/ replaced by silence. Only the representations of these items can be seen in the red box.
    || Presentation with silence duration: input /1/-silence-/3/. |c(i,1)-5| vs time (msec). List chunks select most predictive code. Order stored in WM layers. Gap in resonant activity of /1/-silence-/3/ in item and feature layers corresponds to perceived silence.
  • image p456fig12.54 Item /2/ is restored in the correct list position in response to the list /1/-noise-/3/.
    || Presentation with noise: input /1/-noise-/3/. |c(i,1)-5| vs time (msec). List chunks select the most predictive code. Order restored in WM layers. Resonant activity of /1/-/2/-/3/ in item and feature layers corresponds to restoration of item /2/ replaced by noise in input.
  • image p457fig12.55 Item /4/ is restored in the correct list position in response to the list /1/-noise-/5/. This and the previous figure show how future context can disambiguate past noisy sequences that are otherwise identical.
    || Presentation with noise: input /1/-noise-/5/. |c(i,1)-5| vs time (msec). List chunks select the most predictive code. Order restored in WM layers. Resonant activity of /1/-/4/-/5/ in item and feature layers corresponds to restoration of item /4/ replaced by noise in input.
  • image p444fig12.42 The LIST PARSE laminar cortical model of working memory and list chunking that I published with Lance Pearson in 2008 simulated the Averbeck etal data in Figure 12.41, as in the left column of the figure. It also simulated cognitive data about working memory storage by human subjects. See the text for details.
    || LIST PARSE: Laminar cortical model of working memory and list chunking (Grossberg, Pearson 2008). Simulates data about: [immediate, delayed, continuous] distractor free recall; immediate serial recall; and variable-speed sequential performance of motor acts. [velocity, acceleration] vs time (ms) from recall cue.
  • image p445fig12.43 The LIST PARSE laminar cortical Cognitive Working Memory circuit, that is proposed to occur in ventrolateral prefrontal cortex, is homologous to the LAMINART circuit that models aspects of how visual cortex sees. The Motor Working Memory, VITE Trajectory Generator, and Variable-Rate Volitional Control circuits model how other brain regions, including dorsolateral prefrontal cortex, motor cortex, cerebellum, and basal ganglia, interact with the Cognitive Working Memory to control working memory storage and variable-rate performance of item sequences.
    || List parse circuit diagram. Connectivity convention. sequence chunks [<- BU filter, TD expectation ->] working memory. Working memory and sequence chunking circuit is homologous to visual LAMINART circuit!
  • image p446fig12.44 (left column, top row) LIST PARSE can model linguistic data from human subjects. In this figure, model parameters are fixed to enable a close fit to data about error-type distributions in immediate free recall experiments, notably transposition errors. (right column, top row) Simulation and data showing bowing of the serial position curve, including an extended primacy gradient. (left column, bottom row) The simulation curve overlays data about list length effects, notably the increasing recall difficulty of longer lists during immediate serial recall (ISR). (right column, bottom row) Simulation (bottom image) and data (top image) of the limited temporal extent for recall.
    || (1. TL) Error-type distributions in immediate serial recall (Henson etal 1996). % occurrence vs serial position. Graph convention: Data- dashed lines; Simulations- solid lines. Six letter visual ISR. Order errors- transpositions of neighboring items are the most common. Model explanation: Noisy activation levels change relative order in primacy gradient. Similar activation of neighboring items most susceptible to noise. Model parameters fitted on these data. (2. TR) Bowing of serial position curve (Cowan etal 1999). % correct vs serial position. Auditory ISR with various list lengths (graphs shifted rightward): For [, sub-]span lists- extended primacy, with one (or two) item recency; Auditory presentation- enhanced performance for last items. LIST PARSE: End effects- first and last items half as many members; Echoic memory- last presented item retained in separate store. (3. BL) List length effects, circles (Crannell, Parrish 1968), squares (Baddeley, Hitch 1975), solid line- simulation. % list correct vs list length. Variable list length ISR: longer lists are more difficult to recall. LIST PARSE: More items- closer activation levels and lower absolute activity level with enough inputs; Noise is more likely to produce order errors, Activity levels more likely to drop below threshold;. (4. BR) Limited temporal extent for recall (Murdock 1961). % recalled vs retention interval (s). ISR task with distractor-filled retention intervals (to prevent rehearsal): Increasing retention interval - decreases probability of recalling list correctly; Load dependence- longer list more affected by delays; Performance plateau- subjects reach apparent asymptote. LIST PARSE: Increase convergence of activities with time; loss of order information;.
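    reader note: a toy sketch of the Item-and-Order idea behind these fits: items are stored as a primacy gradient, recall repeatedly picks the most active item and self-inhibits it, and STM noise on near-equal neighboring activities produces transposition errors. Gradient values and noise levels are illustrative assumptions:

      import numpy as np

      rng = np.random.default_rng(1)

      def recall(list_len, noise):
          act = np.linspace(1.0, 0.5, list_len)         # primacy gradient
          act = act + rng.normal(0.0, noise, list_len)  # STM noise
          order = []
          for _ in range(list_len):
              i = int(np.argmax(act))                   # strongest item is recalled
              order.append(i)
              act[i] = -np.inf                          # self-inhibition after recall
          return order

      print(recall(6, noise=0.02))  # usually the correct order [0..5]
      print(recall(6, noise=0.20))  # neighboring transpositions appear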
  • image p447fig12.45 (left column) LIST PARSE simulations of the proportion of order errors as a function of serial position for 6 item lists with (a) an extended pause of 7 time units between the third and fourth items, and (b) pauses of 5 time units (solid curve) and 10 time units (dashed curve) between all items. (right column) Simulations (solid curves) and data (dashed curves) illustrating close model fits in various immediate free recall tasks.
    || (Left) Temporal grouping and presentation variability. Temporal grouping: Inserting an extended pause leads to inter-group bowing; Significantly different times of integration and activity levels across pause, fewer interchanges. (Right) Immediate free recall, and [delayed, continuous] distractor-free recall. Overt rehearsal IFR task with super-span (ie 20 item) lists: Extended recency- even more extended with shorter ISIs; Increased probability of recall with diminished time from last rehearsal; Early items in list rehearsed most;. LIST PARSE (unique) for long lists: Incoming items form recency gradient; Rehearsal (re-presentation) based upon level of activity.
  • image p254fig06.03 These interactions of the ARTSCAN Search model enable it to learn to recognize and name invariant object categories. Interactions between spatial attention in the Where cortical stream, via surface-shroud resonances, and object attention in the What cortical stream, that obeys the ART Matching Rule, coordinate these learning, recognition, and naming processes.
    || Retinal image -> On & OFF cell contrast normalization (retina/LGN) -> polarity [, in]sensitive contrast enhancement (V1) -> object [boundary (V2), surface (V2/V4)] -> surface contour (V2); What stream categories: volition control (BG). object boundary (V2) <-> view (ITp) <-> view integrator (ITp) <-> object (ITa) <-> [object-value (ORB), value (Amyg)] <-> name (PFC)
  • image p255fig06.04 The ARTSCAN Search model can also search for a desired target object in a scene, thereby clarifying how our brains solve the Where's Waldo problem.
  • image p259fig06.08 The distributed ARTSCAN, or dARTSCAN, model includes spatial attention in both PPC and PFC, and both fast-acting attention, triggered by transient cells in Where cortical areas such as MT, and slower-acting surface-shroud resonances in What cortical areas such as V4 and PPC. See the text for details.
    || dARTSCAN spatial attention hierarchy, Fast (Where stream) Slow (What stream) (Foley, Grossberg, and Mingolla 2012). [transient cells (MT) ->, object surfaces (V4) <->] [object shrouds (PPC) <-> spatial shrouds (PPC/PFC)]
  • image p271fig06.17 Persistent activity in IT cells is just what is needed to enable view-invariant object category learning by ARTSCAN to be generalized to [view, position, size]-invariant category learning by positional ARTSCAN, or pARTSCAN. See the text for details.
    || Persistent activity in IT. Physiological data show that persistent activity exists in IT (Fuster and Jervey 1981, Miyashita and Chang 1988, Tomita etal 1999). Adapted from (Tomita etal 1999 Nature)
  • image p272fig06.18 The pARTSCAN model can learn [view, position, size]-invariant categories by adding view category integrator cells that have the properties of persistent neurons in IT. These integrator cells get reset with the invariant object category, not the view category.
    || pARTSCAN: positionally-invariant object learning. (Cao, Grossberg, Markowitz 2011). IT cells with persistent activities are modeled by view category integrators in ITp. View-specific category cells are RESET as the eyes move within the object. View category integrator cells are NOT RESET when the view-specific category is reset. They are RESET along with invariant object category cells when a spatial attention shift occurs.
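    reader note: a toy sketch of these reset rules as a small state machine (class and method names are mine, not the model's variables):

      class PARTScanResets:
          def __init__(self):
              self.view_category = None      # ITp view-specific category
              self.view_integrator = set()   # ITp view category integrator
              self.object_category = None    # ITa invariant object category

          def saccade_within_object(self, new_view):
              self.view_category = new_view        # view category IS reset/recoded
              self.view_integrator.add(new_view)   # integrator persists, accumulates views

          def spatial_attention_shift(self):
              # shroud collapse -> parietal reset burst
              self.view_category = None
              self.view_integrator.clear()   # integrator resets together with
              self.object_category = None    # the invariant object category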
  • image p273fig06.20 pARTSCAN can simulate the IT cell recoding that Li and DiCarlo reported in their swapping experiments because the swapping procedure happens without causing a parietal reset burst to occur. Thus the originally activated invariant category remains activated and can get associated with the swapped object features.
    || Simulation of Li and DiCarlo swapping data. data (Li and DiCarlo 2008), model (Cao, Grossberg, Markowitz 2011). normalized response vs. exposure (swaps and/or hours)
  • image p274fig06.21 pARTSCAN can also simulate the trade-off in IT cell responses between position invariance and selectivity that was reported by Zoccolan etal 2007. This trade-off limits the amount of position invariance that can be learned by a cortical area like V1 that is constrained by the cortical magnification factor.
    || Trade-off in IT cell response properties. Inferotemporal cortex cells with greater position invariance respond less selectively to natural objects. invariance-tolerance, selectivity-sparseness. data (Zoccolan etal 2007) model (Grossberg, Markowitz, Cao 2011). position tolerance (PT, degrees) vs sparseness (S)
  • image p274fig06.22 pARTSCAN can simulate how IT cortex processes image morphs, when it learns with high vigilance. See the text for details.
    || Akrami etal simulation: a case of high vigilance. tested on morphs between image pairs
  • image p498fig13.22 (left column, top row) Adaptive filtering and conditioned arousal are both needed to regulate what cues can learn to activate particular space-time patterns. These developments lead inexorably to basic cognitive abilities, as embodied in the 3D LAMINART models for 3D vision and figure-ground perception (Chapter 11) and the 3D ARTSCAN SEARCH model for invariant object learning, recognition, and 3D search (Chapter 6). (right column, top row) Conditioned arousal enables only emotionally important cues to activate a motivationally relevant space-time pattern. (bottom row) Conditioned arousal and drive representations arise naturally from the unlumping of avalanche circuits to make them selective to motivationally important cues. The MOTIVATOR model is a natural outcome of this unlumping process (this chapter).
    || (top) Adaptive filtering and Conditioned arousal. Towards Cognition: need to filter inputs to the command cell. Towards Emotion: important signals turn arousal ON and OFF. (bottom) Conditioned arousal and Drive representations. Competition between conditioned arousal sources at drive representations, eg amygdala.
  • image p531fig14.06 Classification of scenic properties as texture categories by the ARTSCENE model. See the text for details.
    || Image-> Feature extraction (texture principal component rankings)-> Learning feature-to-scene mapping (texture category principal component rankings)<- scene class. Large-to-small attentional shrouds as principal component higher.
  • image p531fig14.07 Voting in the ARTSCENE model achieves even better prediction of scene type. See the text for details.
    || Image-> Feature extraction (texture principal component rankings)-> Learning feature-to-scene mapping (texture category principal component rankings)-> evidence accumulation (sum)-> scene class winner-take-all inference. Large-to-small attentional shrouds as principal component higher.
  • image p532fig14.08 Macrocircuit of the ARTSCENE Search neural model for learning to search for desired objects by using the sequences of already experienced objects and their locations to predict what and where the desired object is. V1 = First visual area or primary visual cortex; V2 = Second visual area; V4 = Fourth visual area; PPC = Posterior Parietal Cortex; ITp = posterior InferoTemporal cortex; ITa = anterior InferoTemporal cortex; MTL = Medial Temporal Lobe; PHC = ParaHippoCampal cortex; PRC = PeriRhinal Cortex; PFC = PreFrontal Cortex; DLPFC = DorsoLateral PreFrontal Cortex; VPFC = Ventral PFC; SC = Superior Colliculus.
    ||
  • image p533fig14.09 Search data and ARTSCENE Search simulations of them in each pair of images from (A) to (F). See the text for details.
    || 6*[data vs simulation], [Response time (ms) versus epoch].
  • image p214fig05.26 When a big enough mismatch occurs, the orienting system is activated and sends a burst of nonspecific arousal to the category level. This Mismatch Detector has properties of the N200 ERP.
    || Mismatch triggers nonspecific arousal. Mismatch at F1 elicits a nonspecific event at F2. Call this event nonspecific arousal. N200 ERP Näätänen etal: 1. bottom-up, 2. unconditionable, 3. nonspecific, 4. mismatch
  • image p230fig05.38 The Synchronous Matching ART, or SMART, model includes spiking neurons in a laminar cortical hierarchy. I developed SMART with my PhD student Massimiliano Versace. By unlumping LAMINART to include spiking neurons, finer details of neurodynamics, such as the existence of faster gamma oscillations during good enough matches, and slower beta oscillations during bad enough mismatches, could be shown as emergent properties of network interactions.
    || Second order thalamus -> specific thalamic nucleus -> Thalamic reticular nucleus -> neocortical laminar circuit [6ll, 6l, 5, 2/3, 1] -> Higher order cortex. Similar for First order thalamus -> First order cortex, with interconnection to Second order, nonspecific thalamic nucleus
  • image p231fig05.39 The SMART hypothesis testing and learning cycle predicts that vigilance increases when a mismatch in subcortical regions like the nonspecific thalamus activates the nucleus basalis of Meynert which, in turn, broadcasts a burst of the neurotransmitter acetylcholine, or ACh, to deeper cortical layers. Due to the way in which LAMINART proposes that cortical matching and mismatching occurs, this ACh burst can increase vigilance and thereby trigger a memory search. See the text for details.
    || [BU input, [, non]specific thalamic nucleus, thalamic reticular nucleus, neocortical laminar circuit] cart [Arousal, Reset, Search, Vigilance]
  • image p232fig05.40 Computer simulation of how the SMART model generates (a) gamma oscillations if a good enough match occurs, or (c) beta oscillations if a bad enough match occurs. See the text for details.
    || Brain oscillations during match/mismatch, data, simulation. (a) TD corticothalamic feedback increases synchrony (Sillito etal 1994) (b) Match increases γ oscillations (c) Mismatch increases θ,β oscillations
  • image p232fig05.41 (a)-(c). The sequence of interlaminar events that SMART predicts during a mismatch reset. (d) Some of the compatible neurophysiological data.
    || Mismatch causes layer 5 dendritic spikes that trigger reset. (a) Arousal causes increase in nonspecific thalamic nuclei firing rate and layer 5 dendritic and later somatic spikes (Larkum and Zhu 2002, Williams and Stuart 1999) (b) Layer 5 spikes reach layer 4 via layer 6i and inhibitory neurons (Lund and Boothe 1975, Gilbert and Wiesel 1979) (c) habituative neurotransmitters in layer 6i shift the balance of active cells in layer 4 (Grossberg 1972, 1976) (d) Dendritic stimulation fires layer 5 (Larkum and Zhu 2002) stimulation apical dendrites of nonspecific thalamus
  • Grossberg 2021 p229c2h0.60 SMART computer simulations demonstrate that a good enough match of a top-down expectation with a bottom-up feature pattern generates an attentive resonance during which the spikes of active cells synchronize in the gamma frequency range of 20-70 Hz (Figure 5.40). Many labs have reported a link between attention and gamma oscillations in the brain, including two articles published in 2001, one from the laboratory of Robert Desimone when he was at the National Institute of Mental Health in Bethesda (Fries, Reynolds, Rorie, Desimone 2001), and the other from the laboratory of Wolf Singer in Frankfurt (Engel, Fries, Singer 2001). ...
  • image p468fig12.65 Linguistic properties of the PHONET model and some of the data that it simulates. The upper left image summarizes the asymmetric transient-to-sustained gain control that helps to create invariant intraword ratios during variable-rate speech. The lower left image summarizes the rate-dependent gain control of the ARTPHONE model that creates rate-invariant working memory representations in response to sequences of variable-rate speech. The right image summarizes the kind of paradoxical VC-CV category boundary data of (Repp 1980) that ARTPHONE simulates. See the text for details.
    || (left upper) [transient, sustained] [working memory, filter, category]. (left lower) phone inputs-> [input rate estimate, features], Features w <- habituative transmitter gates -> categories-> rate invariant phonetic output, input rate estimate-> gain control-> [features, categories] rate-dependent integration of categories and features. (right) % 2-stop vs VC-CV silent interval (msec): [ib-ga, ib-ba, iga, iba].
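    reader note: a toy sketch of the rate-invariance idea (my simplification, not PHONET's differential equations): an input-rate estimate gain-controls integration so that the stored consonant/vowel ratio stays invariant across speech rates:

      def stored_ratio(consonant_ms, vowel_ms):
          rate_estimate = 1.0 / (consonant_ms + vowel_ms)  # crude speech-rate estimate
          c = consonant_ms * rate_estimate                 # rate-scaled transient trace
          v = vowel_ms * rate_estimate                     # rate-scaled sustained trace
          return c / v

      print(stored_ratio(40, 120))  # slower speech
      print(stored_ratio(20, 60))   # same syllable twice as fast: same ratio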
  • image p420fig12.18 The ARTSTREAM model explains and simulates the auditory continuity illusion as an example of a spectral-pitch resonance. Interactions of ART Matching Rule and asymmetric competition mechanisms in cortical strip maps explain how the tone selects the consistent frequency from the noise in its own stream while separating the rest of the noise into another stream.
    || ARTSTREAM model (Grossberg 1999; Grossberg, Govindarajan, Wyse, Cohen 2004). SPINET. Frequency and pitch strips. Bottom Up (BU) harmonic sieve. Top Down (TD) harmonic ART matching. Exclusive allocation. Learn pitch categories based on early harmonic processing. A stream is a Spectral-Pitch Resonance!
  • image p422fig12.19 The ARTSTREAM model includes mechanisms for deriving streams both from pitch and from source direction. See the text for details.
    || [left, right] cart Peripheral processing = [input signal-> outer & middle ear preemphasis-> basilar membrane gammatone filterbank-> energy measure]. Spectral stream layer-> spectral summation layer-> delays-> [f-, tau] plane-> pitch stream layer-> pitch summation layer.
  • image p423fig12.20 The Spatial Pitch Network, or SPINET, model shows how a log polar spatial representation of the sound frequency spectrum can be derived from auditory signals occurring in time. The spatial representation allows the ARTSTREAM model to compute spatially distinct auditory streams.
    || SPINET model (Spatial Pitch Network) (Cohen, Grossberg, Wyse 1995). 1. input sound 2. Gamma-tone filter bank 3. Short-term average energy spectrum 4. MAP transfer function 5. On-center off-surround and rectification 6. Harmonic weighting 7. Harmonic summation and competition -> PITCH
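    reader note: a toy sketch of a harmonic sieve in the spirit of steps 6-7 above (heavily simplified; SPINET implements this with harmonic weighting and on-center off-surround competition, and the tolerance and candidate list below are illustrative assumptions):

      def pitch_estimate(peaks_hz, candidates_hz, tol_hz=5.0):
          # a candidate pitch "explains" the spectrum if every spectral
          # peak lies near one of its integer harmonics; taking the
          # highest such candidate avoids subharmonic aliases
          def explains(f0):
              return all(min(p % f0, f0 - p % f0) < tol_hz for p in peaks_hz)
          fitting = [f0 for f0 in candidates_hz if explains(f0)]
          return max(fitting) if fitting else None

      # components at 400-1000 Hz; the 200 Hz fundamental is missing
      print(pitch_estimate([400, 600, 800, 1000], [100, 200, 400]))  # -> 200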
  • image p424fig12.21 One of the many types of data about pitch processing that are simulated by the SPINET model. See the text for details.
    || Pitch shifts with component shifts (Patterson, Wightman 1976; Schouten 1962). Pitch vs lowest harmonic number.
  • image p425fig12.23 ARTSTREAM simulations of the auditory continuity illusion and other streaming properties (left column, top row). When two tones are separated by silence (Input), a percept of silence also separates them in a spectral-pitch resonance. (left column, bottom row). When two tones are separated by broadband noise, the percept of tone continues through the noise in one stream (stream 1) while the remainder of the noise occurs in a different stream (stream 2). (right column) Some of the other streaming properties that have been simulated by the ARTSTREAM model.
    || Auditory continuity does not occur without noise. Auditory continuity in noise. Other simulated streaming data.
  • image p431fig12.27 The strip maps that occur in ARTSTREAM and NormNet are variants of a cortical design that also creates ocular dominance columns in the visual cortex.
    || Adult organization of V1 (Grinvald etal http://www.weizmann.ac.il/brain/images/cubes.html). (1) Ocular dominance columns (ODCs): Alternating strips of cortex respond preferentially to visual inputs of each eye (R/L corresponds to Right and Left eye inputs in the figure); Orientation columns: A smooth pattern of changing orientation preference within each ODC. Organized in a pinwheel like fashion.
  • p370 Chapter 11 means (Grossberg 2021) page 370, Chapter 11
    p002sec Illusion and reality means (Grossberg 2021) page 2, section Illusion and reality
    p013fig01.09 means (Grossberg 2021) page 13, Figure 1.09 (1.9 as in book)
    p030tbl01.02 means (Grossberg 2021) page 30, Table 1.02 (1.2 as in book)
    p111c2h0.5 means (Grossberg 2021) page 111, column 2, height from top as fraction of page height
    || text... are notes in addition to [figure, table] captions, mostly comprised of text within the image, but also including quotes of text in the book. Rarely, it includes comments by Howell preceded by "Howell". The latter are distinct from "readers notes" (see, for example : reader Howell notes).
    p044 Howell: grepStr
  • p001 Chapter 1 Overview - From Complementary Computing and Adaptive Resonance to conscious awareness
  • p250 Chapter 6 Conscious seeing and invariant recognition - Complementary cortical streams coordinate attention for seeing and recognition
  • p539 Chapter 15 Adaptively timed learning - How timed motivation regulates conscious learning and memory consolidation
  • image p039tbl01.03 The link between consciousness and movement
    ||
    VISUAL | seeing, knowing, and reaching
    AUDITORY | hearing, knowing, and speaking
    EMOTIONAL | feeling, knowing, and acting
  • image p042tbl01.04 The six main kinds of resonances which support different kinds of conscious awareness that will be explained and discussed in this book.
    ||
    type of resonance | type of consciousness
    surface-shroud | see visual object or scene
    feature-category | recognize visual object or scene
    stream-shroud | hear auditory object or stream
    spectral-pitch-and-timbre | recognize auditory object or stream
    item-list | recognize speech and language
    cognitive-emotional | feel emotion and know its source
  • image p270fig06.16 The same target position signal that can command the next saccade also updates a gain field that predictively maintains the attentional shroud in head-centered coordinates, even before the eye movement is complete. This process keeps the shroud invariant under eye movements, so that it can continue to inhibit reset of an emerging invariant category as it is associated with multiple object views, even while the conscious surface representation shifts with each eye movement in retinotopic coordinates. This updating process is often called predictive remapping.
    || Predictive remapping of eye movements! From V3A to LIP. [spatial attention, object attention, figure-ground separation, eye movement remapping, visual search]. (Beauvillain etal 2005, Carlson-Radvansky 1999, Cavanaugh etal 2001, Fecteau & Munoz 2003, Henderson & Hollingworth 2003, Irwin 1991)
  • image p278fig06.27 A surface-shroud resonance through the Where stream enables us to consciously see an object while a feature-category resonance into the What stream enables us to recognize it. Both kinds of resonances can synchronize via visual cortex so that we can know what an object is when we see it.
    || What kinds of resonances support knowing vs seeing? What stream [knowing, feature-prototype resonance], Where stream [seeing, surface-shroud resonance]
  • image p278fig06.28 If the feature-category resonances cannot form, say due to a lesion in IT, then a surface-shroud resonance can still support conscious seeing of an attended object, and looking at or reaching for it, even if the individual doing so knows nothing about the object, as occurs during visual agnosia. The surface-shroud resonance supports both spatial attention and releases commands that embody the intention to move towards the attended object.
    || What kinds of resonances support knowing vs seeing? visual agnosia: reaching without knowing Patient DF (Goodale etal 1991). Attention and intention both parietal cortical functions (Andersen, Essick, Siegel 1985; Gnadt, Andersen 1988; Snyder, Batista, Andersen 1997, 1998)
  • image p355fig10.02 Distinguishing processes of seeing vs knowing has been difficult because they interact so strongly.
    || Seeing vs. Knowing. Seeing and knowing [operate at different levels of the brain, use specialized circuits], but they [interact via feedback, use similar cortical designs, feedback is needed for conscious perception]. Cerebral Cortex: Seeing [V1-V4, MT-MST], Knowing [IT, PFC].
  • image p369fig10.19 Data from (Watanabe etal 2001) showing perceptual learning of the coherent motion direction, despite the lack of extra-foveal attention and awareness of the moving stimuli.
    || Unconscious perceptual learning of motion direction, % correct for two tests, compared to chance level results.
  • image p396fig11.35 Three properties of bipole boundary grouping in V2 can explain how boundaries oscillate in response to rivalry-inducing stimuli. Because all boundaries are invisible, however, these properties are not sufficient to generate a conscious percept of rivalrous surfaces.
    || 3 V2 boundary properties cause binocular rivalry. 1. Bipole grouping, 2. Orientational competition, 3. Activity-dependent habituation
  • image p419fig12.17 The auditory continuity illusion illustrates the ART Matching Rule at the level of auditory streaming. Its "backwards in time" effect of future context on past conscious perception is a signature of resonance.
    || Auditory continuity illusion. input, percept. Backwards in time - How does a future sound let past sound continue through noise? Resonance! - It takes a while to kick in. After it starts, a future tone can maintain it much more quickly. Why does this not happen if there is no noise? - ART Matching Rule! TD harmonic filter is modulatory without BU input. It cannot create something out of nothing.
  • image p436fig12.30 The conscious ARTWORD, or cARTWORD, laminar cortical speech model simulates how future context can disambiguate noisy past speech sounds in such a way that the completed percept is consciously heard to proceed from past to future as a feature-item-list resonant wave propagates through time.
    || cARTWORD: Laminar cortical model macrocircuit (Grossberg, Kazerounian 2011). Simulates PHONEMIC RESTORATION: Cognitive Working Memory (processed item sequences)- [excitatory-> inhibitory-> habituative-> adaptive filter with depletable synapses]-> Acoustic [item, feature]
  • image p455fig12.52 Simulation of cARTWORD dynamics in response to the complete list /1/-/2/-/3/. The relevant responses are surrounded by a red box.
    || Presentation of a normal sequence: input /1/-/2/-/3/. |c(i,1)-5| vs time (msec). List chunks select most predictive code. Order stored in WM layers. Resonant activity of /1/-/2/-/3/ in item and feature layers corresponds to conscious speech percept.
  • image p484fig13.04 The top-down feedback from the orbitofrontal cortex closes a feedback loop that supports a cognitive-emotional resonance. If this resonance can be sustained long enough, it enables us to have feelings at the same time that we experience the categories that caused them.
    || Cognitive-Emotional resonance. Basis of "core consciousness" and "the feeling of what happens". (Damasio 1999) derives heuristic version of CogEM model from his clinical data. Drive-> amygdala-> prefrontal cortex-> sensory cortex, resonance around the latter 3. How is this resonance maintained long enough to become conscious?
  • image p514fig13.44 Analog of the CogEM model in Figure 6.1 of (Damasio 1999).
    || (a) map of object X-> map of proto-self at inaugural instant-> [, map of proto-self modified]-> assembly of second-order map. (b) map of object X enhanced-> second-order map imaged.
  • image p105fig03.23 The pointillist painting A Sunday on La Grande Jatte by Georges Seurat illustrates how we group together both the large-scale coherence among the pixels of the painting and the small groupings that form around the individual dabs of color.
    ||
  • image p107fig03.25 The Roofs of Collioure by Matisse. See the text for details
    || p107c1h0.6 "... [Matisse] showed how patches of pure color, when laid down properly on a canvas, could be grouped by the brain into emergent boundaries, without the intervention of visible outlines. ... The trick was that these emergent boundaries, being invisible, or amodal, did not darken the colors in the surface representations. In this sense, Matisse intuitively realized that "all boundaries are invisible" through the masterful way in which he arranged his colors on canvas to generate boundaries that could support compelling surface representations. ..."
  • image p108fig03.27 Matisse
  • image p110fig03.32 Claude Monet
  • image p120fig03.43 Four paintings by Monet of the Rouen cathedral under different lighting conditions (top row) and their monochromatic versions (bottom row). See the text for details.
    || p119c2h0.25 "... Monet uses nearby colors that are nearly equiluminant, and sharp, high-contrast luminance defined edges are sparse. He hereby creates weaker boundary signals within and between the parts of many forms, and stronger boundary signals between the forms. This combination facilitates color spreading within the forms and better separation of brightness and color differences between forms. ... The grayscale versions of these paintings demonstrate the near equiluminance of the brushstrokes within forms, and places in which brightness and color differences significantly influence the groupings that differentiate between forms, including the differentiation between the cathedral and the sky. ..."
  • image p120fig03.44 The Rouen cathedral at sunset generates very different boundary webs than it does in full sunlight, as illustrated by Figure 3.45.
    || Rouen Cathedral at sunset (Monet 1892-1894).
    • Lighting almost equiluminant
    • Most boundaries are thus caused by color differences, not luminance differences
    • Fine architectural details are obscured, leading to...
    • Coarser and more uniform boundary webs, so...
    • Less depth in the painting.
  • image p121fig03.45 The Rouen cathedral in full sunlight.
    || Rouen Cathedral full sunlight (Monet 1892-1894).
    • Lighting is strongly non-uniform across most of the painting
    • Strong boundaries due to both luminance and color differences
    • Fine architectural details are much clearer, leading to...
    • Finer and more non-uniform boundary webs, so...
    • Much more detail and depth
  • image p121fig03.46 The Rouen cathedral in full sunlight contains T-Junctions that are not salient in the painting of it at sunset. These are among the painting
  • image p171fig04.49 An example of DaVinci stereopsis in which the left eye sees more of the wall between A and C than the right eye does. The region between B and C is seen only by the left eye because the nearer wall between C and D occludes it from the right eye view.
  • image p377fig11.11 DaVinci stereopsis phenomena occur when only one eye can receive visual inputs from part of a 3D scene due to occlusion by a nearer surface.
    || How does monocular information contribute to depth perception? DaVinci stereopsis (Gillam etal 1999). Only by utilizing monocular information can the visual system create the correct depth percept. [left, right] eye view
  • image p379fig11.13 How the 3D LAMINART model explains DaVinci stereopsis. All the stages of boundary and surface formation are color coded to clarify their explanation. Although each mechanism is very simple, when all of them act together, the correct depthful surface representation is generated. See the text for details.
    || DaVinci stereopsis (Nakayama, Shimojo 1990). An emergent property of the previous simple mechanisms working together. [V1 binocular boundaries -> V2 initial boundaries -> V2 final boundaries -> V4 surface] pair [Binocular match: boundaries of thick bar -> Add monocular boundaries along lines-of-sight -> Line-of-sight inhibition kills weaker vertical boundaries -> 3D surface percept not just a disparity match!] pair [Binocular match: right edge of thin and thick bars -> Strongest boundaries: binocular and monocular boundaries add -> Vertical boundaries from monocular left edge of thin bar survive -> Filling-in contained by connected boundaries]. cart [very near, near, fixation plane, far, very far]
  • image p380fig11.14 The model explanation of DaVinci stereopsis when the input stimuli have opposite contrast polarities.
    || Polarity-reversed Da Vinci stereopsis (Nakayama, Shimojo 1990). Same explanation! (... as Figure 11.13 ...) [V1 binocular boundaries -> V2 initial boundaries -> V2 final boundaries -> V4 surface] cart [very near, near, fixation plane, far, very far]
  • image p381fig11.15 The same model mechanisms explain the surface percept that is generated by the variant of DaVinci stereopsis that Gillam, Blackburn, and Nakayama studied in 1999.
    || DaVinci stereopsis (Gillam, Blackburn, Nakayama 1999). same model mechanisms. [V1 binocular boundaries -> V2 initial boundaries -> V2 final boundaries -> V4 surface] cart [very near, near, fixation plane, far, very far]
  • image p382fig11.16 The version of DaVinci stereopsis wherein three narrow rectangles are binocularly matched with one thick rectangle can also be explained in a similar way.
    || DaVinci stereopsis of [3 narrow, one thick] rectangles (Gillam, Blackburn, Nakayama 1999). [V1 binocular boundaries -> V2 initial boundaries -> V2 final boundaries -> V4 surface] cart [very near, near, fixation plane, far, very far]
  • p618 Chapter 17 A universal development code - Mental measurements embody universal laws of cell biology and physics
  • image p073fig02.19 Computing with cells: infinity does not exist in biology!
    || Computing in a bounded activity domain, Gedanken experiment (Grossberg 1970). Vm sub-areas [xm, B - xm], I (all m), m = [1, i, B].
    B | excitable sites
    xi(t) | excited sites (activity, potential)
    B - xi(t) | unexcited sites
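    The bookkeeping above (B excitable sites, xi(t) excited, B - xi(t) unexcited) leads directly to a shunting equation in which an input I can only excite currently unexcited sites, so activity stays bounded no matter how intense the input. A minimal sketch, assuming the simplest excitatory case dx/dt = -A*x + (B - x)*I with illustrative constants:

      # Minimal sketch: shunting activity equation dx/dt = -A*x + (B - x)*I.
      # The (B - x) gain means input I only excites unexcited sites, so x
      # saturates below B however large I becomes. Constants are illustrative.
      def simulate_shunting(A=1.0, B=1.0, I=10.0, dt=0.0005, steps=10000):
          x = 0.0
          for _ in range(steps):
              x += dt * (-A * x + (B - x) * I)
          return x

      print(simulate_shunting(I=10.0))    # ~0.909 = B*I/(A+I)
      print(simulate_shunting(I=1000.0))  # ~0.999: pinned just below B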
  • image p082fig02.37 My models begin with behavioral data, since brains are designed to achieve behavioral success. The text explains how models evolve in stages, through a process of successive refinements, or unlumpings. These unlumpings together carry out a kind of conceptual evolution, leading to models that can explain and predict ever larger psychological and neurobiological databases.
    || Modelling method and cycle.
    Behavioral data -(art of modeling)-> Design principles <- Neural data <-(brain predictions)- Mathematical model and analysis -(behavioral predictions)-> Behavioural data
    Operationalizes "proper level of abstraction"
    Operationalizes that you cannot "derive a brain" in one step.
  • image p501fig13.26 A simple differential equation describes the processes of transmitter accumulation and release that do their best, at a finite rate, to carry out unbiased transduction.
    || Transmitter accumulation and release. Transmitter y cannot be restored at an infinite rate: T = S*y, y ~= B. Differential equation: d[dt: y] = A*(B - y) - S*y = accumulate - release. Transmitter y tries to recover to ensure unbiased transduction. What if it falls behind? Evolution has exploited the good properties that happen then.
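    The equation above can be integrated directly to watch habituation happen: the gated signal T = S*y jumps when S turns on, then sags as y depletes toward its equilibrium y* = A*B/(A + S). A minimal Python sketch with illustrative constants:

      # Minimal sketch: habituative transmitter gate dy/dt = A*(B - y) - S*y,
      # with gated output T = S*y. Constants are illustrative, not the book's.
      A, B, dt = 0.1, 1.0, 0.01
      y, trace = B, []
      for step in range(4000):
          S = 5.0 if step >= 1000 else 0.0   # signal S switches on mid-run
          y += dt * (A * (B - y) - S * y)
          trace.append(S * y)
      # At onset T jumps to nearly S*B = 5.0, then habituates toward the
      # equilibrium S*A*B/(A + S) ~ 0.098 as y is depleted faster than it recovers.
      print(max(trace), trace[-1])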
  • Grossberg 2021 p081c2h0.66 sub-section: How does one evolve a computational brain?
    The above discussion illustrates that no single step of theoretical derivation can derive a whole brain. One needs a method for deriving a brain in stages, or cycles, much as evolution has incrementally discovered ever more complex brains over many thousands of years. The following theoretical method has been successfully applied many times since I first used it in 1957. It embodies a kind of conceptual evolutionary process for deriving a brain.

    Because "brain evolution needs to achieve behavioural success", we need to start with data that embodiey indices of behavioral success. That is why, as illustrated in Figure 2.37 Modelling method and cycle, one starts with Behavioral Data from scores or hundreds of psychological experiments. These data are analyszed as the result of an individual adapting autonomously in real time to a changing world. This is the Arty of Modeling. It requires that one be able to infer from static data curves the dynamical processes that control individual behaviors occuring in real time. One of the hardest things that I teach to my students to do is "how to think in real time" to be able to carry out this speculative leap.

    Properly carried out, this analysis leads to the discovery of new Design Principles that are embodied by these behavioral processes. The Design Principles highlight the functional meaning of the data, and clarify how individual behaviors occurring in real time give rise to these static data curves.

    These principles are then converted into the simplest Mathematical Model using a method of minimal anatomies, which is a form of Occam's Razor.
  • image p248fig05.49 This circuit of the LAMINART model helps to explain properties of Up and Down states during slow wave sleep, and how disturbances in ACh dynamics can disrupt them.
    ||
  • image p278fig06.28 If the feature-category resonances cannot form, say due to a lesion in IT, then a surface-shroud resonance can still support conscious seeing of an attended object, and looking at or reaching for it, even if the individual doing so knows nothing about the object, as occurs during visual agnosia. The surface-shroud resonance supports both spatial attention and releases commands that embody the intention to move towards the attended object.
    || What kinds of resonances support knowing vs seeing? visual agnosia: reaching without knowing Patient DF (Goodale etal 1991). Attention and intention both parietal cortical functions (Andersen, Essick, Siegel 1985; Gnadt, Andersen 1988; Snyder, Batista, Andersen 1997, 1998)
  • image p557fig15.26 Brain regions and processes that contribute to autistic behavioral symptoms when they become imbalanced in prescribed ways.
    || Basal Ganglia prolonged gate opening <-> { Amygdala emotionally depressed-> [hippocampus- hyperspecific learning; Cerebellum- adaptive timing fails; hypofrontal blocking fails, no Theory of Mind]-> Neocortex; Neocortex- rewards not received-> Amygdala}.
  • image p189fig05.04 The hippocampus is one of several brain regions that are important in learning and remembering about objects and events that we experience throughout life. The book will describe several hippocampal processes that contribute to this achievement in different ways.
    || hypothalamic nuclei, amygdala, hippocampus, cingulate gyrus, corpus callosum, thalamus
  • image p228fig05.37 A macrocircuit of the neurotrophic Spectrally Timed ART, or nSTART, model. I developed nSTART with my PhD student Daniel Franklin. It proposes how adaptively timed learning in the hippocampus, bolstered by Brain Derived Neurotrophic Factor, or BDNF, helps to ensure normal memory consolidation.
    || habituative gates, CS, US, Thalamus (sensory cortex, category learning, conditioned reinforcer learning, adaptively timed learning and BDNF), Amygdala (incentive motivation learning), Hippocampus (BDNF), Prefrontal Cortex (attention), Pontine nuclei, Cerebellum (adaptively timed motor learning)
  • image p233fig05.42 Mismatch-induced beta oscillations have been reported in at least three parts of the brain: V1, V4, and hippocampus. Although there may be other reasons for beta oscillations in the brain, those that are caused by a mismatch should be studied in concert with the gamma oscillations that occur during a good enough match. See the text for details.
    || Is there evidence for the [gamma, beta] prediction? Yes, in at least three parts of the brain (Buffalo EA, Fries P, Landman R, Buschman TJ, Desimone R 2011, PNAS 108, 11262-11267). Does this difference in average oscillation frequencies in the superficial and deep layers reflect layer 4 reset? Superficial recording γ (gamma), Deep recording β (beta) (Berke etal 2008, hippocampus; Buschman and Miller 2009, FEF)
  • image p520fig14.02 Macrocircuit of the main brain regions, and connections between them, that are modelled in the unified predictive Adaptive Resonance Theory (pART) of cognitive-emotional and working memory dynamics. Abbreviations in red denote brain regions used in cognitive-emotional dynamics. Those in green denote brain regions used in working memory dynamics. Black abbreviations denote brain regions that carry out visual perception, learning and recognition of visual object categories, and motion perception, spatial representation and target tracking. Arrows denote non-adaptive excitatory synapses. Hemidiscs denote adaptive excitatory synapses. Many adaptive synapses are bidirectional, thereby supporting synchronous resonant dynamics among multiple cortical regions. The output signals from the basal ganglia that regulate reinforcement learning and gating of multiple cortical areas are not shown. Also not shown are output signals from cortical areas to motor responses. V1: striate, or primary, visual cortex; V2 and V4: areas of prestriate visual cortex; MT: Middle Temporal cortex; MST: Medial Superior Temporal area; ITp: posterior InferoTemporal cortex; ITa: anterior InferoTemporal cortex; PPC: Posterior Parietal Cortex; LIP: Lateral IntraParietal area; VPA: Ventral PreArcuate gyrus; FEF: Frontal Eye Fields; PHC: ParaHippocampal Cortex; DLPFC: DorsoLateral PreFrontal Cortex; HIPPO: hippocampus; LH: Lateral Hypothalamus; BG: Basal Ganglia; AMYG: AMYGdala; OFC: OrbitoFrontal Cortex; PRC: PeriRhinal Cortex; VPS: Ventral bank of the Principal Sulcus; VLPFC: VentroLateral PreFrontal Cortex. See the text for further details.
    ||
  • image p532fig14.08 Macrocircuit of the ARTSCENE Search neural model for learning to search for desired objects by using the sequences of already experienced objects and their locations to predict what and where the desired object is. V1 = First visual area or primary visual cortex; V2 = Second visual area; V4 = Fourth visual area; PPC = Posterior Parietal Cortex; ITp = posterior InferoTemporal cortex; ITa = anterior InferoTemporal cortex; MTL = Medial Temporal Lobe; PHC = ParaHippoCampal cortex; PRC = PeriRhinal Cortex; PFC = PreFrontal Cortex; DLPFC = DorsoLateral PreFrontal Cortex; VPFC = Ventral PFC; SC = Superior Colliculus.
    ||
  • image p541fig15.02 The neurotrophic Spectrally Timed Adaptive Resonance Theory, or nSTART, model of (Franklin, Grossberg 2017) includes hippocampus to enable adaptively timed learning that can bridge a trace conditioning gap, or other temporal gap between CS and US.
    || Hippocampus can sustain a Cognitive-Emotional resonance that can support "the feeling of what happens" and knowing what event caused that feeling. [CS, US] -> Sensory Cortex (SC) <- motivational attention <-> category learning -> Prefrontal Cortex (PFC). SC conditioned reinforcement learning-> Amygdala (cannot bridge the temporal gap) incentive motivational learning-> PFC. SC adaptively timed learning and BDNF-> Hippocampus (can bridge the temporal gap) BDNF-> PFC. PFC adaptively timed motor learning-> cerebellum.
  • image p543fig15.06 The circuit between dentate granule cells and CA1 hippocampal pyramid cells seems to compute spectrally timed responses. See the text for details.
    || Hippocampal interpretation. 1. Dentate granule cells (Berger, Berry, Thompson 1986): "increasing firing...in the CS period...the latency...was constant". 2. Pyramidal cells: "Temporal model" Dentate granule cells-> CA3 pyramids. 3. Convergence (Squire etal 1989): 1e6 granule cells, 1.6e5 CA3 pyramids. 80-to-1 (ri).
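    The word "spectrally" in spectrally timed learning can be made concrete: a population of cells responding at a spectrum of rates covers a range of peak delays, and Hebbian pairing with the US strengthens exactly those cells whose activity peaks near the CS-US interval, so the weighted population output becomes adaptively timed. A minimal sketch, using Gaussian-shaped unit responses as an illustrative stand-in for the model's actual spectral activation dynamics:

      # Minimal sketch of spectral timing: units peak at a spectrum of delays,
      # and Hebbian pairing with the US strengthens units active at the ISI.
      # Gaussian unit responses stand in for the model's actual dynamics.
      import numpy as np

      t = np.linspace(0.0, 2.0, 200)             # time since CS onset (s)
      peaks = np.linspace(0.1, 1.9, 40)          # spectrum of unit peak delays
      units = np.exp(-0.5 * ((t[:, None] - peaks[None, :]) / 0.15) ** 2)

      isi = 0.7                                  # US arrives 0.7 s after CS onset
      us = np.exp(-0.5 * ((t - isi) / 0.05) ** 2)
      weights = units.T @ us                     # Hebbian: unit activity x US
      output = units @ weights                   # adaptively timed population output
      print(t[np.argmax(output)])                # ~0.7: output peaks at the ISI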
  • image p549fig15.19 How the adaptively timed hippocampal spectrum T inhibits (red arrow) the orienting system A as motivated attention in orbitofrontal cortex Si(2) peaks at the ISI.
    || Conditioning, Attention, and Timing circuit. Hippocampus spectrum-> Amygdala orienting system-> neocortex motivational attention. Adaptive timing inhibits orienting system and maintains adaptively timed Motivated Attention on the CS.
  • image p557fig15.26 Brain regions and processes that contribute to autistic behavioral symptoms when they become imbalanced in prescribed ways.
    || Basal Ganglia prolonged gate opening <-> { Amygdala emotionally depressed-> [hippocampus- hyperspecific learning; Cerebellum- adaptive timing fails; hypofrontal blocking fails, no Theory of Mind]-> Neocortex; Neocortex- rewards not received-> Amygdala}.
  • image p573fig16.01 The experimental chamber (A) and neurophysiological recordings from a rat hippocampus (B) that led to the discovery of place cells. See the text for details.
    ||
  • image p575fig16.03 As a rat navigates in its experimental chamber (black curves), neurophysiological recordings disclose the firing patterns (in red) of (a) a hippocampal place cell and (b) an entorhinal grid cell.
    ||
  • image p578fig16.04 Cross-sections of the hippocampal regions and the inputs to them. See the text for details.
    || EC-> CA1-> CA3-> DG. Layers [V/VI, III, II].
  • image p583fig16.10 The GRIDSmap model is embedded into a more complete representation of the processing stages from receipt of angular head velocity and linear velocity signals to this learning of place cells.
    || GRIDSmap. Pre-wired 2D stripe cells, learns 2D grid cells. vestibular cells [angular head velocity-> head direction cells, linear velocity]-> stripe cells- small scale 1D periodic spatial code (ECIII)-> SOM grid cells entorhinal cortex- small scale 2D periodic spatial scale-> SOM place cells hippocampal cortex- large scale 2D spatial code (dentate/CA3). Unified hierarchy of SOMs.
  • image p600fig16.36 The entorhinal-hippocampal system has properties of an ART spatial category learning system, with hippocampal place cells as the spatial categories. See the text for details.
    || Entorhinal-hippocampal interactions as an ART system. Hippocampal place cells as spatial categories. Angular head velocity-> head direction cells-> stripe cells- small scale 1D periodic code (ECIII) SOM-> grid cells- small scale 2D periodic code (ECII) SOM-> place cells- larger scale spatial map (DG/CA3)-> place cells (CA1)-> conjunctive-coding cells (EC V/VI)-> top-down feedback back to stripe cells- small scale 1D periodic code (ECIII). stripe cells- small scale 1D periodic code (ECIII)-> place cells (CA1).
  • image p602fig16.37 Data showing the effect of hippocampal inactivation by muscimol on grid cell firing before, during, and six hours after the muscimol, reading from left to right.
    || Hippocampal inactivation disrupts grid cells (Bonnevie etal 2013). muscimol inactivation. spikes on trajectory: [before, after min [6-20, 20-40, 40-60, 6h]]. rate map (Hz) [18.6, 11.4, 9.5, 6.7, 10.8]. spatial autocorrelogram g=[1.12, 0.05, -0.34, 0.09, 1.27].
  • image p603fig16.38 Role of hippocampal feedback in maintaining grid fields. (a) Data showing the effect of hippocampal inactivation before and during muscimol inhibition of hippocampal cells, as in Figure 16.37. (b) Model simulation with normal grid fields. (c) Model simulation that emulates the effect of hippocampal inhibition on grid fields.
    || (a) Data: hippocampal inactivation [before, after] cart [spikes on trajectory (p: [18.6, 6.7] Hz), spatial autocorrelogram (g= [1.12, 0.09])]. (b) Model: noise-free path integration, [spikes on trajectory (p: 14.56 Hz), rate map, spatial autocorrelogram (g= 1.41), dynamic autocorrelogram (g=0.6)]. (c) Model: noisy path integration + non-specific tonic inhibition, [spikes on trajectory (p: 11.33 Hz), rate map, spatial autocorrelogram (g= 0.05), dynamic autocorrelogram (g=0.047)].
  • image p617fig16.50 The perirhinal and parahippocampal cortices enable adaptively timed reinforcement learning and spatial navigational processes that are modeled by Spectral Spacing models in the What and Where cortical streams, respectively, to be fused in the hippocampus.
    || What and Where inputs to the hippocampus (Diana, Yonelinas, Ranganath 2007). Adaptively timed conditioning and spatial navigation. Hippocampus <-> Entorhinal Cortex <-> [Perirhinal Cortex <-> what, Parahippocampal Cortex <-> where].
  • p190 Howell: [neural microcircuits, modal architectures] used in ART -
    bottom-up filters | top-down expectations | purpose
    instar learning | outstar learning | p200fig05.13 Expectations focus attention: [in, out]star often used to learn the adaptive weights; top-down expectations select, amplify, and synchronize expected patterns of critical features, while suppressing unexpected features (a minimal code sketch of these two learning rules follows the figure captions below)
    LGN Lateral Geniculate Nucleus | V1 cortical area | p192fig05.06 focus attention on expected critical features
    EC-III entorhinal stripe cells | CA1 hippocampal place cells | p600fig16.36 entorhinal-hippocampal system has properties of an ART spatial category learning system, with hippocampal place cells as the spatial categories
    auditory orienting arousal | auditory category | p215fig05.28 How a mismatch between bottom-up and top-down patterns can trigger activation of the orienting system A and, with it, a burst of nonspecific arousal to the category level. ART Matching Rule: TD mismatch can suppress a part of the F1 STM pattern; F2 is reset if degree of match < vigilance
    auditory stream with/without [silence, noise] gaps | perceived sound continues? | p419fig12.17 The auditory continuity illusion: backwards in time - how does a future sound let a past sound continue through noise? Resonance!
    visual perception, learning and recognition of visual object categories | motion perception, spatial representation and target tracking | p520fig14.02 pART predictive Adaptive Resonance Theory. Many adaptive synapses are bidirectional, thereby supporting synchronous resonant dynamics among multiple cortical regions. The output signals from the basal ganglia that regulate reinforcement learning and gating of multiple cortical areas are not shown. (red - cognitive-emotional dynamics; green - working memory dynamics; black - see [bottom-up, top-down] lists)
    EEG with mismatch, arousal, and STM reset events | expected [P120, N200, P300] event-related potentials (ERPs) | p211fig05.21 Sequences of P120, N200, and P300 event-related potentials (ERPs) occur during oddball learning EEG experiments under conditions that ART predicted should occur during sequences of mismatch, arousal, and STM reset events, respectively
    Cognitive | Emotional | p541fig15.02 nSTART neurotrophic Spectrally Timed Adaptive Resonance Theory: hippocampus can sustain a cognitive-emotional resonance that can support "the feeling of what happens" and knowing what event caused that feeling; hippocampus enables adaptively timed learning that can bridge a trace conditioning gap, or other temporal gap between Conditioned Stimulus (CS) and Unconditioned Stimulus (US)

    background colours in the table signify :
  • image p419fig12.17 The auditory continuity illusion illustrates the ART Matching Rule at the level of auditory streaming. Its "backwards in time" effect of future context on past conscious perception is a signature of resonance.
    || Auditory continuity illusion. input, percept. Backwards in time - How does a future sound let past sound continue through noise? Resonance! - It takes a while to kick in. After it starts, a future tone can maintain it much more quickly. Why does this not happen if there is no noise? - ART Matching Rule! TD harmonic filter is modulatory without BU input. It cannot create something out of nothing.
  • image p420fig12.18 The ARTSTREAM model explains and simulates the auditory continuity illusion as an example of a spectral-pitch resonance. Interactions of ART Matching Rule and asymmetric competition mechanisms in cortical strip maps explain how the tone selects the consistent frequency from the noise in its own stream while separating the rest of the noise into another stream.
    || ARTSTREAM model (Grossberg 1999; Grossberg, Govindarajan, Wyse, Cohen 2004). SPINET. Frequency and pitch strips. Bottom Up (BU) harmonic sieve. Top Down (TD) harmonic ART matching. Exclusive allocation. Learn pitch categories based on early harmonic processing. A stream is a Spectral-Pitch Resonance!
  • image p425fig12.23 ARTSTREAM simulations of the auditory continuity illusion and other streaming properties (left column, top row). When two tones are separated by silence (Input), a percept of silence also separates them in a spectral-pitch resonance. (left column, bottom row). When two tones are separated by broadband noise, the percept of tone continues through the noise in one stream (stream 1) while the remainder of the noise occurs in a different stream (stream 2). (right column) Some of the other streaming properties that have been simulated by the ARTSTREAM model.
    || Auditory continuity does not occur without noise. Auditory continuity in noise. Other simulated streaming data.
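    A minimal sketch of the instar/outstar pairing in the first row of the table above: instar weights learn the bottom-up input pattern at the winning category, while outstar weights learn the top-down expectation that the category reads out. The gating by category activity is the essential shared feature; dimensions, learning rate, and the winner-take-all choice are illustrative assumptions.

      # Minimal sketch of gated instar (bottom-up) and outstar (top-down) learning:
      # both rules move weights toward the current pattern, gated by the
      # winning category's activity. Sizes and rates are illustrative.
      import numpy as np

      rng = np.random.default_rng(0)
      n_features, n_categories, lr = 5, 3, 0.5
      W_in = rng.random((n_categories, n_features))    # bottom-up adaptive filter
      W_out = rng.random((n_categories, n_features))   # top-down expectation

      x = np.array([1.0, 0.8, 0.0, 0.0, 0.2])          # input feature pattern (F1)
      for _ in range(20):
          j = int(np.argmax(W_in @ x))                 # winning category (F2)
          W_in[j] += lr * (x - W_in[j])                # instar: tune filter to input
          W_out[j] += lr * (x - W_out[j])              # outstar: learn expectation
      # W_out[j] -> x, so top-down readout of category j now selects and
      # amplifies the expected critical features of the learned pattern.
      print(np.allclose(W_out[int(np.argmax(W_in @ x))], x, atol=1e-3))  # True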
  • p404 Chapter 12 From seeing and reaching to hearing and speaking - Circular reaction, streaming, working memory, chunking, and number
  • image p030tbl01.02 The What and Where cortical processing streams obey complementary laws. These laws enable the What stream to rapidly and stably learn invariant object categories without experiencing catastrophic forgetting, while the Where stream learns labile spatial and action representations to control actions that are aimed towards these objects.
    ||
  • WHAT | WHERE
    spatially-invariant object learning and recognition | spatially-variant reaching and movement
    fast learning without catastrophic forgetting | continually update sensory-motor maps and gains
    IT InferoTemporal Cortex | PPC Posterior Parietal Cortex
    (laws) | What | Where
    matching | excitatory | inhibitory
    learning | match | mismatch
  • image p032fig01.21 At least three parallel visual cortical streams respond to visual inputs that reach the retina. Two parvocellular streams process visual surfaces (blob stream) and visual boundaries (interblob stream). The magnocellular stream processes visual motion.
    || [Retina, LGNs, V[1,2,3,4], MT] to [What- inferotemporal areas, Where- parietal areas]: visual parallel streams [2x blob, 1x bound]
  • image p039tbl01.03 The link between consciousness and movement
    ||
    VISUAL | seeing, knowing, and reaching
    AUDITORY | hearing, knowing, and speaking
    EMOTIONAL | feeling, knowing, and acting
  • image p092fig03.05 A cross-section of the retinal layer. Note that light stimuli need to go through all retinal layers before they reach the photoreceptor layer at which the light signals are registered.
    || light stimuli ->
    retinal layers | cellular composition
    inner limiting membrane |
    retinal nerve fibre | ganglion nerve fibres
    ganglion cell | ganglion
    inner plexiform | amacrine
    inner nuclear | horizontal
    outer plexiform |
    outer limiting membrane |
    photoreceptor | rod
    photoreceptor | cone
    retinal pigment epithelium |
    <- signal transduction. http://brain.oxfordjournals.org/content/early/2011/01/20/brain.awq346
  • image p232fig05.41 (a)-(c). The sequence of interlaminar events that SMART predicts during a mismatch reset. (d) Some of the compatible neurophysiological data.
    || Mismatch causes layer 5 dendritic spikes that trigger reset. (a) Arousal causes increase in nonspecific thalamic nuclei firing rate and layer 5 dendritic and later somatic spikes (Larkum and Zhu 2002, Williams and Stuart 1999) (b) Layer 5 spikes reach layer 4 via layer 6i and inhibitory neurons (Lund and Boothe 1975, Gilbert and Wiesel 1979) (c) habituative neurotransmitters in layer 6i shift the balance of active cells in layer 4 (Grossberg 1972, 1976) (d) Dendritic stimulation fires layer 5 (Larkum and Zhu 2002) stimulation apical dendrites of nonspecific thalamus
  • image p278fig06.28 If the feature-category resonances cannot form, say due to a lesion in IT, then a surface-shroud resonance can still support conscious seeing of an attended object, and looking at or reaching for it, even if the individual doing so knows nothing about the object, as occurs during visual agnosia. The surface-shroud resonance supports both spatial attention and releases commands that embody the intention to move towards the attended object.
    || What kinds of resonances support knowing vs seeing? visual agnosia: reaching without knowing Patient DF (Goodale etal 1991). Attention and intention both parietal cortical functions (Andersen, Essick, Siegel 1985; Gnadt, Andersen 1988; Snyder, Batista, Andersen 1997, 1998)
  • image p303fig08.20 The G-wave speeds up with the distance between flashes at a fixed delay, and has a consistent motion across multiple spatial scales.
    || G-wave properties (Grossberg 1977). Theorem 2 (Equal half-time property) The time at which the motion signal reaches position w=L/2. Apparent motion speed-up with distance: this half-time is independent of the distance L between the two flashes. Consistent motion across scales: half-time is independent of the scale size K. Method of proof: elementary algebra and calculus (Grossberg, Rudd 1989 appendix)
  • image p304fig08.21 A computer simulation of the equal half-time property whereby the apparent motions within different scales that respond to the same flashes all reach the half-way point in the motion trajectory at the same time.
    || Equal half-time property: how multiple scales cooperate to generate motion percept. Travelling waves from Gaussian filters of different sizes bridge the same distance in comparable time. The time needed to bridge half the distance between flashes is the same.
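    The equal half-time property is easy to check numerically. Assuming, as an illustrative stand-in for the Grossberg-Rudd dynamics, that the first flash's activity decays exponentially while the second's grows, each blurred by a Gaussian of scale K, the location of the maximum of the summed waves crosses the midpoint w = L/2 when the two amplitudes are equal, at t = ln(2)/a, independent of both L and K:

      # Minimal sketch of the equal half-time property. Illustrative dynamics:
      # flash-1 amplitude decays as exp(-a*t), flash-2 grows as 1 - exp(-a*t),
      # each blurred by a Gaussian receptive field of scale K.
      import numpy as np

      def half_time(L, K, a=1.0):
          w = np.linspace(-2 * L, 3 * L, 4001)
          for t in np.linspace(0.0, 3.0, 3001):
              g1 = np.exp(-a * t) * np.exp(-(w ** 2) / (2 * K ** 2))
              g2 = (1 - np.exp(-a * t)) * np.exp(-((w - L) ** 2) / (2 * K ** 2))
              if w[np.argmax(g1 + g2)] >= L / 2:   # peak has reached the midpoint
                  return t

      for L, K in [(1.0, 0.5), (4.0, 0.5), (4.0, 1.5)]:
          print(L, K, half_time(L, K))   # ~0.693 = ln(2)/a in every case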
  • image p335fig08.61 Behavioral data (left image) and simulation (right image) about speed in correct and error trials of the RT task. See text for details.
    || Behavioral data: speed, correct and error trials (RT task) (Roitman, Shadlen 2002). More coherence in the motion causes faster reaction time.
  • image p350fig09.22 How the negative Gaussian of an obstacle causes a peak shift to avoid the obstacle without losing sight of how to reach the goal.
    || Steering dynamics: obstacle avoidance. body-centered coordinates [obstacle, goal, heading] -> steering
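    The peak shift described above is just the sum of a positive Gaussian centered on the goal direction and a negative Gaussian centered on the obstacle direction: the maximum of the sum shifts away from the obstacle while staying near the goal. A minimal sketch; angles, widths, and gains are illustrative:

      # Minimal sketch of steering by peak shift: a positive Gaussian attractor
      # at the goal direction plus a negative Gaussian repeller at the obstacle
      # direction, in body-centered heading coordinates. Values are illustrative.
      import numpy as np

      headings = np.linspace(-90.0, 90.0, 721)   # candidate headings (degrees)
      goal, obstacle = 10.0, 0.0                 # goal lies just right of an obstacle

      attract = np.exp(-0.5 * ((headings - goal) / 30.0) ** 2)
      repel = 0.8 * np.exp(-0.5 * ((headings - obstacle) / 15.0) ** 2)
      steering = attract - repel

      # The peak shifts beyond the goal, away from the obstacle (~30 deg),
      # yet the goal is not lost: remove the repeller and the peak returns to 10.
      print(headings[np.argmax(steering)])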
  • image p351fig09.25 By the time MT+ is reached, directional transient cells and directional filters have begun to extract more global directional information from the image.
    || MT+ computes global motion estimate. Estimate global motion from noisy local motion estimates.
  • image p414fig12.11 Neurophysiological data from cortical areas 4 and 5 (every other column) and simulations thereof (other columns) during a reach.
    || activation vs time. (a) area 4 phasic RT (IFV) (b) area 4 tonic (OPV) (c) area 4 phasic-tonic (OFPV) (d) area 4 phasic MT (DVV) (e) area 5 phasic (DV) (f) area 5 tonic (PPV)
  • image p416fig12.13 The DIRECT model learns, using a circular reaction that is energized by an Endogenous Random Generator, or ERG, to make motor-equivalent volitionally-activated reaches. This circular reaction learns a spatial representation of a target in space. It can hereby make accurate reaches with clamped joints and on its first try using a tool under visual guidance; see Figure 12.16.
    || DIRECT model (Bullock, Grossberg, Guenther 1993). learns by circular reaction. learns spatial representation to mediate between vision and action. motor-equivalent reaching. can reach target with clamped joints. can reach target with a TOOL on the first try under visual guidance. How did tool use arise?!
  • image p416fig12.14 Computer simulations of DIRECT reaches with (b) a tool, (c) a clamped elbow, and (d) with a blindfold, among other constraints.
    || Computer simulations of DIRECT reaches [unconstrained, with TOOL, elbow clamped at 140°, blindfolded]
  • image p417fig12.15 The DIRECT and DIVA models have homologous circuits to learn and control motor-equivalent reaching and speaking, with tool use and coarticulation as resulting properties. See the text for why.
    || From Seeing and Reaching to Hearing and Speaking, Circular reactions (Piaget 1945, 1951, 1952). Homologous circuits for development and learning of motor-equivalent REACHING and SPEAKING. DIRECT TOOL use (Bullock, Grossberg, Guenther 1993), DIVA Coarticulation (Guenther 1995)
  • image p428fig12.25 (left architecture) Auditory-articulatory feedback loop whereby babbled sounds activate learning in an imitative map that is later used to learn to reproduce the sounds of other speakers. An articulatory-to-auditory expectation renders learning possible by making the auditory and motor data dimensionally consistent, as in the motor theory of speech. (right architecture) Parallel streams in the ARTSPEECH model for learning speaker-independent speech and language meaning, including a mechanism for speaker normalization (right cortical stream) and for learning speaker-dependent vocalic qualities (left cortical stream).
    || left: Speaker-dependent vocalic qualities; right: Speaker-independent speech and language meaning
  • image p430fig12.26 The NormNet model shows how speaker normalization can be achieved using specializations of the same mechanisms that create auditory streams. See the text for how.
    || [Anchor vs Stream] log frequency map. -> diagonals-> Speaker-independent acoustic item information-> [BU adaptive filter, TD learned expectation]-> learned item recognition categories
  • image p446fig12.44 (left column, top row) LIST PARSE can model linguistic data from human subjects. In this figure, model parameters are fixed to enable a close fit to data about error-type distributions in immediate free recall experiments, notably transposition errors. (right column, top row) Simulation and data showing bowing of the serial position curve, including an extended primacy gradient. (left column, bottom row) The simulation curve overlays data about list length effects, notably the increasing recall difficulty of longer lists during immediate serial recall (ISR). (right column, bottom row) Simulation (bottom image) and data (top image) of the limited temporal extent for recall.
    || (1. TL) Error-type distributions in immediate serial recall (Henson etal 1996). % occurrence vs serial position. Graph convention: Data- dashed lines; Simulations- solid lines. Six letter visual ISR. Order errors- transpositions of neighboring items are the most common. Model explanation: Noisy activation levels change relative order in primacy gradient. Similar activation of neighboring items most susceptible to noise. Model parameters fitted on these data. (2. TR) Bowing of serial position curve (Cowan etal 1999). % correct vs serial position. Auditory ISR with various list lengths (graphs shifted rightward): For [, sub-]span lists- extended primacy, with one (or two) item recency; Auditory presentation- enhanced performance for last items. LIST PARSE: End effects- first and last items half as many members; Echoic memory- last presented item retained in separate store. (3. BL) List length effects, circles (Crannell, Parrish 1968), squares (Baddeley, Hitch 1975), solid line- simulation. % list correct vs list length. Variable list length ISR: longer lists are more difficult to recall. LIST PARSE: More items- closer activation levels and lower absolute activity level with enough inputs; Noise is more likely to produce order errors; Activity levels more likely to drop below threshold. (4. BR) Limited temporal extent for recall (Murdock 1961). % recalled vs retention interval (s). ISR task with distractor-filled retention intervals (to prevent rehearsal): Increasing retention interval- decreases probability of recalling list correctly; Load dependence- longer lists more affected by delays; Performance plateau- subjects reach apparent asymptote. LIST PARSE: Increased convergence of activities with time; loss of order information.
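    The transposition-error account in the top-left panel follows from two ingredients: serial order stored as a primacy gradient of activities, and recall by repeatedly selecting the most active item. If independent noise perturbs the near-equal activities of neighboring items, adjacent swaps are by far the most likely error, as in the data. A minimal sketch; gradient spacing and noise level are illustrative:

      # Minimal sketch: order stored as a primacy gradient, recall by repeatedly
      # taking the most active item. Noise on near-equal neighboring activities
      # makes adjacent transpositions the dominant error type.
      import numpy as np

      rng = np.random.default_rng(1)
      gradient = np.array([1.0, 0.9, 0.8, 0.7, 0.6, 0.5])   # primacy gradient

      displacement_counts = {}
      for _ in range(10000):
          acts = gradient + rng.normal(0.0, 0.06, gradient.size)
          order = np.argsort(-acts)               # report items, most active first
          for pos, item in enumerate(order):
              if item != pos:
                  d = abs(int(item) - pos)
                  displacement_counts[d] = displacement_counts.get(d, 0) + 1
      print(displacement_counts)   # displacement 1 (neighbor swaps) dominates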
  • image p231fig05.39 The SMART hypothesis testing and learning cycle predicts that vigilance increases when a mismatch in subcortical regions like the nonspecific thalamus activates the nucleus basalis of Meynert which, in turn, broadcasts a burst of the neurotransmitter acetylcholine, or ACh, to deeper cortical layers. Due to the way in which LAMINART proposes that cortical matching and mismatching occurs, this ACh burst can increase vigilance and thereby trigger a memory search. See the text for details.
    || [BU input, [, non]specific thalamic nucleus, thalamic reticulate nucleus, neocortical laminar circuit] cart [Arousal, Reset, Search, Vigilance]
  • image p232fig05.41 (a)-(c). The sequence of interlaminar events that SMART predicts during a mismatch reset. (d) Some of the compatible neurophysiological data.
    || Mismatch causes layer 5 dendritic spikes that trigger reset. (a) Arousal causes increase in nonspecific thalamic nuclei firing rate and layer 5 dendritic and later somatic spikes (Larkum and Zhu 2002, Williams and Stuart 1999) (b) Layer 5 spikes reach layer 4 via layer 6i and inhibitory neurons (Lund and Boothe 1975, Gilbert and Wiesel 1979) (c) habituative neurotransmitters in layer 6i shift the balance of active cells in layer 4 (Grossberg 1972, 1976) (d) Dendritic stimulation fires layer 5 (Larkum and Zhu 2002) stimulation apical dendrites of nonspecific thalamus
  • image p461fig12.58 The lisTELOS model built upon key processes that were earlier modeled by the TELOS model. See the text for details.
    || TELOS model (Brown, Bullock, Grossberg 1999, 2004). shows [BG nigro-[thalamic, collicular], FEF, ITa, PFC, PNR-THAL, PPC, SEF, SC, V1, V4/ITp, Visual Cortex input] and [GABA].
  • image p523fig14.03 (a) The MOTIVATOR neural model generalizes CogEM by also including the basal ganglia. It can hereby explain and simulate complementary functions of the amygdala and basal ganglia (SNc) during conditioning and learned performance. The basal ganglia generate Now Print signals in response to unexpected rewards. These signals modulate learning of new associations in many brain regions. The amygdala supports motivated attention to trigger actions that are expected to occur in response to conditioned or unconditioned stimuli. Object Categories represent visual or gustatory inputs in anterior inferotemporal (ITA) and rhinal (RHIN) cortices, respectively. Value Categories represent the value of anticipated outcomes on the basis of hunger and satiety inputs, in amygdala (AMYG) and lateral hypothalamus (LH). Object-Value Categories resolve the value of competing perceptual stimuli in medial (MORB) and lateral (ORB) orbitofrontal cortex. The Reward Expectation Filter detects the omission or delivery of rewards using a circuit that spans ventral striatum (VS), ventral pallidum (VP), striosomal delay (SD) cells in the ventral striatum, the pedunculopontine nucleus (PPTN) and midbrain dopaminergic neurons of the substantia nigra pars compacta/ventral tegmental area (SNc/VTA). The circuit that processes CS-related visual information (ITA, AMYG, ORB) operates in parallel with a circuit that processes US-related visual and gustatory information (RHIN, AMYG, MORB). (b) Reciprocal adaptive connections between hypothalamus and amygdala enable amygdala cells to become learned value categories. The bottom region represents hypothalamic cells, which receive converging taste and metabolite inputs whereby they become taste-drive cells. Bottom-up signals from activity patterns across these cells activate competing value categories, or US Value Representations, in the amygdala. A winning value category learns to respond selectively to specific combinations of taste-drive activity patterns and sends adaptive top-down priming signals back to the taste-drive cells that activated it. CS-activated conditioned reinforcer signals are also associatively linked to value categories. Adaptive connections end in (approximately) hemidiscs. See the text for details.
    ||
  • image p524fig14.04 (a) Model basal ganglia circuit for the control of dopaminergic Now Print signals from the substantia nigra pars compacta, or SNc, in response to unexpected rewards. Cortical inputs (Ii), activated by conditioned stimuli, learn to excite the SNc via a multi-stage pathway from the ventral striatum (S) to the ventral pallidum and then on to the PPTN (P) and the SNc (D). The inputs Ii excite the ventral striatum via adaptive weights W_IS, and the ventral striatum excites the SNc with strength W_PD. The striosomes, which contain an adaptive spectral timing mechanism [xij, Gij, Yij, Zij], learn to generate adaptively timed signals that inhibit reward-related activation of the SNc. Primary reward signals (I_R) from the lateral hypothalamus both excite the PPTN directly (with strength W_RP) and act as training signals to the ventral striatum S (with strength W_RS) that trains the weights W_IS. Arrowheads denote excitatory pathways, circles denote inhibitory pathways, and hemidiscs denote synapses at which learning occurs. Thick pathways denote dopaminergic signals.
    ||
  • image p559fig15.27 Brain regions and processes that contribute to the release of dopaminergic Now Print signals by the substantia nigra pars compacta, or SNc, in response to unexpected reinforcing events. See the text for details.
    || Model of spectrally timed SNc learning (Brown, Bullock, Grossberg 1999). Delayed inhibitory expectations of reward. Dopamine cells signal an error in reward prediction timing or magnitude. Immediate excitatory predictions of reward. Lateral hypothalamus (Primary Reward Input)-> [(+)ventral striatum <-> ventral pallidum (+)-> PPTN(+)-> SNc]. SNc-> [dopamine signal -> ventral striatum, Striosomal cells]. Conditioned Stimuli (CS)(+)-> [ventral striatum, striosomal cells]. Striosomal cells(-)-> SNc.
  • image p560fig15.29 Excitatory pathways that support activation of the SNc by a US and the conditioning of a CS to the US.
    || Excitatory pathway. Primary reward (apple juice) briefly excites lateral hypothalamus. Hypothalamic-PPTN excitation causes SNc dopamine burst. Hypothalamic activity excites ventral striatum for training. Active CS working memory signals learn to excite ventral striatum. Lateral hypothalamus (Primary Reward Input)-> [(+)ventral striatum <-> ventral pallidum(+)-> PPTN(+)-> SNc]. SNc-> dopamine signal-> ventral striatum. Conditioned Stimuli working memory trace (CS)(+)-> ventral striatum.
  • image p560fig15.30 The inhibitory pathway from striosomal cells to the SNc is able to inhibit the SNc when a reward occurs with expected timing and magnitude.
    || Inhibitory pathway. Learning: CS-striosomal LTP occurs due to a three-way coincidence [An active CS working memory input, a Ca2+ spike, a dopamine burst]; Signaling: The delayed Ca2+ spike facilitates striosomal-SNc inhibition;. Striosomal cells learn to predict both timing and magnitude of reward signal to cancel it: reward expectation;. Conditioned stimuli (CS) LTP-> Striosomal cells <- dopamine | (-)-> SNc->.
  • image p561fig15.32 The SNc can generate both dopamine bursts and dips in response to rewards whose amplitude is unexpectedly large or small.
    || Inhibitory pathway: expectation magnitude. 1. If reward is greater than expected, a dopamine burst causes striosomal expectation to increase. 2. If reward is less than expected, a dopamine dip causes striosomal expectation to decrease. 3. This is a negative feedback control system for learning. Conditioned stimuli (CS)-> Striosomal cells <- dopamine | (-)-> SNc->.
  • image p569fig15.40 The direct and indirect basal ganglia circuits that control GO and STOP movement signals. See the text for details.
    || [Direct path GO(+), Indirect path STOP(+), dopamine from SNc(+-)]-> striatum. GO-> GPi/SNr-> Thalamus (VA/Vlo) <-> frontal cortex. STOP-> GPe <-> STN-> GPi/SNr. NAc-> GPi/SNr.
  • image p375fig11.06 The contrast constraint on binocular fusion is realized by obligate cells in layer 3B of cortical area V1.
    || Model implements contrast constraint on binocular fusion (cf. "obligate" cells Poggio 1991). An ecological constraint on cortical development. [left, right] eye cart V1-[4 monocular simple, 3B binocular simple, complex 2/3A] cells. Inhibitory cells (red) ensure that fusion occurs when contrasts in left and right eye are approximately equal.
  • image p376fig11.09 The disparity filter in V2 helps to solve the correspondence problem by eliminating spurious contrasts using line-of-sight inhibition.
    || Model V2 disparity filter solves the correspondence problem. An ecological constraint on cortical development. [left, right] eye view: False matches (black) suppressed by line-of-sight inhibition (green lines). "Cells that fire together wire together".
  • image p581fig16.06 The learning of hexagonal grid cell receptive fields as an animal navigates an open field is a natural consequence of simple trigonometric properties of the positions at which the firing of stripe cells that are tuned to different directions will co-occur.
    || The Trigonometry of spatial navigation. Coactivation of stripe cells.
  • image p583fig16.09 The GRIDSmap model used algorithmically defined stripe cells to process realistic rat trajectories. The stripe cell outputs then formed inputs to the adaptive filter of a self-organizing map which learned hexagonal grid cell receptive fields.
    || GRIDSmap. Self-organizing map receives inputs from stripe cells and learns to respond to most frequent co-activation patterns. Stripe cells combine speed and head direction to create a periodic 1D position code. Virtual rat navigated using live rat trajectories from Moser Lab. Speed and head direction drives stripe cells.
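  • code sketch (Python) for p583fig16.09: the coactivation geometry that GRIDSmap's SOM learns from. Each stripe cell fires periodically in the displacement along its preferred direction; summing rectified stripe activations whose directions differ by 60 degrees makes three-way coactivations fall on a hexagonal lattice. The cosine firing profile and 0.35 m spacing are illustrative assumptions, not the model's exact kernel.
    import numpy as np

    spacing = 0.35                             # stripe spacing in meters (assumed)
    dirs = np.deg2rad([0.0, 60.0, 120.0])      # stripe directions 60 degrees apart

    xs, ys = np.meshgrid(np.linspace(0, 2, 200), np.linspace(0, 2, 200))
    total = np.zeros_like(xs)
    for th in dirs:
        d = xs * np.cos(th) + ys * np.sin(th)  # displacement along this direction
        total += np.clip(np.cos(2 * np.pi * d / spacing), 0.0, None)  # rectified periodic firing

    # coactivation of all three stripe directions peaks on a hexagonal lattice:
    print("three-way coactivation fraction:", (total > 2.5).mean())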
  • image p584fig16.11 GRIDSmap simulation of the learning of hexagonal grid fields. See the text for details.
    || Simulation results. Multiple phases per scale. Response vs length scale (0.5m+).
  • image p585fig16.13 Hexagonal grid cell receptive fields develop if their stripe cell directional preferences are separated by 7, 10, 15, 20, or random numbers of degrees. The number and directional selectivities of stripe cells can thus be chosen within broad limits without undermining grid cell development.
    ||
  • image p585fig16.14 Superimposing firing of stripe cells whose directional preferences differ by 60 degrees supports learning hexagonal grid cell receptive fields in GRIDSmap.
    || GRIDSmap: from stripe cells to grid cells. Grid-cell Regularity from Integrated Distance through Self-organizing map. Superimposing firing of stripe cells oriented at intervals of 60 degrees. Hexagonal grid!
  • image p586fig16.15 Superimposing stripe cells oriented by 45 degrees does not lead to learning of rectangular grids in GRIDSmap, but it does in an oscillatory inference model.
    || Why is a hexagonal grid favored? Superimposing firing of stripe cells oriented at intervals of 45 degrees. Rectangular grid. This and many other possibilities do not happen in vivo. They do happen in the oscillatory inference model. How are they prevented in GRIDSmap?
  • image p587fig16.17 A finer analysis of the 2D trigonometry of spatial navigation showed that both the frequency and amplitude of coactivations by stripe cells determine the learning of hexagonal grid fields.
    || A refined analysis: SOM amplifies most frequent and energetic coactivations (Pilly, Grossberg 2012). [linear track, 2D environment]. (left) Stripe fields separated by 90°. 25 coactivations by 2 inputs. (right) Stripe fields separated by 60°. 23 coactivations by 3 inputs.
  • image p588fig16.18 Simulations of coordinated learning of grid cell receptive fields (second row) and unimodal place cell receptive fields (third row) by the hierarchy of SOMs in the GridPlaceMap model. Note the exquisite regularity of the hexagonal grid cell firing fields.
    || [stripe, grid, place] cells vs [spikes on trajectory, unsmoothed rate map, smoothed rate map].
  • image p605fig16.39 Data showing effects of medial septum (MS) inactivation on grid cells and network theta oscillations in medial entorhinal cortex (MEC). (A) Examples of disruption in the spatial expression of the hexagonal grid structure for two grid cells (Brandon etal 2011). (B) Temporal reduction in the power and frequency of network theta oscillations (Koenig etal 2011). (C) Temporary reduction in the gridness score, mean firing rate, and spatial stability of grid cells (Koenig etal 2011).
    || Disruptive effects of Medial Septum inactivation in Medial Entorhinal Cortex (Brandon etal 2011; Koenig etal 2011). (A) Rate map [rate map, spatial autocorrelations, trajectory] vs [baseline, sub-sampled, medial septum inactivation, 3-6 hour recovery, 24 hour recovery], [rate map (Hz- m, p), spatial autocorrelations (gridness)][ 1.2, 7.2, 1.1; 0.25, 1.7, 0.6; 0.25, 2.5, -0.53; 0.7, 5.1, 0.55; 1.0, 5.3, 1.3; 2.1, 15, 0.19; 1.7, 12, 0.71; 1.7, 3.2, -0.22; 1.8, 9.1, 0.68; 2.5, 13, 0.46]. (B) [normalized power at 7-9 Hz, frequency (Hz)] vs 5-minute periods. (C) [mean gridness score (+-SEM), mean firing rate (% of baseline), mean correlation coeff (+-SEM)] vs 10-minute periods.
  • p353 Chapter 10 Laminar computing by cerebral cortex - Towards a unified theory of biological and artificial intelligence
  • image p011fig01.07 The choice of signal function f determines how an initial activity pattern will be transformed and stored in short-term memory (STM). Among [same, slower, faster]-than-linear signal functions, only the last one can suppress noise. It does so as it chooses the population that receives the largest input for storage, while suppressing the activities of all other populations, thereby giving rise to a winner-take-all choice.
    || initial pattern (xi(0) vs i):
    stored pattern: Xi(∞) = xi(∞) / sum[j: xj(∞)]
    f linear: perfect storage of any pattern; amplifies noise (or no storage)
    f slower-than-linear: saturates; amplifies noise
    f faster-than-linear: chooses max [winner-take-all, Bayesian], categorical perception; suppresses noise, [normalizes, quantizes] total activity, finite state machine
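  • code sketch (Python) for p011fig01.07: a minimal simulation of a recurrent shunting on-center off-surround network, dx_i/dt = -A*x_i + (B - x_i)*f(x_i) - x_i*sum[j != i: f(x_j)], with the faster-than-linear signal f(w) = w^2, which the figure predicts should suppress noise and make a winner-take-all choice. The parameter values are illustrative assumptions.
    import numpy as np

    A, B, dt = 0.1, 1.0, 0.01
    f = lambda w: w ** 2                       # faster-than-linear signal function
    x = np.array([0.20, 0.25, 0.22, 0.15])     # initial pattern after inputs shut off

    for _ in range(20000):
        fx = f(x)
        x += dt * (-A * x + (B - x) * fx - x * (fx.sum() - fx))
        x = np.clip(x, 0.0, B)

    print(np.round(x, 3))   # only the largest initial activity survives (winner-take-all)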
  • image p357fig10.04 Laminar Computing achieves its properties by computing in a new way that synthesizes the best properties of feedforward and feedback interactions, analog and digital computations, and preattentive and attentive learning. The property of analog coherence enables coherent groupings and decisions to form without losing sensitivity to the amount of evidence that supports them.
    || Laminar Computing: a new way to compute. 1. Feedforward and feedback: a) Fast feedforward processing when data are unambiguous (eg Thorpe etal), b) slower feedback chooses among ambiguous alternatives [self-normalizing property, real-time probability theory], c) A self-organizing system that trades certainty against speed: Goes beyond Bayesian models! 2. Analog and Digital: Analog Coherence combines the stability of digital with the sensitivity of analog. 3. Preattentive and Attentive Learning: Reconciles the differences of (eg) Helmholtz and Kanizsa: "A preattentive grouping is its own ..."
  • image p230fig05.38 The Synchronous Matching ART, or SMART, model includes spiking neurons in a laminar cortical hierarchy. I developed SMART with my PhD student Massimiliano Versace. By unlumping LAMINART to include spiking neurons, finer details of neurodynamics, such as the existence of faster gamma oscillations during good enough matches, and slower beta oscillations during bad enough mismatches, could be shown as emergent properties of network interactions.
    || Second order thalamus -> specific thalamic nucleus -> thalamic reticular nucleus -> neocortical laminar circuit [6II, 6I, 5, 2/3, 1] -> Higher order cortex. Similar for First order thalamus -> First order cortex, with interconnection to Second order, nonspecific thalamic nucleus
  • image p232fig05.40 Computer simulation of how the SMART model generates (a) gamma oscillations if a good enough match occurs, or (c) beta oscillations if a bad enough match occurs. See the text for details.
    || Brain oscillations during match/mismatch, data, simulation. (a) TD corticothalamic feedback increases synchrony (Sillito etal 1994) (b) Match increases γ oscillations (c) Mismatch increases θ,β oscillations
  • image p233fig05.42 Mismatch-induced beta oscillations have been reported in at least three parts of the brain: V1, V4, and hippocampus. Although there may be other reasons for beta oscillations in the brain, those that are caused by a mismatch should be studied in concert with the gamma oscillations that occur during a good enough match. See the text for details.
    || Is there evidence for the [gamma, beta] prediction? Yes, in at least three parts of the brain (Buffalo EA, Fries P, Landman R, Buschman TJ, Desimone R 2011, PNAS 108, 11262-11267). Does this difference in average oscillation frequencies in the superficial and deep layers reflect layer 4 reset? Superficial recording γ (gamma), Deep recording β (beta) (Berke etal 2008, hippocampus; Buschman and Miller 2009, FEF)
  • image p296fig08.07 When two flashes turn on and off out of phase with the correct range of interstimulus intervals, and not too far from one another, then either beta motion or phi motion is perceived.
    || Beta and Phi motion percepts. Beta motion: percepts of continuous motion of a well-defined object across empty intervening space. Phi motion: sense of "pure" motion without a concurrent percept of moving object. (Exner 1875) http://www.yorku.ca/eye/balls.htm
  • image p297fig08.08 When a second flash is more intense than the first flash, then apparent motion may occur from the second to the first flash.
    || Delta motion: motions from the second to the first flash. Data: (Kolers 1972; Korte 1915). Simulation: (Grossberg, Rudd 1992). This occurs when the luminance or contrast of the second flash is large compared to that of the first flash. Sustained and transient cells obey shunting dynamics whose averaging rates speed up with output intensity. The first flash to wane is the one that will be the source of the G-wave.
  • image p340fig09.07 Log polar remapping from the retina to cortical area V1 and beyond converts expansion, translation, and spiral flows on the retina into parallel flows, with different orientations, on the cortical map.
    || Log polar remapping of optic flow. retina -> cortex. Any combination of expansion and circular motion centered on the fovea maps to cortex as a single direction. Retinal Cartesian coordinates (x,y) map to cortical polar coordinates (r,theta). This makes it easy to compute directional receptive fields in the cortex!
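  • code sketch (Python) for p340fig09.07: the log polar remapping itself. Retinal Cartesian coordinates map to cortical (log r, theta) coordinates, so an expansion flow centered on the fovea becomes a uniform shift along the log r axis, i.e. a single cortical direction. The 1.5x step size is an arbitrary illustrative expansion.
    import numpy as np

    # Retinal Cartesian (x, y) -> cortical (log r, theta). An expansion flow
    # multiplies r by a constant each step, so log r shifts by a constant while
    # theta stays fixed: expansion maps to a single cortical direction.

    def log_polar(x, y):
        r = np.hypot(x, y)
        return np.log(r), np.arctan2(y, x)

    pt = np.array([0.3, 0.4])
    for step in range(3):
        u, v = log_polar(*pt)
        print(f"retina {np.round(pt, 3)} -> cortex (log r = {u:.3f}, theta = {v:.3f})")
        pt = pt * 1.5                          # one expansion step away from the fovea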
  • image p423fig12.20 The Spatial Pitch Network, or SPINET, model shows how a log polar spatial representation of the sound frequency spectrum can be derived from auditory signals occurring in time. The spatial representation allows the ARTSTREAM model to compute spatially distinct auditory streams.
    || SPINET model (Spatial Pitch Network) (Cohen, Grossberg, Wyse 1995). 1. input sound 2. Gamma-tone filter bank 3. Short-term average energy spectrum 4. MAP transfer function 5. On-center off-surround and rectification 6. Harmonic weighting 7. Harmonic summation and competition -> PITCH
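  • code sketch (Python) for p423fig12.20: a rough stand-in for SPINET's last two stages only (harmonic weighting, then harmonic summation and competition). Each pitch candidate sums spectral energy near its harmonics and the best-supported candidate wins, so a "missing fundamental" is still recovered. The Gaussian tolerance, 1/k harmonic weights, and 80-500 Hz candidate range are assumptions, not the published kernels.
    import numpy as np

    freqs = np.arange(50.0, 4000.0, 10.0)            # spectral axis (Hz)
    spectrum = np.zeros_like(freqs)
    for h in [3, 4, 5]:                              # harmonics of 200 Hz; fundamental absent
        spectrum += np.exp(-0.5 * ((freqs - 200.0 * h) / 20.0) ** 2)

    def harmonic_sum(f0, n_harmonics=8):
        """Sum spectral energy near each harmonic of f0, weighted by 1/k."""
        score = 0.0
        for k in range(1, n_harmonics + 1):
            w = np.exp(-0.5 * ((freqs - k * f0) / 20.0) ** 2) / k
            score += float(np.sum(w * spectrum))
        return score

    candidates = np.arange(80.0, 500.0, 5.0)
    best = candidates[np.argmax([harmonic_sum(f) for f in candidates])]
    print("estimated pitch:", best, "Hz")            # ~200 Hz despite no 200 Hz energy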
  • image p598fig16.34 The spiking GridPlaceMap model generates theta-modulated place and grid cell firing, unlike the rate-based model.
    || Theta-modulated cells in spiking model. [place, grid] cell vs [membrane potential (mV vs time), frequency vs inter-spike intervals (s), power spectra (normalized power vs frequency (Hz))].
  • Grossberg 2021 p229c2h0.60 SMART computer simulations demonstrate that a good enough match of a top-down expectation with a bottom-up feature pattern generates an attentive resonance during which the spikes of active cells synchronize in the gamma frequency range of 20-70 Hz (Figure 5.40). Many labs have reported a link between attention and gamma oscillations in the brain, including two articles published in 2001, one from the laboratory of Robert Desimone when he was at the National Institute of Mental Health in Bethesda (Fries, Reynolds, Rorie, Desimone 2001), and the other from the laboratory of Wolf Singer in Frankfurt (Engel, Fries, Singer 2001). You ...
  • p618 Chapter 17 A universal development code - Mental measurements embody universal laws of cell biology and physics
  • image p025fig01.16 (left panel) The main processing stages of the Cognitive-Emotional-Motor (CogEM) model have anatomical interpretations in terms of sensory cortex, amygdala, and prefrontal cortex. Chapter 13 will describe in greater detail how CS cues activate invariant object categories in the sensory cortex, value categories in the amygdala, and object-value categories in the prefrontal cortex, notably the orbitofrontal cortex. The amygdala is also modulated by internal drive inputs like hunger and satiety. (right panel) Anatomical data support this circuit, as do many neurophysiological data.
    || drive -> amygdala -> prefrontal cortex <-> sensory cortex -> amygdala. [visual, somatosensory, auditory, gustatory, olfactory] cortex -> [amygdala, Orbital Prefrontal Cortex]. amygdala -> Lateral Prefrontal Cortex
  • image p058fig02.04 Serial learning paradigm: Learning the temporal order of events by practicing them in the order that they occur in time.
    || Learning a global arrow in time. How do we learn to encode the temporal order of events in LTM? serial learning. [w=intra, W=inter]trial interval. "... data about serial verbal learning (Figure 2.4) seemed to suggest that events can go "backwards in time". ..."
  • image p059fig02.05 Bowed serial position curve. This kind of data emphasizes the importance of modelling how our brains give rise to our minds using nonlinear systems of differential equations.
    || Effects of [inter, intra]trial intervals (Hovland 1938). # of errors vs list position. [w (sec), W (sec)] = (2 6) (4 6) (2 126) (4 126). Nonoccurrence of future items reduces the number of errors in response to past items. These data require a real-time theory for their explanation! That is, DIFFERENTIAL equations.
  • image p059fig02.06 The bowed serial position curve illustrates the sense in which "events can go backwards in time" during serial learning.
    || Bow due to backward effect in time. If the past influenced the future, but not conversely: # of errors vs list position; Data (Hovland, Hull, Underwood, etc).
  • image p071fig02.16 To solve the noise-saturation dilemma, individual neurons in a network that is receiving a distributed spatial pattern of inputs need to remain sensitive to the ratio of the input to them divided by all the inputs in that spatial pattern. Although the inputs are delivered to a finite number of neurons, the input and activity patterns are drawn continuously across the cells for simplicity.
    || Noise-Saturation Dilemma. [Ii, xi] vs t. [Input, Activity] pattern [small -> noise, large -> saturation]. Problem: remain sensitive to input RATIOS θi = Ii / sum[j: Ij] as total input I = sum[j: Ij] -> ∞. Many kinds of data exhibit sensitivity to ratios of inputs.
  • image p073fig02.19 Computing with cells: infinity does not exist in biology!
    || Computing in a bounded activity domain, Gedanken experiment (Grossberg 1970). Vm sub-areas [xm, B - xm], inputs Im, m = [1, ..., i, ...].
    B: excitable sites
    xi(t): excited sites (activity, potential)
    B - xi(t): unexcited sites
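  • code sketch (Python) for p073fig02.19 and the noise-saturation dilemma of p071fig02.16: the steady state of the shunting on-center off-surround network dx_i/dt = -A*x_i + (B - x_i)*I_i - x_i*sum[j != i: I_j], namely x_i = B*I_i/(A + I) with I = sum[j: I_j]. Activities never exceed the finite number of excitable sites B, yet the pattern keeps reporting the input ratios θ_i as the total input grows. Parameter values are illustrative.
    import numpy as np

    A, B = 1.0, 1.0
    theta = np.array([0.1, 0.3, 0.6])          # fixed input ratios theta_i
    for total in [1.0, 10.0, 1000.0]:
        I = theta * total
        x = B * I / (A + I.sum())              # steady-state shunting activities
        print(f"total={total:7.1f}  x={np.round(x, 4)}  ratios={np.round(x / x.sum(), 3)}")
    # total activity stays below B (no infinities), while x/sum(x) stays [0.1, 0.3, 0.6]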
  • image p082fig02.37 My models begin with behavioral data, since brains are designed to achieve behavioral success. The text explains how models evolve in stages, through a process of successive refinements, or unlumpings. These unlumpings together carry out a kind of conceptual evolution, leading to models that can explain and predict ever larger psychological and neurobiological databases.
    || Modelling method and cycle.
    Behavioral data -(art of modeling)-> Design principles <- Neural data <-(brain predictions)- Mathematical model and analysis -(behavioral predictions)-> Behavioural data
    Operationalizes "proper level of abstraction"
    Operationalizes that you cannot "derive a brain" in one step.
  • image p085fig02.38 Our models have been used in many large-scale applications to engineering and technology. Linking brain to behavior explains how brain mechanisms give rise to psychological functions, and do so autonomously. The combination of mechanism, function, and autonomy helps to explain their value in helping to solve outstanding problems in technology.
    || Modeling method and cycle.
    Behavioral data -(art of modeling)-> Design principles <- Neural data <-(brain predictions)- Mathematical model and analysis -(behavioral predictions)-> Behavioural data
    Technology: Mathematical model and analysis <-> Technological applications
    At every stage, spin off new model designs and mechanisms to technologists who need autonomous intelligent applications.
  • image p134fig04.14 The kinds of displays that Michael Paradiso and Ken Nakayama used to catch filling-in "in the act" and which Karl Arrington then simulated using the Grossberg and Todorovic 1988 model.
    || Experiments on filling-in. Catching "filling-in" in the act (Paradiso, Nakayama 1991). (Arrington 1994 Vision Research 34, 3371-3387) simulated these data using the model of Grossberg and Todorovic 1988.
  • image p145fig04.23 If end gaps were not closed by end cuts, then color would flow out of every line end!
    || A perceptual disaster in the feature contour system. feature contour, line boundary. input -> [boundary, surface]. boundary -> surface. Color would flow out of every line end! as it does during neon color spreading.
  • image p151fig04.29 Experimental evidence of bipole cells in cortical area V2 was reported by Von der Heydt, Peterhans, and Baumgartner (1984).
    || Bipoles: first neurophysiological evidence (V2) (von der Heydt, Peterhans, Baumgartner 1984, Peterhans, von der Heydt 1988). (Grossberg 1984) prediction.
    Ordering: stimulus (S), probe location (*), response of cells in V2?
    ...(S)*...                  YES
    ...*...(S)                  NO
    (S)...*...                  NO
    (S)...*...(S)               YES
    (S)...*... (more contrast)  NO
    (S)...*.....(S)             YES
    Evidence for receptive field.
  • image p151fig04.30 Anatomical evidence for long-range horizontal connections has also been reported, as illustrated by the example above from (Bosking etal 1997).
    || Anatomy: horizontal connections (V1) (Bosking etal 1997). tree shrew. [10, 20]*[20, 10, 0, -10, -20] (degrees).
  • image p152fig04.31 The predicted bipole cell receptive field (upper left corner) has been supported by both neurophysiological data and psychophysical data, and used in various forms by many modelers. See the text for details.
    || Bipoles through the ages. (Grossberg 1984; Grossberg, Mingolla 1985). (Field, Hayes, Hess 1993) "association field". (Heitger, von der Heydt 1993). (Williams, Jacobs 1997). cf. "relatability": geometric constraints on which contours get to group (Kellman & Shipley 1991). Also "tensor voting" (Ullman, Zucker, Mumford, Guy, Medioni, ...).
  • image p159fig04.36 Graffiti art by Banksy exploits properties of amodal boundary completion and spatial impenetrability.
    || p159c1h0.75 perceptual psychologist Nava Rubin: "... When the wall is smooth, Banksy leaves the regions previously covered by stencil unpainted, relying on observers ..."
  • image p162fig04.38 How long-range cooperation among bipole cells and short-range competition by hypercomplex cells work together to generate the inverted-U in boundary strength that is found in the data of Figure 4.37 (right panel).
    || Cooperation and competition during grouping.
    few lines: wide spacing; inputs outside spatial range of competition; more inputs cause higher bipole activity
    more lines: narrower spacing; slightly weakens net input to bipoles from each inducer
    increasing line density: causes inhibition to reduce net total input to bipoles
  • image p163fig04.39 A schematic of the LAMINART model that explains key aspects of laminar visual cortical anatomy and dynamics. LGN -> V1 [6, 4, 2/3] -> V2 [6, 4, 2/3]
    || p163c1h0.6 "... The first article about laminar computing ... proposed how the laminar cortical model could process 2D pictures using bottom-up filtering and horizontal bipole grouping interactions (Grossberg, Mingolla, Ross 1997). In 1999, I was able to extend the model to also include top-down circuits for expectation and attention (Grossberg 1999)(right panel). Such a synthesis of laminar bottom-up, horizontal, and top-down circuits is characteristic of the cerebral cortex (left panel). I called it LAMINART because it began to show how properties of Adaptive Resonance Theory, or ART, notably the ART prediction about how top-down expectations and attention work, are realized by identical cortical cells and circuits. You can immediately see from the schematic laminar circuit diagram ... (right panel) that circuits in V2 seem to repeat circuits in V1, albeit with a larger spatial scale, despite the fact that V1 and V2 carry out different functions. How this anatomical similarity can coexist with functional diversity will be clarified in subsequent sections and chapters. It enables different kinds of biological intelligence to communicate seamlessly while carrying out their different psychological functions. ..."
  • image p165fig04.41 The Kanizsa-Minguzzi ring. See the text for details.
    || p165c1h0.6 "... (left panel), the annulus is divided by two line segments into annular sectors of unequal area. Careful viewing shows that the smaller sector looks a little brighter than the larger one. (Kanizsa, Minguzzi 1986) noted that "this unexpected effect is not easily explained. In fact, it cannot be accounted for by any simple psychological mechanism such as lateral inhibition or frequency filtering. Furthermore, it does not seem obvious to invoke organizational factors, like figural belongingness or figure-ground articulation."". p165c2h0.35 "... (Grossberg, Todorovic 1988). Our main claim is that the two radial lines play two roles, one in the formation of boundaries with which to contain the filling-in process, and the other as a source of feature contour signals that are filled-in within the annular regions to create a surface brightness percept. ..."
  • image p232fig05.40 Computer simulation of how the SMART model generates (a) gamma oscillations if a good enough match occurs, or (c) beta oscillations if a bad enough match occurs. See the text for details.
    || Brain oscillations during match/mismatch, data, simulation. (a) TD corticothalamic feedback increases synchrony (Sillito etal 1994) (b) Match increases γ oscillations (c) Mismatch increases θ,β oscillations
  • image p232fig05.41 (a)-(c). The sequence of interlaminar events that SMART predicts during a mismatch reset. (d) Some of the compatible neurophysiological data.
    || Mismatch causes layer 5 dendritic spikes that trigger reset. (a) Arousal causes increase in nonspecific thalamic nuclei firing rate and layer 5 dendritic and later somatic spikes (Larkum and Zhu 2002, Williams and Stuart 1999) (b) Layer 5 spikes reach layer 4 via layer 6i and inhibitory neurons (Lund and Boothe 1975, Gilbert and Wiesel 1979) (c) Habituative neurotransmitters in layer 6i shift the balance of active cells in layer 4 (Grossberg 1972, 1976) (d) Dendritic stimulation fires layer 5 (Larkum and Zhu 2002): stimulation of apical dendrites by nonspecific thalamus
  • image p252fig06.01 A surface-shroud resonance begins to form when the surface representations of objects bid for spatial attention. In addition to these topographic excitatory inputs, there is long-range inhibition of the spatial attention cells that determines which inputs will attract spatial attention.
    || Bottom-up spatial attention competition. [more, less] luminous perceptual surfaces -> competition -> spatial attention
  • image p253fig06.02 After bottom-up surface inputs activate spatial attentional cells, they send top-down topographic excitatory signals back to the surface representations. This recurrent shunting on-center off-surround network contrast enhances larger attentional activities while approximately normalizing the total spatial attentional activity. A surface-shroud resonance hereby forms that selects an attentional shroud, enhances the perceived contrast of the attended surface (light blue region), and maintains spatial attention on it.
    || Surface-shroud resonance. perceptual surfaces -> competition -> spatial attention. (Carrasco, Penpeci-Talgar, and Eckstein 2000, Reynolds and Desimone 2003)
  • image p257fig06.05 A curve tracing task with monkeys was used by Roelfsema, Lamme, and Spekreijse in 1998 to demonstrate how spatial attention can flow along object boundaries. See the text for details.
    || Attention flows along curves: Roelfsema etal 1998: Macaque V1. fixation (300ms) -> stimulus (600ms RF - target curve, distractor) -> saccade. Crossed-curve condition: attention flows across junction between smoothly connected curve segments, Gestalt good continuation
  • image p258fig06.06 Neurophysiological data and simulation of how attention can flow along a curve. See the text for details.
    || Simulation of Roelfsema etal 1998, data & simulation. Attention directed only to far end of curve. Propagates along active layer 2/3 grouping to distal neurons.
  • image p265fig06.13 The basal ganglia gate perceptual, cognitive, emotional, and more processes through parallel loops.
    || [motor, oculomotor, dorsolateral, ventral-orbital, anterior cingulate] vs. [Thalamus, pallidum-subs, nigra, Striatum, Cortex]
  • image p267fig06.14 Feedback from object surfaces to object boundaries uses surface contours. This feedback assures complementary consistency and enables figure-ground separation. A corollary discharge of the surface contours can be used to compute salient object feature positions.
    || Perceptual consistency and figure-ground separation.
  • image p271fig06.17 Persistent activity in IT cells is just what is needed to enable view-invariant object category learning by ARTSCAN to be generalized to [view, position, size]-invariant category learning by positional ARTSCAN, or pARTSCAN. See the text for details.
    || Persistent activity in IT. Physiological data show that persistent activity exists in IT (Fuster and Jervey 1981, Miyashita and Chang 1988, Tomita etal 1999). Adapted from (Tomita etal 1999 Nature)
  • image p273fig06.20 pARTSCAN can simulate the IT cell recoding that Li and DiCarlo reported in their swapping experiments because the swapping procedure happens without causing a parietal reset burst to occur. Thus the originally activated invariant category remains activated and can get associated with the swapped object features.
    || Simulation of Li and DiCarlo swapping data. data (Li and DiCarlo 2008), model (Cao, Grossberg, Markowitz 2011). normalized response vs. exposure (swaps and/or hours)
  • image p274fig06.21 pARTSCAN can also simulate the trade-off in IT cell responses between position invariance and selectivity that was reported by Zoccolan etal 2007. This trade-off limits the amount of position invariance that can be learned by a cortical area like V1 that is constrained by the cortical magnification factor.
    || Trade-off in IT cell response properties. Inferotemporal cortex cells with greater position invariance respond less selectively to natural objects. invariance-tolerance, selectivity-sparseness. data (Zoccolan etal 2007) model (Grossberg, Markowitz, Cao 2011). position tolerance (PT, degrees) vs sparseness (S)
  • image p275fig06.23 Data from (Akrami etal 2009) and our simulation of it. See the text for details.
    || IT responses to image morphs. data vs model
  • image p284fig07.02 Psychophysical data (top row) and simulation (bottom row) of how persistence decreases with flash illuminance and duration.
    || Persistence data and simulations. (Francis, Grossberg, Mingolla 1994 Vision Research, 34, 1089-1104). Persistence decreases with flash illuminance and duration (Bowen, Pola, Matin 1974; Breitmeyer 1984; Coltheart 1980). Higher luminance or longer duration habituates the gated dipole ON channel more. Causes larger and faster rebound in the OFF channel to shut persisting ON activity off.
  • image p285fig07.03 Persistence decreases with flash illuminance and duration due to the way in which habituative transmitters regulate the strength of the rebound in response to offset of a stimulating input, and how this rebound inhibits previously activated bipole cells.
    || Persistence data and simulations (Francis, Grossberg, Mingolla 1994 Vision Research, 34, 1089-1104). Persistence decreases with flash illuminance and duration. Horizontal input excites a horizontal bipole cell, which supports persistence. Offset of the horizontal input causes a rebound of activity in the vertical pathway, which inhibits the horizontal bipole cell, thereby terminating persistence.
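  • code sketch (Python) for p285fig07.03: a minimal gated dipole. ON and OFF channels share a tonic arousal input; the ON channel also receives the phasic input. Habituative transmitters gate both signals, so at input offset the depleted ON gate transiently loses to the fresher OFF gate and an antagonistic rebound results: the reset signal that terminates persistence. All rate constants are illustrative assumptions.
    dt = 0.01
    J = 0.5                                  # tonic arousal to both channels
    z_on = z_off = 1.0                       # habituative transmitter gates start full
    for t in range(3000):
        I = 1.0 if t < 1500 else 0.0         # phasic input on for 15 s, then off
        s_on, s_off = J + I, J
        # transmitter accumulates toward 1 and is inactivated by gated signal flow
        z_on += dt * (0.1 * (1.0 - z_on) - 0.5 * s_on * z_on)
        z_off += dt * (0.1 * (1.0 - z_off) - 0.5 * s_off * z_off)
        on, off = s_on * z_on, s_off * z_off
        if t in (1400, 1520, 2900):          # before offset, just after, much later
            print(f"t={t*dt:5.1f}s  ON output={max(on - off, 0):.3f}  "
                  f"OFF rebound={max(off - on, 0):.3f}")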
  • image p286fig07.04 Illusory contours persist longer than real contours because real contours have more inducers whose rebound at contour offset can cause faster boundary reset. Illusory contours also take longer to form than real contours, which explains the increasing portion of the curve.
    || Persistence data and simulations (Meyer, Ming 1988; Reynolds 1981). Increasing portion of curve is due to formation time of the illusory contour. Longer persistence is due to fewer bottom-up inducers of an illusory contour that has the same length as a real contour: only illuminance-derived edges generate reset signals. When bottom-up inducers are inhibited by OFF cell rebounds, their offset gradually propagates to the center of the illusory contour.
  • image p286fig07.05 This figure shows the propagation through time of illusory contour offset from the rebounded cells that got direct inputs to the center of the contour.
    || Persistence data and simulations. Illusory contours persist longer than real contours (Meyer, Ming 1988; Reynolds 1981). When bottom-up inducers are inhibited by OFF cell rebounds, their offset gradually propagates to the center of the illusory contour.
  • image p287fig07.06 The relative durations of persistence that occur due to an adaptation stimulus of the same or orthogonal orientation follow from the properties of the habituative gated dipoles that are embedded in the boundary completion system.
    || Persistence data and simulations. Change in persistence depends on whether adaptation stimulus has same or orthogonal orientation as test grating (Meyer, Lawson, Cohen 1975). If adaptation stimulus and test stimulus have the same orientation, they cause cumulative habituation, which causes a stronger reset signal, hence less persistence. When they are orthogonal, the competition on the ON channel is less, hence more persistence.
  • image p287fig07.07 Persistence increases with distance between a target and a masking stimulus due to weakening of the spatial competition in the first competitive stage of hypercomplex cells.
    || Persistence data and simulations. Persistence increases with distance between a target and a masking stimulus (Farrell, Pavel, Sperling 1990). There is less spatial competition from the masker to the target when they are more distant, hence the target is more persistent.
  • image p297fig08.08 When a second flash is more intense than the first flash, then apparent motion may occur from the second to the first flash.
    || Delta motion: motions from the second to the first flash. Data: (Kolers 1972; Korte 1915). Simulation: (Grossberg, Rudd 1992). This occurs when the luminance or contrast of the second flash is large compared to that of the first flash. Sustained and transient cells obey shunting dynamics whose averaging rates speed up with output intensity. The first flash to wane is the one that will be the source of the G-wave.
  • image p297fig08.09 Simulation of motion in opposite directions that is perceived when two later flashes occur on either side of the first flash.
    || Split motion. Data: (H.R. Silva 1926), Simulation: (Grossberg, Rudd 1992)
  • image p298fig08.10 Simulation of the motion speed-up that is perceived when flash duration decreases.
    || "The less you see it, the faster it moves". Data: (Giaschi, Anstis 1989), Simulation: (Grossberg, Rudd 1992). ISI = 0, flash duration decreases; SOA = constant, flash duration decreases
  • image p304fig08.22 Data (top image) and simulation (bottom image) of Korte's laws.
  • image p311fig08.30 The data of (Castet etal 1993) in the left image was simulated in the right image by the 3D FORMOTION model that I developed with my PhD student Jonathan Chey. These data provide insight into how feature tracking signals propagate from the ends of a line to its interior, where they capture consistent motion directional signals and inhibit inconsistent ones.
    || Solving the aperture problem. A key design problem: How do amplified feature tracking signals propagate within depth to select the correct motion directions at ambiguous positions? This propagation from feature tracking signals to the line interior determines perceived speed in Castet etal data, which is why speed depends on line tilt and length. Data: (Castet etal 1993), Simulation: (Chey etal 1997)
  • image p319fig08.38 The neurophysiological data from MT (left image) confirms the prediction embodied in the simulation of MT (right image) concerning the fact that it takes a long time for MT to compute an object's ...
  • image p333fig08.58 Neurophysiological data (left image) and simulation (right image) of LIP data during correct trials on the RT task. See the text for details.
    || LIP responses during RT task correct trials (Roitman, Shadlen 2002). More coherence in favored direction causes faster cell activation. More coherence in opposite direction causes faster cell inhibition. Coherence stops playing a role in the final stages of LIP firing.
  • image p334fig08.59 Neurophysiological data (left column) and simulations (right column) of LIP responses for the FD task during both [correct, error] trials. See the text for details.
    || LIP responses for the FD task during both [correct, error] trials (Shadlen, Newsome 2001). LIP encodes the perceptual decision regardless of the true direction of the dots. Predictiveness of LIP responses on error trials decreases with increasing coherence.
  • image p334fig08.60 Behavioral data (left image) and simulation (right image) about accuracy in both the RT and FD tasks. See text for details
    || Behavioral data: % correct vs % coherence (Mazurek etal 2003; Roitman, Shadlen 2002). More coherence in the motion causes more accurate decisions. RT task accuracy at weaker coherence levels is slightly better than FD task accuracy.
  • image p335fig08.61 Behavioral data (left image) and simulation (right image) about speed in correct and error trials of the RT task. See text for details.
    || Behavioral data: speed, correct and error trials (RT task) (Roitman, Shadlen 2002). More coherence in the motion causes faster reaction time.
  • image p335fig08.62 More remarkable simulation fits (right column) to LIP neurophysiology data (left column) about where and when to move the eyes.
    || LIP encodes not only where, but also when, to move the eyes. ... No Bayes (Roitman, Shadlen 2002). Firing rate (sp/s) vs time (ms). Slope of firing rate (sp/s^2) vs % correct.
  • image p342fig09.11 Psychophysical data (left panel) and computer simulation (right column) of the importance of efference copy in real movements. See the text for details.
    || Heading: move to wall and fixate stationary object (adapted from Warren, Hannon 1990). Inaccurate for simulated eye rotation, accurate for real eye rotation, need confirmation by efference copy!
  • image p343fig09.13 When one scans the three different types of pears in the left image, as illustrated by the jagged blue curve with red movement end positions, and transforms the resulting retinal images via the cortical magnification factor, or log polar mapping, the result is the series of images in the right column. How do our brains figure out from such confusing data which views belong to which pear?
    || View-invariant object learning and recognition. Three pears: Anjou, Bartlett, Comice. Which is the Bartlett pear? During unsupervised scanning and learning about the world, no one tells the brain what views belong to which objects while it learns view-invariant object categories. Cortical magnification in V1.
  • image p349fig09.20 Using virtual reality displays (left image), (Fajen, Warren 2003) collected data (right two images) about how observers avoid obstacles (open circular disks) as a function of their distance and angular position as they navigate towards a fixed goal (x). These data illustrate how goals act as attractors while obstacles act as repellers.
    || Steering from optic flow (Fajen, Warren 2003). goals are attractors, obstacles are repellers. Damped spring model explains human steering data.
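  • code sketch (Python) for p349fig09.20: a damped-spring steering law in the spirit of (Fajen, Warren 2003), with heading angle phi attracted to the goal direction (stiffness growing as goal distance shrinks) and repelled from the obstacle direction (strength decaying with angular offset and distance). The gains and decay constants below are assumed values for illustration, not the fitted parameters.
    import numpy as np

    b, kg, ko = 3.0, 8.0, 10.0                 # damping, goal and obstacle gains (assumed)
    c1, c2, c3, c4 = 0.4, 0.4, 1.0, 0.8        # distance/angle decay constants (assumed)

    def heading_accel(phi, dphi, psi_g, d_g, psi_o, d_o):
        goal = -kg * (phi - psi_g) * (np.exp(-c1 * d_g) + c2)       # attractor
        obstacle = (ko * (phi - psi_o) * np.exp(-c3 * abs(phi - psi_o))
                    * np.exp(-c4 * d_o))                            # repeller
        return -b * dphi + goal + obstacle

    phi, dphi, dt = 0.0, 0.0, 0.01
    for _ in range(1000):
        # goal 20 degrees right at 8 m; obstacle 5 degrees right at 3 m
        acc = heading_accel(phi, dphi, np.deg2rad(20.0), 8.0, np.deg2rad(5.0), 3.0)
        dphi += dt * acc
        phi += dt * dphi
    print("final heading (deg):", round(float(np.degrees(phi)), 1))
    # settles a few degrees past the goal direction, deflected away from the obstacle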
  • image p352fig09.26 The final stage of the model computes a beautiful expansion optic flow that permits an easy estimate of the heading direction, with an accuracy that matches that of human navigators.
    || The model generates accurate heading (Warren, Hannon 1990; Royden, Crowell, Banks 1994). Maximally active MSTd cell = heading estimate. Accuracy matches human data. Random dots [mean +-1.5°, worst +-3.8°], Random dots with rotation [accurate with rotations <1°/s, rotation increases, error decreases], OpenGL & Yosemite benchmark +-1.5°, Driving video +-3°.
  • image p356fig10.03 Laminar computing achieves at least three basic properties of visual processing that have analogs in all biologically intelligent behaviors. These properties may be found in all cortical circuits in specialized form.
    || What does Laminar Computing achieve? 1. Self-stabilizing development and learning; 2. Seamless fusion of a) pre-attentive automatic bottom-up processing, b) attentive task-selective top-down processing; 3. Analog coherence: Solution of Binding Problem for perceptual grouping without loss of analog sensitivity. Even the earliest visual cortical stages carry out active adaptive information processing: [learn, group, attention]ing
  • image p357fig10.04 Laminar Computing achieves its properties by computing in a new way that synthesizes the best properties of feedforward and feedback interactions, analog and digital computations, and preattentive and attentive learning. The property of analog coherence enables coherent groupings and decisions to form without losing sensitivity to the amount of evidence that supports them.
    || Laminar Computing: a new way to compute. 1. Feedforward and feedback: a) Fast feedforward processing when data are unambiguous (eg Thorpe etal), b) slower feedback chooses among ambiguous alternatives [self-normalizing property, real-time probability theory], c) A self-organizing system that trades certainty against speed: Goes beyond Bayesian models! 2. Analog and Digital: Analog Coherence combines the stability of digital with the sensitivity of analog. 3. Preattentive and Attentive Learning: Reconciles the differences of (eg) Helmholtz and Kanizsa: "A preattentive grouping is its own ..."
  • image p360fig10.09 Perceptual grouping is carried out in layer 2/3 by long-range horizontal excitatory recurrent connections, supplemented by short-range disynaptic inhibitory connections that together realize the bipole grouping properties that are diagrammed in Figure 10.10.
    || Grouping starts in layer 2/3. LGN-> 6-> 4-> 2/3: 1. Long-range horizontal excitation links collinear, coaxial receptive fields (Gilbert, Wiesel 1989; Bosking etal 1997; Schmidt etal 1997) 2. Short-range disynaptic inhibition of target pyramidal via pool of interneurons (Hirsch, Gilbert 1991) 3. Unambiguous groupings can form and generate feedforward outputs quickly (Thorpe etal 1996).
  • image p361fig10.10 Bipole grouping is achieved by long-range horizontal recurrent connections that also give rise to short-range inhibitory interneurons which inhibit nearby bipole cells as well as each other.
    || Bipole property controls perceptual grouping. Collinear input on both sides. Excitatory inputs summate. Inhibitory inputs normalize, Shunting inhibition! Two-against-one. Cell is excited.
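  • code sketch (Python) for p361fig10.10: the bipole property in caricature. Excitatory inputs from the two collinear flanks summate, while the mutually inhibitory interneuron pools normalize so that net inhibition tracks the larger flank signal (shunting inhibition, "two-against-one"). One flank alone cannot fire the cell; both flanks can; direct bottom-up input can. The max() shorthand for normalized inhibition and the 0.5 threshold are illustrative assumptions.
    def bipole(left, right, bottom_up=0.0):
        excite = left + right + 2.0 * bottom_up
        inhibit = max(left, right)             # normalized, not summated, inhibition
        return max(excite - inhibit - 0.5, 0.0)

    print(bipole(1.0, 0.0))          # 0.0 -> no completion from a single inducer
    print(bipole(1.0, 1.0))          # 0.5 -> grouping between two collinear inducers
    print(bipole(0.0, 0.0, 1.0))     # 1.5 -> responds to a real (bottom-up) contour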
  • image p367fig10.15 Data (left column) and simulation (right column) of how attention prevents a masking stimulus from inhibiting the response to the on-center of the cell from which the recording was made.
    || Attention protects target from masking stimulus (Reynolds etal 1999; Grossberg, Raizada 2000).
  • image p367fig10.16 Neurophysiological data (left image) and simulation (right image) of how a low-contrast target can be facilitated if it is surrounded by a pair of collinear flankers, and suppressed by them if it has high contrast.
    || Flankers can enhance or suppress targets (Polat etal 1998; Grossberg, Raizada 2000). target alone, target + flankers, flankers alone.
  • image p368fig10.17 Neurophysiological data (left image) and simulation (right image) showing that attention has a greater effect on low contrast than high contrast targets.
    || Attention has greater effect on low contrast targets (DeWeerd etal 1999; Raizada, Grossberg 2001). Threshold increase (deg) vs Grating contrast (%), [no, with] attention
  • image p368fig10.18 Neurophysiological data (left image) and simulation (right image) of relative on-cell activities when the input to that cell may also be surrounded by iso-orientation or perpendicular textures.
    || Texture reduces response to a bar: iso-orientation suppression (Knierim, van Essen 1992), perpendicular suppression (Raizada, Grossberg 2001)
  • image p369fig10.19 Data from (Watanabe etal 2001) showing perceptual learning of the coherent motion direction, despite the lack of extra-foveal attention and awareness of the moving stimuli.
    || Unconscious perceptual learning of motion direction, % correct for two tests, compared to chance level results.
  • image p393fig11.31 (Todd, Akerstrom 1987) created a series of 2D images from discrete black patches on a white disk and showed how the perceived depth varies with the factors summarized in the figure. The LIGHTSHAFT model quantitatively simulated their data.
    || Factors determining depth-from-texture percept. Perceived depth varies with texture element width, but only when elements are elongated and sufficiently aligned with one another to form long-range groupings. Data of (Todd, Akerstrom 1987) simulated by the LIGHTSHAFT model of (Grossberg, Kuhlmann 2007). [HP, LP, CCE, CCS, RO]
  • image p399fig11.39 Simulation of the eye rivalry data of (Lee, Blake 1999).
    || [Binocular, [left, right] eye] activity
  • image p402fig11.43 A pair of disparate images of a scene from the University of Tsukuba Multiview image database.
    || input [left, right]
  • image p407fig12.03 Neurophysiological data showing how motor cortical cells code different vectors that are sensitive to both the direction of the commanded movement and its length.
    || (a) Single primary motor cortex neuron, onset of movement -> on..., radial architecture... (b) Motor cortex neuronal population, radial architecture...
  • image p409fig12.04 (top half) Neurophysiological data of vector cell responses in motor cortex. (bottom half) VITE model simulations of a simple movement in which the model's ...
  • image p414fig12.11 Neurophysiological data from cortical areas 4 and 5 (every other column) and simulations thereof (other columns) during a reach.
    || activation vs time. (a) area 4 phasic RT (IFV) (b) area 4 tonic (OPV) (c) area 4 phasic-tonic (OFPV) (d) area 4 phasic MT (DVV) (e) area 5 phasic (DV) (f) area 5 tonic (PPV)
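  • code sketch (Python) for p414fig12.11: the core VITE kinematics implied by the area 4/5 cell types above. A difference vector DV = TPV - PPV (area 5 phasic) is integrated into the present position vector PPV (area 4 tonic) at a rate scaled by a volitional GO signal, so the same DV can be performed at variable speeds. The GO ramp shape and rates are illustrative assumptions.
    import numpy as np

    dt = 0.01
    tpv = np.array([10.0, 5.0])        # target position vector
    ppv = np.zeros(2)                  # present position vector (outflow command)
    for t in range(800):
        go = min(t * dt, 1.0)          # ramping volitional GO signal (assumed shape)
        dv = tpv - ppv                 # difference vector
        ppv += dt * go * dv            # GO-gated integration
    print("final PPV:", np.round(ppv, 2))   # approaches TPV as DV -> 0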
  • image p424fig12.21 One of the many types of data about pitch processing that are simulated by the SPINET model. See the text for details.
    || Pitch shifts with component shifts (Patterson, Wightman 1976; Schouten 1962). Pitch vs lowest harmonic number.
  • image p425fig12.23 ARTSTREAM simulations of the auditory continuity illusion and other streaming properties (left column, top row). When two tones are separated by silence (Input), a percept of silence also separates them in a spectral-pitch resonance. (left column, bottom row). When two tones are separated by broadband noise, the percept of tone continues through the noise in one stream (stream 1) while the remainder of the noise occurs in a different stream (stream 2). (right column) Some of the other streaming properties that have been simulated by the ARTSTREAM model.
    || Auditory continuity does not occur without noise. Auditory continuity in noise. Other simulated streaming data.
  • image p428fig12.25 (left architecture) Auditory-articulatory feedback loop whereby babbled sounds activate learning in an imitative map that is later used to learn to reproduce the sounds of other speakers. An articulatory-to-auditory expectation renders learning possible by making the auditory and motor data dimensionally consistent, as in the motor theory of speech. (right architecture) Parallel streams in the ARTSPEECH model for learning speaker-independent speech and language meaning, including a mechanism for speaker normalization (right cortical stream) and for learning speaker-dependent vocalic qualities (left cortical stream).
    || left: Speaker-dependent vocalic qualities; right: Speaker-independent speech and language meaning
  • image p432fig12.28 (left image) The SpaN model simulates how spatial representations of numerical quantities are generated in the parietal cortex. (right image) Behavior numerosity data and SpaN model simulations of it.
    || (Left) preprocessor-> spatial number map-> Comparison wave. (Right) data axis: number of lever presses; model axis: node position in the spatial number axis
  • image p437fig12.32 Data from a free recall experiment illustrate the bowed serial position curve.
    || Serial position function for free recall Data: (Murdock 1962 JEP 64, 482-488). % correct vs position of word on a 40-word list. Primacy gradient can be a mixture of STM and LTM read-out.
  • image p437fig12.33 Item and Order working memory models explain free recall data, as well as many other psychological and neurobiological data, by simulating how temporal series of events are stored as evolving spatial patterns of activity at content-addressable item categories. The categories with the largest activities are rehearsed first, and self-inhibit their activity as they do so in order to prevent them from being rehearsed perseveratively. The laws whereby the items are stored in working memory obey basic design principles concerning how list categories, or chunks, of sequences of stored items can be stably remembered.
    || Working memory models: item and order, or competitive queuing (Grossberg 1978; Houghton 1990; Page, Norris 1998). Event sequence in time stored as an evolving spatial pattern of activity. Primacy gradient of working memory activation stores correct temporal order at content-addressable cells. Maximally activated cell population is performed next when a rehearsal wave is turned on. Output signal from chosen cell population inhibits its own activity to prevent perseveration: inhibition of return. Iterate until entire sequence is performed.
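  • code sketch (Python) for p437fig12.33: item-and-order working memory read-out (competitive queuing). A primacy gradient of stored activities is rehearsed by repeatedly selecting the maximally active item category, which then self-inhibits to prevent perseveration, unfolding the spatial pattern into the correct temporal order. The gradient values are illustrative.
    import numpy as np

    items = ["A", "B", "C", "D"]
    activity = np.array([0.9, 0.7, 0.5, 0.3])   # primacy gradient: earlier = stronger

    recalled = []
    while (activity > 0).any():
        winner = int(np.argmax(activity))       # rehearsal wave selects the max
        recalled.append(items[winner])
        activity[winner] = 0.0                  # self-inhibition prevents perseveration
    print(recalled)                             # ['A', 'B', 'C', 'D']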
  • image p443fig12.41 Neurophysiological data from the Averbeck etal sequential copying experiments show the predicted primacy gradient in working memory and the self-inhibition of activity as an item is stored. When only the last item remains stored, it has the highest activity because it has been freed from inhibition by earlier items.
    || Neurophysiology of sequential copying
  • image p444fig12.42 The LIST PARSE laminar cortical model of working memory and list chunking that I published with Lance Pearson in 2008 simulated the Averbeck etal data in Figure 12.41, as in the left column of the figure. It also simulated cognitive data about working memory storage by human subjects. See the text for details.
    || LIST PARSE: Laminar cortical model of working memory and list chunking (Grossberg, Pearson 2008). Simulates data about: [immediate, delayed, continuous] distractor free recall; immediate serial recall; and variable-speed sequential performance of motor acts. [velocity, acceleration] vs time (ms) from recall cue.
  • image p446fig12.44 (left column, top row) LIST PARSE can model linguistic data from human subjects. In this figure, model parameters are fixed to enable a close fit to data about error-type distributions in immediate free recall experiments, notably transposition errors. (right column, top row) Simulation and data showing bowing of the serial position curve, including an extended primacy gradient. (left column, bottom row) The simulation curve overlays data about list length effects, notably the increasing recall difficulty of longer lists during immediate serial recall (ISR). (right column, bottom row) Simulation (bottom image) and data (top image) of the limited temporal extent for recall.
    || (1. TL) Error-type distributions in immediate serial recall (Henson etal 1996). % occurrence vs serial position. Graph convention: Data- dashed lines; Simulations- solid lines. Six letter visual ISR. Order errors- transpositions of neighboring items are the most common. Model explanation: Noisy activation levels change relative order in primacy gradient. Similar activation of neighboring items most susceptible to noise. Model parameters fitted on these data. (2. TR) Bowing of serial position curve (Cowan etal 1999). % correct vs serial position. Auditory ISR with various list lengths (graphs shifted rightward): For [, sub-]span lists- extended primacy, with one (or two) item recency; Auditory presentation- enhanced performance for last items. LIST PARSE: End effects- first and last items half as many members; Echoic memory- last presented item retained in separate store. (3. BL) List length effects, circles (Crannell, Parrish 1968), squares (Baddeley, Hitch 1975), solid line- simulation. % list correct vs list length. Variable list length ISR: longer lists are more difficult to recall. LIST PARSE: More items- closer activation levels and lower absolute activity level with enough inputs; Noise is more likely to produce order errors; Activity levels more likely to drop below threshold. (4. BR) Limited temporal extent for recall (Murdock 1961). % recalled vs retention interval (s). ISR task with distractor-filled retention intervals (to prevent rehearsal): Increasing retention interval- decreases probability of recalling list correctly; Load dependence- longer list more affected by delays; Performance plateau- subjects reach apparent asymptote. LIST PARSE: Increase convergence of activities with time; loss of order information.
  • image p447fig12.45 (left column) LIST PARSE simulations of the proportion of order errors as a function of serial position for 6 item lists with (a) an extended pause of 7 time units between the third and fourth items, and (b) pauses of 5 time units (solid curve) and 10 time units (dashed curve) between all items. (right column) Simulations (solid curves) and data (dashed curves) illustrating close model fits in various immediate free recall tasks.
    || (Left) Temporal grouping and presentation variability. Temporal grouping: Inserting an extended pause leads to inter-group bowing; Significantly different times of integration and activity levels across pause, fewer interchanges. (Right) Immediate free recall, and [delayed, continuous] distractor-free recall. Overt rehearsal IFR task with super-span (ie 20 item) lists: Extended recency- even more extended with shorter ISIs; Increased probability of recall with diminished time from last rehearsal; Early items in list rehearsed most. LIST PARSE (unique) for long lists: Incoming items form recency gradient; Rehearsal (re-presentation) based upon level of activity.
  • image p452fig12.48 (left column) In experiments of (Repp etal 1978), the silence duration between the words GRAY and SHIP was varied, as was the duration of the fricative noise in S, with surprising results. (right column) The red arrow directs our attention to surprising perceptual changes as silence and noise durations increase. See the text for details.
    || Perceptual integration of acoustic cues, data (Repp etal 1978). GRAY-> silence duration-> SHIP (noise duration from start of word). Noise duration vs silence duration: GRAY SHIP <-> [GREAT SHIP <-> GRAY CHIP] <-> GREAT CHIP.
  • image p453fig12.49 The ARTWORD model that I published in 2000 with my PhD student Christopher Myers simulates data such as the (Repp etal 1978) data in Figure 12.48. See the text for details.
    || ARTWORD model (Grossberg, Myers 2000). Input phonetic features-> Phonemic item working memory-> Masking Field unitized lists-> Automatic gain control-> Phonemic item working memory. [habituative gate, adaptive filter]s.
  • image p465fig12.63 Neurophysiological data (left image) and lisTELOS simulation (right image) showing how microstimulation biases saccadic performance order but not the positions to which the saccades will be directed. See the text for details.
    || Saccade trajectories converge to a single location in space. Microstimulation biased selection so saccade trajectories converged toward a single location in space. [Data, model] contra <-> Ipsi (msec)
  • image p468fig12.65 Linguistic properties of the PHONET model and some of the data that it simulates. The upper left image summarizes the asymmetric transient-to-sustained gain control that helps to create invariant intraword ratios during variable-rate speech. The lower left image summarizes the rate-dependent gain control of the ARTPHONE model that creates rate-invariant working memory representations in response to sequences of variable-rate speech. The right image summarizes the kind of paradoxical VC-CV category boundary data of (Repp 1980) that ARTPHONE simulates. See the text for details.
    || (left upper) [transient, sustained] [working memory, filter, category]. (left lower) phone inputs-> [input rate estimate, features], Features w <- habituative transmitter gates -> categories-> rate invariant phonetic output, input rate estimate-> gain control-> [features, categories] rate-dependent integration of categories and features. (right) % 2-stop vs VC-CV silent interval (msec): [ib-ga, ib-ba, iga, iba].
  • image p469fig12.66 (left column) A schematic of how preserving relative duration, as in the first and third images, of consonant and vowel pairs can preserve a percept, in this case of /ba/, but not doing so, as in the first and second images, can cause a change in percept, as from /ba/ to /wa/, as in the data of (Miller, Liberman 1979) that PHONET simulates. (right column) Changing frequency extent can also cause a /ba/ - /wa/ transition, as shown in data of (Schwab, Sawusch, Nusbaum 1981) that PHONET also simulates.
    || (left image) Maintaining relative duration as speech speeds up preserves percept (Miller, Liberman 1979). frequency vs time- [/ba/, /wa/, /ba/] (right image) Changing frequency extent causes /ba/-/wa/ transition (Schwab, Sawusch, Nusbaum 1981). frequency vs time- [/ba/, /wa/] Dt extent.
  • image p473fig12.69 Error rate and mean reaction time (RT) data from the lexical decision experiments of (Schvaneveldt, McDonald 1981). ART Matching Rule properties explain these data in (Grossberg, Stone 1986).
    || (left) Error rate vs type of prime [R, N, U], [non,] word. (right) Mean RT (msec) vs type of prime [R, N, U], [non,] word.
  • image p474fig12.70 The kind of model macrocircuit that was used in (Grossberg, Stone 1986) to explain lexical decision task data.
    || inputs-> A1 <-> A2 iconic sensory features <-> A3 item and order in sensory STM <-> A4 list parsing in STM (masking field) <-> A5 semantic network (self-feedback). [A4, A5] <-> V* visual object recognition system. M1-> [outputs, A1]. M1 <-> M2 iconic motor features <-> M3 item and order in motor STM. A2-> M2. A3-> M3.
  • image p476fig12.71 Word frequency data of (Underwood, Freund 1970) that were explained in (Grossberg, Stone 1986).
    || percent errors vs frequency of old words [L-H to H-H, L-L to H-L].
  • image p484fig13.04 The top-down feedback from the orbitofrontal cortex closes a feedback loop that supports a cognitive-emotional resonance. If this resonance can be sustained long enough, it enables us to have feelings at the same time that we experience the categories that caused them.
    || Cognitive-Emotional resonance. Basis of "core consciousness" and "the feeling of what happens". (Damasio 1999) derives heuristic version of CogEM model from his clinical data. Drive-> amygdala-> prefrontal cortex-> sensory cortex, resonance around the latter 3. How is this resonance maintained long enough to become conscious?
  • image p485fig13.06 (left column) An inverted-U occurs in conditioned reinforcer strength as a function of the ISI between the CS and the US. Why is learning attenuated at 0 ISI? (right column) Some classical conditioning data that illustrate the inverted-U in conditioning as a function of the ISI.
    || InterStimulus Interval (ISI) effect. Data from (Smith etal 1969; Schneiderman, Gormezano 1964).
  • image p490fig13.15 The CogEM circuit is an ancient design that is found even in mollusks like Aplysia. See the text for details.
    || Aplysia (Buonomano, Baxter, Byrne, Neural Networks 1990; Grossberg, Behavioral and Brain Sciences 1983). Facilitator neuron ~ drive representation.
  • image p504fig13.31 Behavioral contrast can occur during reinforcement learning due to decreases in either positive or negative reinforcers. See Figure 13.32 for illustrative operant conditioning data.
    || Behavioral contrast: rebounds! Shock level vs trials. 1. A sudden decrease in frequency or amount of food can act as a negative reinforcer: Frustration. 2. A sudden decrease in frequency or amount of shock can act as a positive reinforcer: Relief.
  • image p523fig14.03 (a) The MOTIVATOR neural model generalizes CogEM by also including the basal ganglia. It can hereby explain and simulate complementary functions of the amygdala and basal ganglia (SNc) during conditioning and learned performance. The basal ganglia generate Now Print signals in response to unexpected rewards. These signals modulate learning of new associations in many brain regions. The amygdala supports motivated attention to trigger actions that are expected to occur in response to conditioned or unconditioned stimuli. Object Categories represent visual or gustatory inputs in anterior inferotemporal (ITA) and rhinal (RHIN) cortices, respectively. Value Categories represent the value of anticipated outcomes on the basis of hunger and satiety inputs, in amygdala (AMYG) and lateral hypothalamus (LH). Object-Value Categories resolve the value of competing perceptual stimuli in medial (MORB) and lateral (ORB) orbitofrontal cortex. The Reward Expectation Filter detects the omission or delivery of rewards using a circuit that spans ventral striatum (VS), ventral pallidum (VP), striosomal delay (SD) cells in the ventral striatum, the pedunculopontine nucleus (PPTN) and midbrain dopaminergic neurons of the substantia nigra pars compacta/ventral tegmental area (SNc/VTA). The circuit that processes CS-related visual information (ITA, AMYG, ORB) operates in parallel with a circuit that processes US-related visual and gustatory information (RHIN, AMYG, MORB). (b) Reciprocal adaptive connections between hypothalamus and amygdala enable amygdala cells to become learned value categories. The bottom region represents hypothalamic cells, which receive converging taste and metabolite inputs whereby they become taste-drive cells. Bottom-up signals from activity patterns across these cells activate competing value category, or US Value Representations, in the amygdala. A winning value category learns to respond selectively to specific combinations of taste-drive activity patterns and sends adaptive top-down priming signals back to the taste-drive cells that activated it. CS-activated conditioned reinforcer signals are also associatively linked to value categories. Adaptive connections end in (approximately) hemidiscs. See the text for details.
    ||
  • image p533fig14.09 Search data and ARTSCENE Search simulations of them in each pair of images from (A) to (F). See the text for details.
    || 6*[data vs simulation], [Response time (ms) versus epoch].
  • image p542fig15.04 Conditioning data from (Smith 1968; Millenson etal 1977). The former shows the kind of Weber Law and inverted U that were simulated in Figure 15.3. The latter shows that, if there are two ISIs during an experiment, then the animals learn to adaptively time their responses with two properly scaled Weber laws.
    || (left) One ISI (Smith 1968) [mean membrane extension (mm) versus time after CS onset (msec)]. (right) Two ISIs (Millenson etal 1977) [200, 100] msec CS test trials, [mean momentary CS amplitude (mm) vs time after CS onset (msec)]. (bottom) Conditioned eye blinks, made with nictitating membrane and/or eyelid, are adaptively timed: peak closure occurs at expected time(s) of arrival of the US following the CS and obeys a Weber Law.
  • image p543fig15.05 Simulation of conditioning with two ISIs that generate their own Weber Laws, as in the data shown in Figure 15.4.
    || Learning with two ISIs: simulation: R = sum[all: f(xi)*yi*xi] vs msec. Each peak obeys Weber Law! strong evidence for spectral learning.
  • image p556fig15.24 (a) Data showing normally timed responding (solid curve) and short latency responses after lesioning cerebellar cortex (dashed curve). (b) computer simulation of short latency response after ablation of model cerebellar cortex.
    ||
  • image p559fig15.28 Neurophysiological data (left column) and model simulations (right column) of SNc responses. See the text for details.
    || membrane potential vs time
  • image p573fig16.01 The experimental chamber (A) and neurophysiological recordings from a rat hippocampus (B) that led to the discovery of place cells. See the text for details.
    ||
  • image p574fig16.02 Neurophysiological recordings of 18 different place cell receptive fields. See the text for details.
    ||
  • image p575fig16.03 As a rat navigates in its experimental chamber (black curves), neurophysiological recordings disclose the firing patterns (in red) of (a) a hippocampal place cell and (b) an entorhinal grid cell.
    ||
  • image p582fig16.08 Some experimental evidence for stripe-like cell receptive fields has been reported. The band cells posited by Neil Burgess also exhibit the one-dimensional firing symmetry of stripe cells, but are modeled by oscillatory interference. See the text for details.
    || Evidence for stripe-like cells. Entorhinal cortex data (Sargolini, Fyhn, Hafting, McNaughton, Witter, Moser, Moser 2006; Krupic, Burgess, O'Keefe 2012).
  • image p589fig16.19 Neurophysiological data showing the smaller dorsal grid cell scales and the larger ventral grid cell scales.
    || Spatial scale of grid cells increases along the MEC dorsoventral axis (Hafting etal 2005; Sargolini etal 2006; Brun etal 2008). [dorsal (left), ventral (right)] cart [rate map, autocorrelogram]. How does the spatial scale increase along the MEC dorsoventral axis?
  • image p593fig16.26 Data (left column) and simulations (right column) of the gradient of increasing grid cell spacing along the dorsoventral axis of MEC.
    || Gradient of grid spacing along dorsoventral axis of MEC (Brun etal 2008). data-[Distance (m?), Median grid spacing (m?)] simulations-[Grid spacing (cm), Grid spacing (cm)] vs response rate.
  • image p594fig16.27 Data (left column) and simulations (right column) of the gradient of increasing grid cell field width along the dorsoventral axis of MEC.
    || Gradient of field width along dorsoventral axis of MEC (Brun etal 2008). data-[Distance (m?), Width autocorr peak (m?)] simulations-[Grid field width (cm), Width autocorr peak (cm)] vs response rate.
  • image p595fig16.28 Data (left column) and simulations (right column) about peak and mean grid cell response rates along the dorsoventral axis of MEC.
    || Peak and mean rates at different locations along DV axis of MEC (Brun etal 2008). Peak rate (Hz) vs [data- DV quarter, simulations- Response rate].
  • image p596fig16.29 Data (top row) and simulations (bottom row) showing decreasing frequency of subthreshold membrane potential oscillations along the DV axis of MEC.
    || Subthreshold membrane potential oscillations at different locations along DV axis of MEC (Giocomo etal 2020; Yoshida etal 2011). Data [oscillations (Hz) vs distance from dorsal surface (mm) @[-50, -45] mV, Frequency (Hz) vs [-58, -54, -50] mV]. Simulations MPO frequency (Hz) vs [response, habituation] rate.
  • image p596fig16.30 Data (top row) and simulations (bottom row) of spatial phases of learned grid and place cells.
    || Spatial phases of learned grid and place cells (Hafting etal 2005). Data: Cross-correlogram of rate maps of two grid cells; Distribution of phase difference: distance from origin to nearest peak in cross-correlogram. Simulations: Grid cell histogram of spatial correlation coefficients; Place cell histogram of spatial correlation coefficients.
  • image p597fig16.31 Data (a) and simulations (b-d) about multimodal place cell receptive fields in large spaces. The simulations are the result of learned place fields.
    || Multimodal place cell firing in large spaces (Fenton etal 2008; Henriksen etal 2010; Park etal 2011). Number of cells (%) vs Number of place fields. [2, 3] place fields, 100*100 cm space.
  • image p597fig16.32 Data (top row) and simulations (bottom row) about grid cell development in juvenile rats. Grid score increases (a-b and d), whereas grid spacing remains fairly flat (c and e).
    || Model fits data about grid cell development (Wills etal 2010; Langston etal 2010). Data: [Gridness, grid score, inter-field distance (cm)]. Simulations: [Gridness score, Grid spacing (cm)] vs trial.
  • image p598fig16.33 Data (top row) and simulations (bottom row) of changes in place cell properties in juvenile rats, notably about spatial information (a,c) and inter-trial stability (b,d).
    || Model fits data about grid cell development (Wills etal 2010). [Data, Simulation] vs [spatial information, inter-trial stability]. x-axis [age (postnatal day), trial].
  • image p599fig16.35 Data (a) and simulations (b,c) about anatomically overlapping grid cell modules. (a) shows the anatomical distribution of grid cells belonging to different modules in one animal. DV location (mm) vs postrhinal border. (b) shows the simulated distribution of learned grid cell spacings from two stripe cell scales. frequency (%) vs grid spacing (cm). mu = [1, 0.6]. (c) shows what happens when half the cells respond with one rate and half another rate. (d) shows the same with three rates. (e-g) show spatial maps and autocorrelograms of grid cells that arise from the different rates in (d). [rate map, autocorrelogram] vs [score [1.07, 0.5, 0.67], spacing (cm) [23.58, 41, 63.64]].
    ||
  • image p602fig16.37 Data showing the effect of hippocampal inactivation by muscimol on grid cell firing before, during, and six hours after the muscimol, reading from left to right.
    || Hippocampal inactivation disrupts grid cells (Bonnevie etal 2013). muscimol inactivation. spikes on trajectory: [before, after min [6-20, 20-40, 40-60, 6h]]. rate map (Hz) [18.6, 11.4, 9.5, 6.7, 10.8]. spatial autocorrelogram g=[1.12, 0.05, -0.34, 0.09, 1.27].
  • image p603fig16.38 Role of hippocampal feedback in maintaining grid fields. (a) Data showing the effect of hippocampal inactivation before and during muscimol inhibition of hippocampal cells, as in Figure 16.37. (b) Model simulation with normal grid fields. (c) Model simulation that emulates the effect of hippocampal inhibition on grid fields.
    || (a) Data: hippocampal inactivation [before, after] cart [spikes on trajectory (p: [18.6, 6.7] Hz), spatial autocorrelogram (g= [1.12, 0.09])]. (b) Model: noise-free path integration, [spikes on trajectory (p: 14.56 Hz), rate map, spatial autocorrelogram (g= 1.41), dynamic autocorrelogram (g=0.6)]. (c) Model: noisy path integration + non-specific tonic inhibition, [spikes on trajectory (p: 11.33 Hz), rate map, spatial autocorrelogram (g= 0.05), dynamic autocorrelogram (g=0.047)].
  • image p605fig16.39 Data showing effects of medial septum (MS) inactivation on grid cells and network theta oscillations in medial entorhinal cortex (MEC). (A) Examples of disruption in the spatial expression of the hexagonal grid structure for two grid cells (Brandon etal 2011). (B) Temporary reduction in the power and frequency of network theta oscillations (Koenig etal 2011). (C) Temporary reduction in the gridness score, mean firing rate, and spatial stability of grid cells (Koenig etal 2011).
    || Disruptive effects of Medial Septum inactivation in Medial Entorhinal Cortex (Brandon etal 2011; Koenig etal 2011). (A) Rate map [rate map, spatial autocorrelations, trajectory] vs [baseline, sub-sampled, medial septum inactivation, 3-6 hour recovery, 24 hour recovery], [rate map (Hz- m, p), spatial autocorrelations (gridness)][ 1.2, 7.2, 1.1; 0.25, 1.7, 0.6; 0.25, 2.5, -0.53; 0.7, 5.1, 0.55; 1.0, 5.3, 1.3; 2.1, 15, 0.19; 1.7, 12, 0.71; 1.7, 3.2, -0.22; 1.8, 9.1, 0.68; 2.5, 13, 0.46]. (B) [normalized power at 7-9 Hz, frequency (Hz)] vs 5-minute periods. (C) [mean gridness score (+-SEM), mean firing rate (% of baseline), mean correlation coeff (+-SEM)] vs 10-minute periods.
  • image p607fig16.40 Effects of medial septum (MS) inactivation on grid cells. (a) Each row shows data and different data-derived measures of grid cell responsiveness, starting from the left with the baseline response to the middle column with maximal inhibition. (b) Data showing the temporary reduction in the gridness scores during MS inactivation, followed by recovery. (c) Simulation of the collapse in gridness, achieved by reduction in cell response rates to mimic reduced cholinergic transmission. (d,e) Simulations of the reduction in gridness scores in (d) by reduction of cell response rates, in (e) by changing the leak conductance. See the text for details.
    ||
  • Grossberg 2021 p229c2h0.60 SMART computer simulations demonstrate that a good enough match of a top-down expectation with a bottom-up feature pattern generates an attentive resonance during which the spikes of active cells synchronize in the gamma frequency range of 20-70 Hz (Figure 5.40). Many labs have reported a link between attention and gamma oscillations in the brain, including two articles published in 2001, one from the laboratory of Robert Desimone when he was at the National Institute of Mental Health in Bethesda (Fries, Reynolds, Rorie, Desimone 2001), and the other from the laboratory of Wolf Singer in Frankfurt (Engel, Fries, Singer 2001). ...
  • Grossberg 2021 p081c2h0.66 sub-section: How does one evolve a computational brain?
    The above discussion illustrates that no single step of theoretical derivation can derive a whole brain. One needs a method for deriving a brain in stages, or cycles, much as evolution has incrementally discovered ever more complex brains over many thousands of years. The following theoretical method has been successfully applied many times since I first used it in 1957. It embodies a kind of conceptual evolutionary process for deriving a brain.

    Because "brain evolution needs to achieve behavioural success", we need to start with data that embodiey indices of behavioral success. That is why, as illustrated in Figure 2.37 Modelling method and cycle, one starts with Behavioral Data from scores or hundreds of psychological experiments. These data are analyszed as the result of an individual adapting autonomously in real time to a changing world. This is the Arty of Modeling. It requires that one be able to infer from static data curves the dynamical processes that control individual behaviors occuring in real time. One of the hardest things that I teach to my students to do is "how to think in real time" to be able to carry out this speculative leap.

    Properly carried out, this analysis leads to the discovery of new Design Principles that are embodied by these behavioral processes. The Design Principles highlight the functional meaning of the data, and clarify how individual behaviors occurring in real time give rise to these static data curves.

    These principles are then converted into the simplest Mathematical Model using a method of minimal anatomies, which is a form of Occam's Razor.
  • p190 Howell: [neural microcircuits, modal architectures] used in ART -
    bottom-up filters | top-down expectations | purpose
    instar learning | outstar learning | p200fig05.13 Expectations focus attention: [in, out]star often used to learn the adaptive weights. top-down expectations to select, amplify, and synchronize expected patterns of critical features, while suppressing unexpected features
    LGN Lateral Geniculate Nucleus | V1 cortical area | p192fig05.06 focus attention on expected critical features
    EC-III entorhinal stripe cells | CA1 hippocampal place cells | p600fig16.36 entorhinal-hippocampal system has properties of an ART spatial category learning system, with hippocampal place cells as the spatial categories
    auditory orienting arousal | auditory category | p215fig05.28 How a mismatch between bottom-up and top-down patterns can trigger activation of the orienting system A and, with it, a burst of nonspecific arousal to the category level. ART Matching Rule: TD mismatch can suppress a part of F1 STM pattern, F2 is reset if degree of match < vigilance
    auditory stream with/without [silence, noise] gaps | perceived sound continues? | p419fig12.17 The auditory continuity illusion: Backwards in time - How does a future sound let past sound continue through noise? Resonance!
    visual perception, learning and recognition of visual object categories | motion perception, spatial representation and target tracking | p520fig14.02 pART predictive Adaptive Resonance Theory. Many adaptive synapses are bidirectional, thereby supporting synchronous resonant dynamics among multiple cortical regions. The output signals from the basal ganglia that regulate reinforcement learning and gating of multiple cortical areas are not shown.
    red - cognitive-emotional dynamics
    green - working memory dynamics
    black - see [bottom-up, top-down] lists
    EEG with mismatch, arousal, and STM reset events | expected [P120, N200, P300] event-related potentials (ERPs) | p211fig05.21 Sequences of P120, N200, and P300 event-related potentials (ERPs) occur during oddball learning EEG experiments under conditions that ART predicted should occur during sequences of mismatch, arousal, and STM reset events, respectively.
    Cognitive | Emotional | p541fig15.02 nSTART neurotrophic Spectrally Timed Adaptive Resonance Theory: Hippocampus can sustain a Cognitive-Emotional resonance: that can support "the feeling of what happens" and knowing what event caused that feeling. Hippocampus enables adaptively timed learning that can bridge a trace conditioning gap, or other temporal gap between Conditioned Stimulus (CS) and Unconditioned Stimulus (US).

    background colours in the table signify :
  • image p085fig02.38 Our models have been used in many large-scale applications to engineering and technology. Linking brain to behavior explains how brain mechanisms give rise to psychological functions, and do so autonomously. The combination of mechanism, function, and autonomy helps to explain their value in helping to solve outstanding problems in technology.
    || Modeling method and cycle.
    Behavioral data -(art of modeling)-> Design principles <- Neural data <-(brain predictions)- Mathematical model and analysis -(behavioral predictions)-> Behavioural data
    Technology: Mathematical model and analysis <-> Technological applications
    At every stage, spin off new model designs and mechanisms to technologists who need autonomous intelligent applications.
  • image p225fig05.33 Some early ARTMAP benchmark studies. These successes led to the use of ARTMAP, and many variants that we and other groups have developed, in many large-scale applications in engineering and technology, a stream of work that has not abated even today.
    || see Early ARTMAP benchmark studies
  • image p490fig13.15 The CogEM circuit is an ancient design that is found even in mollusks like Aplysia. See the text for details.
    || Aplysia (Buonomano, Baxter, Byrne, Neural Networks 1990; Grossberg, Behavioral and Brain Sciences 1983). Facilitator neuron ~ drive representation.
  • image p563fig15.33 The basal ganglia gate neural processing in many parts of the brain. The feedback loop through the lateral orbitofrontal cortex (blue arrow, lateral orbitofrontal) is the one that MOTIVATOR models.
    || MOTIVATOR models one of several thalamocortical loops through basal ganglia (Adapted from Fundamental Neuroscience. 2002 Copyright Elsevier). [cortex-> striatum-> pallidum S. nigra-> thalamus] vs [motor, oculomotor, dorsolateral prefrontal, lateral orbitofrontal, anterior cingulate]. thalamus-> [striatum, cortex].
  • image p563fig15.34 The colored regions are distinct parts of the basal ganglia in the loops depicted in Figure 15.33.
    || Distinct basal ganglia zones for each loop (Adapted from Fundamental Neuroscience. 2002 Copyright Elsevier).
  • p370 Chapter 11 means (Grossberg 2021) page 370, Chapter 11
    p002sec Illusion and reality means (Grossberg 2021) page 2, section Illusion and reality
    p013fig01.09 means (Grossberg 2021) page 13, Figure 1.09 (1.9 as in book)
    p030tbl01.02 means (Grossberg 2021) page 30, Table 1.02 (1.2 as in book)
    p111c2h0.5 means (Grossberg 2021) page 111, column 2, height from top as fraction of page height
    || text...Are notes in addition to [figure, table] captions, mostly comprised of text within the image, but also including quotes of text in the book. Rarely, it includes comments by Howell preceded by "Howell". The latter are distinct from "readers notes" (see, for example : reader Howell notes).
    p044 Howell: grepStr
  • p00I Preface - Biological intelligence in sickness, health, and technology
  • p001 Chapter 1 Overview - From Complementary Computing and Adaptive Resonance to conscious awareness
  • p050 Chapter 2 How a brain makes a mind - Physics and psychology split as brain theories were born
  • p086 Chapter 3 How a brain sees: Constructing reality - Visual reality as illusions that explain how we see art
  • p122 Chapter 4 How a brain sees: Neural mechanisms - From boundary completion and surface filling-in to figure-ground perception
  • p184 Chapter 5 Learning to attend, recognize, and predict the world -
  • p250 Chapter 6 Conscious seeing and invariant recognition - Complementary cortical streams coordinate attention for seeing and recognition
  • p280 Chapter 7 How do we see a changing world? - How vision regulates object and scene persistence
  • p289 Chapter 8 How we see and recognize object motion - Visual form and motion perception obey complementary laws
  • p337 Chapter 9 Target tracking, navigation, and decision-making - Visual tracking and navigation obey complementary laws
  • p353 Chapter 10 Laminar computing by cerebral cortex - Towards a unified theory of biological and artificial intelligence
  • p370 Chapter 11 How we see the world in depth - From 3D vision to how 2D pictures induce 3D percepts
  • p404 Chapter 12 From seeing and reaching to hearing and speaking - Circular reaction, streaming, working memory, chunking, and number
  • p480 Chapter 13 From knowing to feeling - How emotion regulates motivation, attention, decision, and action
  • p517 Chapter 14 How prefrontal cortex works - Cognitive working memory, planning, and emotion conjointly achieve valued goals
  • p539 Chapter 15 Adaptively timed learning - How timed motivation regulates conscious learning and memory consolidation
  • p572 Chapter 16 Learning maps to navigate space - From grid, place, and time cells to autonomous mobile agents
  • p618 Chapter 17 A universal development code - Mental measurements embody universal laws of cell biology and physics
  • image pxvifig00.01 Macrocircuit of the visual system
  • image p002fig01.01 The difference between seeing and recognizing.
    || (W. Epstein, R. Gregory, H. von Helmholtz, G. Kanizsa, P. Kellman, A. Michotte...) Seeing an object vs knowing what it is. Seeing Ehrenstein illusion (see, recognize) vs recognizing offset grating (do not see, recognize). Offset grating: some boundaries are invisible or amodal.
  • image p002fig01.02 Dalmatian in snow
    || p002c2h0.55 "...This image reminds us that invisible boundaries can sometimes be very useful in helping us to recognize visual objects in the world. ... When we first look at this picture, it may just look like an array of black splotches of different sizes, densities, and orientations across the picture. Gradually, however, we can recognize the Dalmatian in it as new boundaries form in our brain between the black splotches. ..."
  • image p003fig01.03 Amodal completion
    || p003c1h0.75 "... Figure 1.3 illustrates what I mean by the claim that percepts derived from pictures are often illusions. Figure 1.3 (left column) shows three rectangular shapes that abut one another. Our percept of this image irresistibly creates a different interpretation, however. We perceive a horizontal bar lying in front of a partially occluded vertical bar that is amodally completed behind it. ..."
  • image p004fig01.04 (top row) Kanizsa stratification; (bottom row) transparency images
    || [top row images] "... are called stratification percepts... This simple percept can ... be perceived either as a white cross in front of a white outline square, or as a white outline square in front of a white cross. The former percept usually occurs, but the percept can intermittently switch between these two interpretations. ...it is said to be a bistable percept. ..."
  • image p008fig01.05 Noise-saturation dilemma.
    || cell activity vs cell number; [minimum, equilibrium, current, maximal] activity
  • image p009fig01.06 Primacy gradient of activity stored in working memory within a recurrent shunting on-center off-surround network. Rehearsal is controlled by a nonspecific rehearsal wave and self-inhibitory feedback of the item that is currently being rehearsed. Green = excitatory, red = inhibitory.
    || inputs? -> item and order WM storage -> competitive selection-> rehearsal wave -> outputs
  • image p011fig01.07 The choice of signal function f determines how an initial activity pattern will be transformed and stored in short-term memory (STM). Among [same, slower, faster]-than-linear signal functions, only the last one can suppress noise. It does so as it chooses the population that receives the largest input for storage, while suppressing the activities of all other populations, thereby giving rise to a winner-take-all choice.
    || initial pattern (xi(0) vs i):
    f | Xi(∞) = xi(∞)/sum[j: xj(∞)] | x(∞)
    linear | perfect storage of any pattern | amplifies noise (or no storage)
    slower-than-linear | saturates | amplifies noise
    faster-than-linear | chooses max [winner-take-all, Bayesian], categorical perception | suppresses noise, [normalizes, quantizes] total activity, finite state machine
  • image p012fig01.08 A sigmoidal signal function is a hybrid signal that combines the best properties of [faster, same, slower]-than linear signals. It can suppress noise and store a partially contrast-enhanced activity pattern. slower-than-linear saturates pattern; approximately linear- preserves pattern and normalizes; faster-than-linear- noise suppression and contrast-enhancement.
    || Sigmoidal signal: a hybrid. (upper) saturates pattern- slower-than-linear; (middle) preserves pattern and normalizes- approximately linear. (lower) noise suppression and contrast enhancement- faster-than-linear.
  • image p013fig01.09 A sigmoid signal function generates a quenching threshold below which cell activities are treated like noise and suppressed. Activities that are larger than the quenching threshold are contrast enhanced and stored in short-term memory.
    || Quenching threshold. xi(0) vs i.
    f | Xi(∞) = xi(∞)/sum[j: xj(∞)] | x(∞)
    sigmoid | tunable filter; stores infinitely many contrast-enhanced patterns | suppresses noise
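  • Howell: a minimal Python sketch (my own illustration, not from the book; the parameters A=0.1, B=2, the specific signal functions, and Euler integration are assumptions) of how the choice of signal function f transforms a pattern stored by the recurrent shunting on-center off-surround network d[dt: xi] = -A*xi + (B - xi)*f(xi) - xi*sum[k≠i: f(xk)] of the two figures above:
      # Compare how linear, faster-than-linear, and sigmoid signal functions
      # transform an initial activity pattern that is stored in STM.
      import numpy as np

      def settle(f, x0, A=0.1, B=2.0, dt=0.01, steps=20000):
          x = np.array(x0, dtype=float)
          for _ in range(steps):
              fx = f(x)
              x += dt * (-A * x + (B - x) * fx - x * (fx.sum() - fx))
          return x

      x0 = [0.02, 0.03, 0.30, 0.35]          # two small "noise" items, two signals
      cases = {
          "linear":             lambda x: x,                        # preserves all ratios, noise included
          "faster-than-linear": lambda x: x ** 2,                   # winner-take-all choice
          "sigmoid":            lambda x: x ** 2 / (0.09 + x ** 2), # quenching threshold
      }
      for name, f in cases.items():
          xf = settle(f, x0)
          print(f"{name:20s} Xi = {np.round(xf / max(xf.sum(), 1e-12), 3)}")
    With these assumed parameters, the linear signal stores the full relative pattern (noise and all), the faster-than-linear signal suppresses everything except the largest item, and the sigmoid quenches the two small items while storing the two large ones at contrast-enhanced, nearly equal activities.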
  • image p016fig01.10 The blocking paradigm shows how sensory cues that are conditioned to predict specific consequences can attentionally block other cues that do not change those predictions. On the other hand, if the total cue context is changed by adding a cue that does not change the predicted consequences, then the new cues can be conditioned to the direction of that change. They can hereby learn, for example, to predict fear if the shock level unexpectedly increases, or relief if the shock level unexpectedly decreases.
    || Minimal adaptive prediction. blocking- CS2 is irrelevant, unblocking- CS2 predicts US change. Learn if CS2 predicts a different (novel) outcome than CS1. CS2 is not redundant.
  • image p016fig01.11 A sufficiently big mismatch between a bottom-up input pattern and a top-down expectation can activate the orienting system, which triggers a burst of nonspecific arousal that can reset the recognition category that read out the expectation. In this way, unexpected events can reset short-term memory and initiate a search for a category that better represents the current situation.
    || [category- top-down (TD) expectation; Bottom-up (BU) input pattern] -> Feature pattern -> BU-TD mismatch -> orienting system -> non-specific arousal -> category.
  • image p018fig01.12 Peak shift and behavioural contrast. When a negative generalization gradient (in red) is subtracted from a positive generalization gradient (in green), the net gradient (in purple) is shifted away from the negative gradient and has a width that is narrower than any of its triggering gradients. Because the total activity of the network tends to be normalized, the renormalized peak of the net gradient is higher than that of the rewarded gradient, thereby illustrating that we can prefer experiences that we have never previously experienced over those for which we have previously been rewarded.
    ||
  • image p019fig01.13 Affective circuits are organized into opponent channels, such as fear vs. relief, and hunger vs. frustration. On a larger scale of affective behaviours, exploration and consummation are also opponent types of behaviour. Exploration helps to discover novel sources of reward. Consummation enables expected rewards to be acted upon. Exploration must be inhibited to enable an animal to maintain attention long enough upon a stationary reward in order to consume it.
    || exploration vs consummation
  • image p023fig01.14 A gated dipole opponent process can generate a transient antagonistic rebound from its OFF channel in response to offset of an input J to its ON channel. Sustained on-response; transient off-response; opponent process; gates arousal: energy for rebound.
    ||
  • image p024fig01.15 A REcurrent Associative Dipole, or READ, circuit is a recurrent shunting on-center off-surround network with habituative transmitter gates. Sensory cues sample it with LTM traces and thereby become conditioned reinforcers.
    ||
  • image p025fig01.16 (left panel) The main processing stages of the Cognitive-Emotional-Motor (CogEM) model have anatomical interpretations in terms of sensory cortex, amygdala, and prefrontal cortex. Chapter 13 will describe in greater detail how CS cues activate invariant object categories in the sensory cortex, value categories in the amygdala, and object-value categories in the prefrontal cortex, notably the orbitofrontal cortex. The amygdala is also modulated by internal drive inputs like hunger and satiety. (right panel) Anatomical data support this circuit, as do many neurophysiological data.
    || drive -> amygdala -> prefrontal cortex <-> sensory cortex -> amygdala. [visual, somatosensory, auditory, gustatory, olfactory] cortex -> [amygdala, Orbital Prefrontal Cortex]. amygdala -> Lateral Prefrontal Cortex
  • image p025fig01.17 Sensory-drive heterarchy vs. drive hierarchy. How cues and drives interact to choose the drive and motivation that will control behavioral choices.
    || [drive inputs, sensory cue [before, after] cross-over] -> incentive motivation [eat, sex].
  • image p026fig01.18 Inverted U as a function of arousal. A Golden Mean at intermediate levels of arousal generates a combination of behavioral threshold, sensitivity, and activation that can support typical behaviors. Both underarousal and overarousal lead to symptoms that are found in mental disorders.
    || Behavior vs arousal.
    depression | under-aroused | over-aroused
    threshold | elevated | low
    excitable above threshold | Hyper | Hypo
    "UPPER" brings excitability "DOWN".
  • image p027fig01.19 The ventral What stream is devoted to perception and categorization. The dorsal Where stream is devoted to spatial representation and action. The Where stream is also often called the Where/How stream because of its role in the control of action.
    ||
    Spatial representation of action | Perception, categorization
    WHERE dorsal | WHAT ventral
    Parietal pathway "where" | Temporal pathway "what"
    Posterior Parietal Cortex (PPC) | Inferior Temporal Cortex (IT)
    Lateral Prefrontal Cortex (LPFC) | Lateral Prefrontal Cortex (LPFC)
  • image p029tbl01.01 Some pairs of complementary processing streams.
    ||
    visual boundary: interblob stream V1-V2-V4 | visual surface: blob stream V1-V2-V4
    visual boundary: interblob stream V1-V2-V4 | visual motion: magno stream V1-MT-MST
    WHAT stream | WHERE stream
    perception & recognition: inferotemporal & prefrontal areas | space & action: parietal & prefrontal areas
    object tracking: MT interbands & MSTv | optic flow navigation: MT+ bands & MSTd
    motor target position: motor & parietal cortex | volitional speed: basal ganglia
  • image p030tbl01.02 The What and Where cortical processing streams obey complementary laws. These laws enable the What stream to rapidly and stably learn invariant object categories without experiencing catastrophic forgetting, while the Where stream learns labile spatial and action representations to control actions that are aimed towards these objects.
    ||
    WHAT | WHERE
    spatially-invariant object learning and recognition | spatially-variant reaching and movement
    fast learning without catastrophic forgetting | continually update sensory-motor maps and gains
    IT InferoTemporal Cortex | PPC Posterior Parietal Cortex
    | What | Where
    matching | excitatory | inhibitory
    learning | match | mismatch
  • image p030fig01.20 A schematic cross-section of a slice of laminar neocortex whose cells are organized in a characteristic way in six layers, which themselves may be organized into distinct sublaminae. The computational paradigm of Laminar Computing attempts to show how different parts of neocortex can represent and control very different kinds of behavior - including vision, speech, and cognition - using specializations of the same canonical laminar cortical design.
    || Projection fibres: Cortico[spinal, bulbar, pontine, striate, reticular, etc]; Thalamocortical fibres; Diffuse cortical afferent fibres: [nonspecific thalamocortical, Cholinergic, Monoaminergic]; Corticocortical efferents; Projection [cell, fibre]; Corticocortical efferent terminals.
  • image p032fig01.21 At least three parallel visual cortical streams respond to visual inputs that reach the retina. Two parvocellular streams process visual surfaces (blob stream) and visual boundaries (interblob stream). The magnocellular stream processes visual motion.
    || [Retina, LGNs, V[1,2,3,4], MT] to [What- inferotemporal areas, Where- parietal areas]: visual parallel streams [2x blob, 1x bound]
  • image p035fig01.22 A classical example of phonemic restoration. The spectrogram of the word "legislatures" is either excised, leaving a silent interval, or filled with broad-band noise. A percept of the restored phoneme is heard when it is replaced by noise, but not by silence.
    || [normal, silence, noise replaced] presentations. frequency (Hz) vs time (sec).
  • image p036fig01.23 As more items are stored in working memory through time, they can select larger chunks with which to represent the longer list of stored items.
    || [x, y, z] -> [xy, xyz]
  • image p037fig01.24 Only three processing stages are needed to learn how to store and categorize sentences with repeated words in working memory. See the text for more discussion.
    || IOR working memory (item chunk-> sequences) <-> IOR masking field: [item->list]<->[list->list] chunks. (<-> signifies <- expectation/attention, adaptive filter ->)
  • image p038fig01.25 The ART Matching Rule stabilizes real time learning using a [top-down, modulatory on-center, off-surround] network. Object attention is realized by such a network. See text for additional discussion.
    || ART Matching Rule [volition, categories, features]. [one, two] against one.
  • image p039tbl01.03 The link between consciousness and movement
    ||
    VISUAL | seeing, knowing, and reaching
    AUDITORY | hearing, knowing, and speaking
    EMOTIONAL | feeling, knowing, and acting
  • image p042tbl01.04 The six main kinds of resonances which support different kinds of conscious awareness that will be explained and discussed in this book.
    ||
    type of resonance | type of consciousness
    surface-shroud | see visual object or scene
    feature-category | recognize visual object or scene
    stream-shroud | hear auditory object or stream
    spectral-pitch-and-timbre | recognize auditory object or stream
    item-list | recognize speech and language
    cognitive-emotional | feel emotion and know its source
  • image p051fig02.01 Along the boundaries between adjacent shades of gray, lateral inhibition makes the darker areas appear even darker, and the lighter areas appear even lighter (Ernst Mach bands).
    ||
  • image p052fig02.02 Feature-category resonances enable us to rapidly learn how to recognize objects without experiencing catastrophic forgetting. Attentive matching between bottom-up feature pattern inputs and top-down expectations prevents catastrophic forgetting by focussing object attention upon expected patterns of features, while suppressing outlier features that might otherwise have caused catastrophic forgetting if they were also learned.
    || Adaptive Resonance. Attended feature clusters reactivate bottom-up pathways. Activated categories reactivate their top-down pathways. Categories STM, Feature patterns STM. Feature-Category resonance [synchronize, amplify, prolong]s system response. Resonance triggers learning in bottom-up and top-down adaptive weights: adaptive resonance!
  • image p057fig02.03 Some basic anatomical and physiological properties of individual neurons. See the text for additional discussion.
    ||
    physiology | cell body potential | axonal signal | chemical transmitter
    anatomy | nerve cell body | axon | synaptic knob, synapse
  • image p058fig02.04 Serial learning paradigm: Learning the temporal order of events by practicing them in the order that they occur in time.
    || Learning a global arrow in time. How do we learn to encode the temporal order of events in LTM? serial learning. [w=intra, W=inter]trial interval. "... data about serial verbal learning (Figure 2.4) seemed to suggest that events can go "backwards in time". ..."
  • image p059fig02.05 Bowed serial position curve. This kind of data emphasizes the importance of modelling how our brains give rise to our minds using nonlinear systems of differential equations.
    || Effects of [inter, intra]trial intervals (Hovland 1938). # of errors vs list position. [w (sec), W (sec)] = (2 6) (4 6) (2 126) (4 126). Nonoccurrence of future items reduces the number of errors in response to past items. These data require a real-time theory for their explanation! That is, DIFFERENTIAL equations.
  • image p059fig02.06 The bowed serial position curve illustrates the sense in which "events can go backwards in time" during serial learning.
    || Bow due to backward effect in time. If the past influenced the future, but not conversely: # of errors vs list position; Data (Hovland, Hull, Underwood, etc).
  • image p060fig02.07 Position-specific forward and backward error gradients illustrate how associations can form in both the forward and backward directions in time before the list is completely learned.
    || Error gradients: depend on list position. # of responses vs list position:
    list beginning | anticipatory errors | forward in time
    list middle | anticipatory and perseverative errors | forward and backward in time
    list end | perseverative errors | backward in time
  • image p061fig02.08 The existence of forward and backward associations, such as from A to B and from B to A is naturally explained by a network of neurons with their own activities or STM traces, and bidirectional connections between them with their own adaptive weights or LTM traces.
    || How these results led to neural networks (Grossberg 1957). Networks can learn forward and backward associations! Practice A->B, also learn B<-A. Because learning A->B is not the same as learning B->A, you need STM traces, or activations, xi at the nodes, or cells, and LTM traces, or adaptive weights, zij, for learning at the synapses.
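  • Howell: a minimal Python sketch (my own illustration, not from the book; the decay rate, sampling threshold, and presentation times are assumptions) of why practicing A->B also yields a weaker backward association B->A: A's STM trace decays slowly, so it is still positive when B turns on, and gated steepest descent learning therefore strengthens both zAB and zBA:
      # Present A, then B. The forward LTM trace z_AB grows strongly; the
      # backward trace z_BA grows more weakly because its learning is gated
      # by B's signal while it samples A's decaying STM trace.
      import numpy as np

      dt, decay = 0.01, 0.5
      f = lambda u: np.maximum(u - 0.1, 0.0)            # thresholded sampling signal

      def step(x, z, I):
          x = x + dt * (-decay * x + I)                 # additive STM dynamics
          fx = f(x)
          z = z + dt * fx[:, None] * (x[None, :] - z)   # gated steepest descent LTM
          return x, z

      x, z = np.zeros(2), np.zeros((2, 2))              # nodes [A, B]; z[i, j] is i -> j
      for _ in range(300):                              # present A alone
          x, z = step(x, z, np.array([1.0, 0.0]))
      for _ in range(300):                              # then B, while A's trace decays
          x, z = step(x, z, np.array([0.0, 1.0]))
      print("forward z_AB =", round(float(z[0, 1]), 3), "  backward z_BA =", round(float(z[1, 0]), 3))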
  • image p063fig02.09 The Additive Model describes how multiple effects add up to influence the activities, or STM traces, of neurons.
    || STM: Additive model (Grossberg, PNAS 1967, 1968).
    Short-term memory (STM) trace, or activation: xi(t), xj(t); signal: fi(xi(t))*Bij; adaptive weight, or Long-term memory (LTM) trace: zij(t).
    Term labels: [learning rate?, passive decay, positive feedback, negative feedback, input].
    d[dt: xi(t)] = -Ai*xi + sum[j=1 to n: fj(xj(t))*Bji*zji] - sum[j=1 to n: gj(xj(t))*Cji*Zji] + Ii
    Special case: d[dt: xi(t)] = -Ai*xi + sum[j=1 to n: fj(xj(t))*zji] + Ii
  • image p064fig02.10 The Shunting Model includes upper and lower bounds on neuronal activities. These bounds have the effect of multiplying additive terms by excitatory and inhibitory automatic gain terms that enable such models to preserve their sensitivity to inputs whose sizes may vary greatly through time, while also approximately normalizing their total activities.
    || STM: Shunting Model (Grossberg, PNAS 1967, 1968). Mass action in membrane equations. Bi/Ci -> xi(t) -> O -> -Fi/Ei. Bounded activations, automatic gain control. d[dt: xi(t)] = -Ai*xi + (Bi - Ci*xi)*(sum[j=1 to n: fj(xj(t))*Dji*yji*zji] + Ii) - (Ei*xi + Fi)*(sum[j=1 to n: gj(xj(t))*Gji*Yji*Zji] + Ji). Includes the Additive Model.
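  • Howell: a minimal Python sketch (my own illustration; the parameters A=1, B=1, F=0.25 and the test inputs are assumptions) contrasting the Additive and Shunting Models for one cell driven by excitatory input I and inhibitory input J. The shunting cell's automatic gain control keeps its activity inside [-F, B] however large the inputs become, while the additive cell's equilibrium (I - J)/A grows without bound:
      # additive: d[dt: x] = -A*x + I - J
      # shunting: d[dt: x] = -A*x + (B - x)*I - (x + F)*J
      def settle(deriv, x=0.0, T=10.0, dt=0.001):
          for _ in range(int(T / dt)):
              x += dt * deriv(x)
          return x

      A, B, F = 1.0, 1.0, 0.25
      for I, J in [(1.0, 0.5), (10.0, 5.0), (100.0, 50.0)]:
          add = settle(lambda x: -A * x + I - J)
          shunt = settle(lambda x: -A * x + (B - x) * I - (x + F) * J)
          print(f"I={I:6.1f} J={J:5.1f}   additive x={add:6.2f}   shunting x={shunt:5.3f}")
    The shunting equilibrium (B*I - F*J)/(A + I + J) stays bounded (here 0.35, 0.55, 0.58) while the additive equilibrium climbs 0.5, 5, 50.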
  • image p064fig02.11 Medium-Term Memory (MTM) and Long-Term Memory (LTM) equations complement the Additive and Shunting Models of STM. MTM is typically defined by a chemical transmitter that is released from the synaptic knobs of a neuron (Figure 2.03). Its release or inactivation in an activity-dependent way is also called habituation. LTM defines how associative learning occurs between a pair of neurons whose activities are approximately correlated through time. See the text for details.
    || Medium and Long Term memory.
    MTM | habituative transmitter gate | d[dt: yki(t)] = H*(K - yki) - L*fk(xk)*yki
    LTM | gated steepest descent learning | d[dt: zki(t)] = Mk*fk(xk)*(hi(xi) - zki)
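  • Howell: a minimal Python sketch (my own illustration; H, K, L and the input timing are assumptions) of the MTM habituative transmitter gate d[dt: y] = H*(K - y) - L*f(x)*y. When a sustained signal f(x) turns on, the gated output f(x)*y first overshoots and then habituates toward the smaller equilibrium H*K/(H + L*f(x)):
      # Habituative transmitter gate: overshoot, then habituation, under a step input.
      H, K, L, dt = 0.05, 1.0, 0.5, 0.01
      y, fx = 1.0, 0.0
      for i in range(3000):
          if i == 500:
              fx = 1.0                           # signal f(x) switches on at t = 5
          y += dt * (H * (K - y) - L * fx * y)
          if i in (499, 510, 1000, 2999):
              print(f"t={i * dt:5.2f}   gate y={y:.3f}   gated signal fx*y={fx * y:.3f}")
    The gated signal jumps to nearly 1.0 at onset and habituates toward H*K/(H + L) ≈ 0.09; this activity-dependent depletion is the kind of mechanism that supports antagonistic rebounds in gated dipoles (p023fig01.14).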
  • image p065fig02.12 Three sources of neural network research: [binary, linear, continuous nonlinear]. My own research has contributed primarily to the third.
    || Three sources of neural network research.
    Binary (neural network signal processing): McCulloch-Pitts 1943 [Xi(t+1) = sgn{sum[j: Aij*Xj(t)] - Bi}], Von Neumann 1945, Caianiello 1961. Framework: digital computer.
    Linear (systems theory): Rosenblatt 1962, Widrow 1962, Anderson 1968, Kohonen 1971. Framework: Y = A*X, cross-correlation, steepest descent.
    Continuous and non-Linear (neurophysiology and psychology): Hodgkin, Huxley 1952; Hartline, Ratliff 1957; Grossberg 1967; Von der Malsburg 1973.
  • image p068fig02.13 Hartline's studies of subtractive lateral inhibition in the limulus retina (Hartline, Ratliff 1957).
  • image p068fig02.14 Hodgkin and Huxley developed a model to explain how spikes travel down the squid giant axon.
    || Neurophysiology (single cell): spike potentials in squid giant axon (Hodgkin, Huxley 1952, Nobel Prize). time -> (dendrites -> cell body -> axon).
    C*dp[dt: V] = α*dp^2[dX^2: V] + (V(+) - V)*g(+) + (V(-) - V)*g(-) + (V(p) - V)*g(p)
    g(+) = G(+)(m,h), g(-) = G(-)(n), g(p) = const; [m, h, n] - ionic processes, V - voltage
    Precursor of Shunting network model (Rall 1962). (Howell: see p075fig02.24 Membrane equations of neurophysiology, Shunting equation)
  • image p071fig02.15 The noise saturation dilemma: How do neurons retain their sensitivity to the relative sizes of input patterns whose total sizes can change greatly through time?
    || Noise-Saturation Dilemma (Grossberg 1968-1973). Bounded activities from multiple input sources.
    If activities xi are sensitive to SMALL inputs, then why don't they saturate when inputs become LARGE?
  • image p071fig02.16 To solve the noise-saturation dilemma, individual neurons in a network that is receiving a distributed spatial pattern of inputs need to remain sensitive to the ratio of their own input to the total input in that spatial pattern. Although the inputs are delivered to a finite number of neurons, the input and activity patterns are drawn continuously across the cells for simplicity.
    || Noise-Saturation Dilemma. [Ii, xi] vs t. [Input, Activity] pattern [small -> noise, large -> saturation]. Problem: remain sensitive to input RATIOS θi = Ii / sum[j: Ij] as total input I = sum[j: Ij] -> ∞. Many kinds of data exhibit sensitivity to ratios of inputs.
  • image p072fig02.17 Brightness constancy.
    || Vision: brightness constancy, contrast normalization. Compute RATIOS of reflected light. Reflectance processing. p72c1h0.45 "... In other words, the perceived brightness of the gray disk is constant despite changes in the overall illumination. On the other hand, if only the gray disk were illuminated at increasing intensities, with the annulus illuminated at a constant intensity, then the gray disk would look progressively brighter. ..."
  • image p072fig02.18 Vision: brightness contrast. Conserve a total quantity: total activity normalization.
    || LUCE: Ratio scales in choice behavior. ZEILER: Adaptation level theory.
  • image p073fig02.19 Computing with cells: infinity does not exist in biology!
    || Computing in a bounded activity domain, Gedanken experiment (Grossberg 1970). Vm sub-areas [xm, B - xm], Im (all m), m = [1, i, B].
    B | excitable sites
    xi(t) | excited sites (activity, potential)
    B - xi(t) | unexcited sites
  • image p073fig02.20 Shunting saturation occurs when inputs get larger to non-interacting cells.
    || Shunting saturation. [xi(t), B - xi(t)].
    d[dt: xi] = -A*xi + (B - xi)*Ii, with terms (a) and (b):
    (a) spontaneous decay of activity xi to equilibrium
    (b) turn-on of unexcited sites B - xi by inputs Ii (mass action)
    Inadequate response to a SPATIAL PATTERN of inputs: Ii(t) = θi*I(t)
    θi | relative intensity (cf. reflectance)
    I(t) | total intensity (cf. luminance)
  • image p073fig02.21 How shunting saturation turns on all of a cell's excitable sites when inputs become large.
  • image p073fig02.22 An on-center off-surround network is capable of computing input ratios.
    || Computing with patterns.
    How to compute the pattern-sensitive variable: θi = Ii / sum[k=1 to n: Ik]?
    Needs interactions! What type? θi = Ii / (Ii + sum[k≠i: Ik])
    Ii↑ ⇒ θi↑ excitation; Ik↑ ⇒ θi↓, k ≠ i inhibition
    On-center off-surround network.
  • image p074fig02.23 The equations for a shunting on-center off-surround network. Shunting terms lead to many beautiful and important properties of these networks, which are found ubiquitously, in one form or another, in all cellular tissues.
    || Shunting on-center off-surround network.
    Mass action: d[dt: xi] = -A*xi + (B - xi)*Ii - xi*sum[k≠i: Ik]
    (B - xi)*Ii turns on unexcited sites; xi*sum[k≠i: Ik] turns off excited sites.
    At equilibrium:
    0 = d[dt: xi] = -(A + Ii + sum[k≠i: Ik])*xi + B*Ii = -(A + I)*xi + B*Ii
    xi = B*Ii/(A + I) = B*θi*I/(A + I) = θi * B*I/(A + I): no saturation!
    • Infinite dynamical range
    • Automatic gain control
    • Compute ratio scale: Weber law
    x = sum[k=1 to n: xk] = B*I/(A + I) ≤ B: conserves total activity
    • NORMALIZATION: limited capacity, real-time probability
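  • Howell: a minimal Python sketch (my own illustration; A=1, B=1 and the test pattern are assumptions) verifying the equilibrium law xi = B*Ii/(A + I) just derived: relative activities reproduce the input ratios θi at every total intensity (Weber law, ratio scale), while total activity stays normalized below B:
      import numpy as np

      A, B = 1.0, 1.0
      theta = np.array([0.1, 0.2, 0.3, 0.4])     # fixed relative pattern (reflectances)
      for I_total in (1.0, 10.0, 1000.0):
          I = theta * I_total                    # inputs scale with total intensity
          x = B * I / (A + I.sum())              # equilibrium activities
          print(f"I={I_total:7.1f}   x/sum(x)={np.round(x / x.sum(), 3)}   total={x.sum():.3f}")
    Total activity rises only from 0.5 to ~0.999 as the total input grows from 1 to 1000, yet the stored relative pattern [0.1, 0.2, 0.3, 0.4] is exact at every intensity: no saturation, infinite dynamical range.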
  • image p075fig02.24 The membrane equations of neurophysiology describe how cell voltages change in response to excitatory, inhibitory, and passive input channels. Each channel is described by a potential difference multiplied by a conductance. With the special choices shown in the lower right-hand corner, this equation defines a feedforward shunting on-center off-surround network.
    || Membrane equations of neurophysiology.
    C*dp[dt: V] = (V(+) - V)*g(+) + (V(-) - V)*g(-) + (V(p) - V)*g(p)
    Shunting equation (not additive)
    V Voltage
    V(+), V(-), V(p) Saturating voltages
    g(+), g(-), g(p) Conductances
    V(+) = B, C = 1; V(-) = V(p) = 0; g(+) = Ii; g(-) = sum[k≠i: Ik];
    lower V: V(-) = V(p), silent inhibition; upper V: V(+). (Howell: see p068fig02.14 Hodgkin-Huxley membrane equations, the precursor of this shunting equation)
  • image p076fig02.25 An on-center off-surround network can respond to increasing on-center excitatory inputs without a loss of sensitivity. Instead, as the off-surround input increases, the region of a cell's greatest sensitivity shifts toward higher input intensities without compression.
  • image p076fig02.26 The mudpuppy retina exhibits the shift property that occurs in the feedforward shunting on-center off-surround network in Figure 2.25. As a result, its sensitivity also shifts in response to different background off-surrounds, and therefore exhibits no compression (dashed purple lines).
    || Mudpuppy retina neurophysiology.
    I center, J background
    a) Relative figure-to-ground
    b) Weber-Fechner: I*(A + J)^(-1)
    c) No hyperpolarization, SHUNT: Silent inhibition
    d) Shift property (Werblin 1970): xi(K,J) vs K = ln(I)
    Adaptation- sensitivity shifts for different backgrounds. NO COMPRESSION.
  • image p077fig02.27 A schematic of the on-center off-surround network that occurs in the mudpuppy retina, including three main cell types: receptors, horizontal cells, and bipolar cells.
    || Mechanism: cooperative-competitive dynamics.
    On-center off-surround (Kuffler 1953) cat retina
    Subtractive lateral inhibition (Hartline, Ratliff 1956/7+) limulus retina.
    R receptor -> H horizontal -> B bipolar (Werblin, Dowling, etal 1969+) mudpuppy retina.
  • image p077fig02.28 Silent inhibition is replaced by hyperpolarization when the inhibitory saturating potential is smaller than the passive saturating potential. Then an adaptation level is created that determines how big input ratios need to be to activate their cells.
    || Weber Law and adaptation level.
    Hyperpolarization vs Silent inhibition
    d[dt: xi] = -A*xi +(B - xi)*Ii -(xi + C)*sum[k≠i: Ik]
    At equilibrium:
    0 = d[dt: xi] = -(A + Ii + sum[k≠i: Ik])*xi + B*Ii - C*sum[k≠i: Ik]
      = -(A + I)*xi + (B + C)*Ii - C*I
      = -(A + I)*xi + (B + C)*I*[θi - C/(B + C)]
    xi = (B + C)*I/(A + I) * [θi - C/(B + C)]
    (B + C)*I/(A + I): Weber Law; θi: reflectance; C/(B + C): adaptation level
  • image p078fig02.29 How the adaptation level is chosen to enable sufficiently distinct inputs to activate their cells.
    || Weber Law and adaptation level.
    xi = (B + C)*I/(A + I) * [θi - C/(B + C)]
    (B + C)*I/(A + I): Weber Law; θi: reflectance; C/(B + C): adaptation level
    V(+) >> V(-) ⇒ B >> C ⇒ C/(B + C) << 1
    Adaptation level theory (Zeiler 1963).
  • image p078fig02.30 Choosing the adaptation level to achieve informational noise suppression.
    || Noise suppression. Attenuate Zero Spatial frequency patterns: no information. Ii vs i (flat line), xi vs i (flat line at zero)
    B >> C: Try B = (n - 1)*C or C/(B + C) = 1/n
    Choose a uniform input pattern (no distinctive features): All θi = 1/n
    xi = (B + C)*I/(A + I)*[θi -C/(B + C)] = 0 no matter how intense I is.
  • image p078fig02.31 How noise suppression enables matching of bottom-up and top-down input patterns.
    || Noise suppression -> pattern matching. mismatch (out of phase) suppressed, match (in phase) amplifies pattern.
  • image p079fig02.32 Matching amplifies the matched pattern due to automatic gain control. See terms I and J in the equation.
    || Substrate of resonance. Match (in phase) of BU and TD input patterns AMPLIFIES matched pattern due to automatic gain control by shunting terms. J = sum[i: Ji], I = sum[i: Ii], θi = (Ii + Ji)/(I + J)
    xi = (B + C)*(I + J)/(A + I + J)*[θi -C/(B + C)]
    Need top-down expectations to be MODULATORY.
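  • Howell: a minimal Python sketch (my own illustration; n=4, A=1, C=1 and the test patterns are assumptions) combining the noise-suppression choice B = (n-1)*C, so that C/(B + C) = 1/n, with the matching equation above. A uniform pattern is zeroed, a mismatched top-down pattern suppresses the bottom-up peak, and a matched pattern amplifies it through the (I + J) gain:
      import numpy as np

      def response(bu, td, A=1.0, C=1.0):
          n = len(bu)
          B = (n - 1) * C                        # noise suppression: C/(B + C) = 1/n
          total = bu.sum() + td.sum()
          theta = (bu + td) / total
          x = (B + C) * total / (A + total) * (theta - C / (B + C))
          return np.maximum(x, 0.0)              # half-wave rectified output

      bu       = np.array([4.0, 1.0, 1.0, 1.0])  # bottom-up: feature 0 active
      td_match = np.array([4.0, 1.0, 1.0, 1.0])  # matching top-down expectation
      td_miss  = np.array([1.0, 4.0, 1.0, 1.0])  # expectation peaks elsewhere
      off      = np.zeros(4)

      print("uniform BU alone:", np.round(response(np.ones(4), off), 3))  # all zero
      print("BU alone:        ", np.round(response(bu, off), 3))
      print("BU + match:      ", np.round(response(bu, td_match), 3))     # peak amplified
      print("BU + mismatch:   ", np.round(response(bu, td_miss), 3))      # peak suppressed
    With these parameters the matched peak rises from 1.125 to 1.2 (the relative boost grows with A), while the mismatched pair cuts it to 0.4: match amplifies, mismatch suppresses.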
  • image p080fig02.33 An opposite-attracts rule during the development of intercellular connections can lead to a mature network that realizes informational noise suppression.
    || How do noise suppression parameters arise? Symmetry-breaking during morphogenesis? Opposites attract rule.
    Intracellular parameters C/B = 1/(n - 1) <-> Intercellular parameters
    Predicts that:
    • Intracellular excitatory and inhibitory saturation points can control the growth during development of :
    • Intercellular excitatory and inhibitory connections.
  • image p080fig02.34 How to achieve informational noise suppression in a network with multiple parallel processing channels.
    || Symmetry-breaking: dynamics and anatomy.
    Dynamics:
    • excitatory range is amplified
    • inhibitory range is compressed
    Anatomy:
    • narrow on-center
    • broad off-surround
    Noise suppression: attenuates uniform patterns
    Contour direction: enhances pattern gradients
  • image p081fig02.35 The equilibrium activities of a shunting network with Gaussian on-center off-surround kernels are sensitive to the ratio-contrasts of the input patterns that they process. The terms in the denominator of the equilibrium activities accomplish this using the shunting on-center and off-surround terms.
    || Ratio-contrast detector. flat versus [Gaussian Cki, flattened Gaussian? Eki]
    d[dt: xi] = -A*xi +(B - xi)*sum[k≠i: Ik]*Cki -(xi + D)*sum[k=1 to n: Ik*Eki]
    Cki = C*e^(-μ*(k - i)^2), Eki = E*e^(-ν*(k - i)^2)
    At equilibrium: xi = I*sum[k=1 to n: θk*Fki] / (A + I*sum[k=1 to n: θk*Gki])
    Fki = B*Cki - D*Eki (weighted Difference Of Gaussians, DOG)
    Gki = Cki + Eki (Sum Of Gaussians, SOG)
    • Reflectance processing
    • Contrast normalization
    • Discount illuminant
  • image p081fig02.36 Informational noise suppression in a network with Gaussian on-center and off-surround kernels lets it function as a contour detector that is sensitive to ratio-contrast.
    || Noise suppression and contour detection.
    If B*sum[k=1 to n: Cki] <= D*sum[k=1 to n: Eki] then:
    • uniform patterns are suppressed
    • contrasts are selectively enhanced
    • contours are detected
    Ii vs i, xi vs i
    Responses are selective to [REFLECTANCE, SPATIAL SCALE], eg color [feature, surface] contours.
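  • Howell: a minimal Python sketch (my own illustration; the kernel widths, amplitudes, and 1-D test images are assumptions) of the ratio-contrast contour detector above: xi = I*sum[k: θk*Fki] / (A + I*sum[k: θk*Gki]), with Fki = B*Cki - D*Eki and Gki = Cki + Eki. Choosing B*sum[k: Cki] <= D*sum[k: Eki] (checked here for interior cells, since boundary kernels are truncated) suppresses uniform patterns while enhancing a luminance step:
      import numpy as np

      n, A, B, D = 40, 1.0, 2.0, 1.0
      k = np.arange(n)
      d2 = (k[:, None] - k[None, :]) ** 2
      Cker = 1.0 * np.exp(-d2 / 2.0)             # narrow on-center Gaussian
      Eker = 0.5 * np.exp(-d2 / 50.0)            # broad off-surround Gaussian
      interior = slice(5, -5)
      assert (B * Cker.sum(0)[interior] <= D * Eker.sum(0)[interior]).all()

      def respond(pattern):
          I = pattern.sum()
          theta = pattern / I
          num = I * (theta @ (B * Cker - D * Eker))   # weighted DOG filter
          den = A + I * (theta @ (Cker + Eker))       # SOG shunting gain control
          return np.maximum(num / den, 0.0)           # rectified output

      uniform = np.ones(n)
      step = np.where(k < n // 2, 1.0, 3.0)           # luminance step at cell 20
      print("uniform, interior max:", float(respond(uniform)[interior].max()))
      print("step, cells 17..24:  ", np.round(respond(step)[17:25], 3))
    Interior responses to the uniform pattern are zero no matter how intense it is, while cells on the bright side of the step produce a localized positive response: contours are detected and reflectances processed.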
  • image p082fig02.37 My models begin with behavioral data, since brains are designed to achieve behavioral success. The text explains how models evolve in stages, through a process of successive refinements, or unlumpings. These unlumpings together carry out a kind of conceptual evolution, leading to models that can explain and predict ever larger psychological and neurobiological databases.
    || Modelling method and cycle.
    Behavioral data -(art of modeling)-> Design principles <- Neural data <-(brain predictions)- Mathematical model and analysis -(behavioral predictions)-> Behavioural data
    Operationalizes "proper level of abstraction"
    Operationalizes that you cannot "derive a brain" in one step.
  • image p085fig02.38 Our models have been used in many large-scale applications to engineering and technology. Linking brain to behavior explains how brain mechanisms give rise to psychological functions, and do so autonomously. The combination of mechanism, function, and autonomy helps to explain their value in helping to solve outstanding problems in technology.
    || Modeling method and cycle.
    Behavioral data -(art of modeling)-> Design principles <- Neural data <-(brain predictions)- Mathematical model and analysis -(behavioral predictions)-> Behavioural data
    Technology: Mathematical model and analysis <-> Technological applications
    At every stage, spin off new model designs and mechanisms to technologists who need autonomous intelligent applications.
  • image p087fig03.01 A macrocircuit of key visual processes (in green) and the cortical areas in which they primarily occur (in red), from the retina to the Prefrontal Cortex (PFC), including both the What and Where cortical streams. The [bottom-up, horizontal, and top-down] interactions help each of these processes to overcome computationally complementary processing deficiencies that they would experience without them, and also to read out top-down expectations that help to stabilize learning while they focus attention on salient objects and positions.
    || Emerging unified theory of visual intelligence. [What, Where] streams. Bottom-up and top-down interactions overcome COMPLEMENTARY processing deficiencies.
  • image p089fig03.02 What do you think lies under the two grey disks? (on a checkers board)
    || p089c1h0.55 "... As your eye traverses the entire circular boundary (Howell: of a grey disk on a checkerboard), the contrast keeps flipping between light-to-dark and dark-to-light. Despite these contrast reversals, we perceive a single continuous boundary surrounding the gray disk. ...".
  • image p090fig03.03 Kanizsa square and reverse-contrast Kanizsa square percepts. The spatial arrangement of pac-men, lines, and relative contrasts determines the perceived brightness of the squares, even when they exhibit no brightness difference from their backgrounds, as in (b). These factors also determine whether pac-men will appear to be amodally completed behind the squares, and how far behind them.
    || p089c2h0.65 "...
    a) The percept of the square that abuts the pac-men is a visual illusion that is called the Kanizsa square. The enhanced brightness of the square is also an illusion.
    c) shows that these boundaries can be induced by either collinear edges or perpendicular line ends, and that both kinds of inducers cooperate to generate an even stronger boundary.
    d) if the perpendicular lines cross the positions of the illusory contours, then they can inhibit the strength of these contours. ..."
  • image p091fig03.04 A cross-section of the eye, and a top-down view of the retina, showing how the blind spot and retinal veins can occlude the registration of light signals at their positions on the retina.
    || Eye: [optic nerve, ciliary body, iris, lens, pupil, cornea, sclera, choroid, retina]. Human retina: [fovea, blind spot, optic nerve]. See also cross-section of retinal layer.
  • image p092fig03.05 A cross-section of the retinal layer. Note that light stimuli need to go through all retinal layers before they reach the photoreceptor layer at which the light signals are registered.
    || light stimuli ->
    retinal layer | cellular composition
    inner limiting membrane |
    retinal nerve fibre | ganglion nerve fibres
    ganglion cell | ganglion
    inner plexiform | amacrine
    inner nuclear | horizontal
    outer plexiform |
    outer limiting membrane |
    photoreceptor | rod
    photoreceptor | cone
    retinal pigment epithelium |
    <- signal transduction. http://brain.oxfordjournals.org/content/early/2011/01/20/brain.awq346
  • image p093fig03.06 Every line is an illusion because regions of the line that are occluded by the blind spot or retinal veins are completed at higher levels of brain processing by boundary completion and surface filling-in.
    || Every line is an illusion!
    Boundary completion | Which boundaries to connect?
    Surface filling-in | What color and brightness do we see?
  • image p094fig03.07 The processes of boundary completion and surface filling-in are computationally complementary.
    ||
    Boundary completion | Surface filling-in
    outward | inward
    oriented | unoriented
    insensitive to direction-of-contrast | sensitive to direction-of-contrast
  • image p095fig03.08 Computer simulation of a Kanizsa square percept. See the text for details.
    || p094c2h0.2 "...
    b) shows the feature contours that are induced just inside the pac-man boundaries.
    c) feature contours fill-in within the square boundary
    d) create a percept of enhanced brightness throughout the square surface ..."
  • image p095fig03.09 Simulation of a reverse-contrast Kanizsa square percept. See the text for details.
    || p094c2h0.5 "...
    b) whereas bright feature contours are induced just inside the boundaries of the two black pac-men at the bottom of the figure, dark feature contours are induced inside the boundaries of the two white pac-man at the top of the figure
    c) the square boundary is recognized
    d) Because these dark and bright feature contours are approximately balanced, the filled-in surface color is indistinguishable from the filled-in surface color outside of the square, ... but [the square boundary is] not seen ..."
  • image p096fig03.10 The visual illusion of neon color spreading. Neither the square nor the blue color that are perceived within it are in the image that defines a neon color display. The display consists only of black and blue arcs.
    ||
  • image p096fig03.11 Another example of neon color spreading. The image is composed of black and blue crosses. See the text for details.
    || Howell: note the appearance of illusory red squares
  • image p100fig03.13 The Ehrenstein percept in the left panel is significantly weakened as the orientations of the lines that induce it deviate from being perpendicular to the illusory circle.
    ||
  • image p100fig03.14 Boundaries are completed with the orientations that receive the largest total amount of evidence, or support. Some can form in the locally preferred orientations that are perpendicular to the inducing lines, while others can form through orientations that are not locally preferred, thus showing that there is initially a fuzzy band of almost perpendicular initial grouping orientations at the end of each line.
    || Perpendicular induction at line ends wrt [circular, square] boundaries
    line ends | local | global
    perpendicular, crisp | preferred | preferred
    NOT perpendicular, fuzzy | unpreferred | preferred
  • image p100fig03.15 A fuzzy band of possible initial grouping orientations allows grouping to get started. Cooperative-competitive feedback via a hierarchical resolution of uncertainty chooses a sharp final grouping that has the most evidence to support it.
    || before choice: transient; after choice: equilibrium
  • image p102fig03.16 T
  • image p102fig03.17 The relative positions of the squares give rise to a percept of three regions. In the middle region, emergent diagonal groupings form, despite the fact that all the orientations in the image are verticals and horizontals.
    ||
  • image p103fig03.18 Computer simulations in [b, c, e, f] of groupings in response to different spatial arrangements in [a, c, e, g] of inducers that are composed of short vertical boundaries. Note the emergent horizontal groupings in [d, f, h] and the diagonal groupings in h, despite the fact that all its inducers have vertical orientations.
    ||
  • image p103fig03.19 As in Figure 3.18, emergent groupings can form whose orientations differ from those of the inducing stimuli.
    || That's how multiple orientations can induce boundary completion of an object. [diagonal, perpendicular, parallel]
  • image p104fig03.20 Sean Williams: how boundaries can form
    ||
  • image p104fig03.21 Four examples of how emergent boundaries can form in response to different kinds of images. These examples show how boundary webs can shape themselves to textures, as in (c), and shading, as in (d), in addition to lines, as in (a). In all these cases, the boundaries are invisible, but reveal themselves by supporting filling-in of surface brightness and color within their form-sensitive webs.
    ||
  • image p105fig03.22 Depth-selective boundary representations capture brightness and colors in surface filling-in domains. See the text for details.
    || 3D vision and figure-ground separation. multiple-scale, depth-selective boundary webs. refer to Figure 3.21(d)
    depth increasing ↓ | boundaries | surfaces
    BC input | surface capture!
    FC input |
  • image p105fig03.23 The pointillist painting A Sunday on la Grande Jatte by Georges Seurat illustrates how we both group together large-scale coherence among the pixels of the painting and form small groupings around the individual dabs of color.
    ||
  • image p106fig03.24 In response to the Synthetic Aperture Radar image (upper left corner), a shunting on-center off-surround network "discounts the illuminant" and thereby normalizes cell activities to compute feature contours, without causing saturation (upper right corner). Multiple-scale boundaries form in response to spatially coherent activities in the feature contours (lower left corner) and create the webs, or containers, into which the feature contours fill-in the final surface representations (lower right corner).
    || Do these ideas work on hard problems? SAR!
    input image | feature contours | boundary contours | filled-in surface
    Synthetic Aperture Radar: sees through weather; 5 orders of magnitude of power in radar return -> discounting the illuminant
    • normalizes the image: preserves RELATIVE activities without SATURATION
    • shows individual PIXELS
    Boundaries complete between regions where normalized feature contrasts change; filling-in averages brightnesses within boundary compartments.
  • image p107fig03.25 The Roofs of Collioure by Matisse. See the text for details
    || p107c1h0.6 "... [Matisse] showed how patches of pure color, when laid down properly on a canvas, could be grouped by the brain into emergent boundaries, without the intervention of visible outlines. ... The trick was that these emergent boundaries, being invisible, or amodal, did not darken the colors in the surface representations. In this sense, Matisse intuitively realized that "all boundaries are invisible" through the masterful way in which he arranged his colors on canvas to generate boundaries that could support compelling surface representations. ..."
  • image p107fig03.26 How "drawing directly in color" leads to colored surface representations. Amodal boundary webs control the filling-in of color within these surface representations. See the text for details.
    || color patches on canvas -> [surface color and form, Amodal boundary web]. Amodal boundary web -> surface color and form.
  • image p108fig03.27 Matisse
  • image p108fig03.28 The watercolor illusion of Baingio Pinna 1987 can be explained using spatial competition between like-oriented boundary signals. This occurs at what I have called the First Competitive Stage. This is one stage in the brain
  • image p109fig03.29 The 3D percepts that are generated by chiaroscuro and trompe l'oeil
  • image p109fig03.30 The triptych of Jo Baer, called Primary Light Group: Red, Green, and Blue (1964-1965), generates watercolor illusion percepts which, when displayed side by side in a museum, create a striking impression.
  • image p110fig03.31 Henry Hensche
  • image p110fig03.32 Claude Monet
  • image p112fig03.33 Various ways that spatial gradients in boundary webs can cause self-luminous percepts. See the text for details.
    || Boundary web gradients can cause self-luminosity. Similar to watercolor illusion. Gloss by attached highlight (Beck, Prazdny 1981), glare. Double brilliant illusion (Bressan 2001), simulation (Grossberg, Hong 2004). p111c2h0.5 "... This effect may be explained as the result of the boundary webs that are generated in response to the luminance gradients and how they control the filling-in of lightness within themselves and abutting regions. ... Due to the mutually inhibitory interactions across the boundaries that comprise these boundary webs, more lightness can spread into the central square as the steepness of the boundary gradients increases. ...".
  • image p113fig03.35 The Highest Luminance As White (HLAW) rule of (Hans Wallach 1948) works in some cases (top row) but not others (bottom row).
  • image p113fig03.36 The Blurred Highest Luminance As White (BHLAW) rule that I developed with my PhD student, Simon Hong, works in cases where the rule of Hans Wallach fails, as can be seen by comparing the simulation in Figure 3.35 with the one in this figure.
    || Blurred Highest Luminance As White (BHLAW) rule (Grossberg, Hong 2004, 2006). Spatial integration (blurring) adds spatial context to lightness perception.
  • image p114fig03.37 How the Blurred Highest Luminance as White rule sometimes normalizes the highest luminance to white (left panel) but at other times normalizes it to be self-luminous (right panel). See the text for details.
    || perceived reflectance vs cross-section of visual field. [white level, anchored lightness, self-luminous*, BHLAW]. *self-luminous only when conditions are right.
  • image p114fig03.38 Four color-field spray paintings of Jules Olitski. The text explains why they generate surface percepts with such ambiguous depth.
    || Jules and his friends (1967), Lysander-1 (1970), Instant Loveland (1968), Comprehensive Dream (1965). p114c2h0.4 "... it is impossible to visually perceive discrete colored units within the boundary webs in Olitski
  • image p115fig03.39 Two of Gene Davis
  • image p116fig03.40 A combination of T-junctions and perspective cues can create a strong percept of depth in response to 2D images, with a famous example being Leonardo da Vinci
  • image p117fig03.41 End gaps, or small breaks or weakenings of boundaries, can form where a stronger boundary abuts a weaker, like-oriented, boundary, as occurs where black boundaries touch red boundaries in the neon color spreading image of Figure 3.11.
    || Boundary contours - lower contrast boundary signals are weakened. feature contours- no inhibition, feature signals survive and spread. MP -> [BCS, FCS]. BCS -> FCS.
  • image p117fig03.42 Two paintings by Frank Stella. See the text for details.
    || Firuzabad (top row) ... and Khurasan Gate (variation) (bottom row). p117c1h0.75 "... The luminance and color structure within a painting affects how it groups and stratifies the figures within it. These processes, in turn, affect the formation of attentional shrouds that organize how spatial attention is allocated as we view them. ..." "... Stella wrote Firuzabad is a good example of looking for stability and trying to create as much instability as possible.
  • image p120fig03.43 Four paintings by Monet of the Rouen cathedral under different lighting conditions (top row) and their monochromatic versions (bottom row). See the text for details.
    || p119c2h0.25 "... Monet uses nearby colors that are nearly equiluminant, and sharp, high-contrast luminance defined edges are sparse. He hereby creates weaker boundary signals within and between the parts of many forms, and stronger boundary signals between the forms. This combination facilitates color spreading within the forms and better separation of brightness and color differences between forms. ... The grayscale versions of these paintings demonstrate the near equiluminance of the brushstrokes within forms, and places in which brightness and color differences significantly influence the groupings that differentiate between forms, including the differentiation between the cathedral and the sky. ..."
  • image p120fig03.44 The Rouen cathedral at sunset generates very different boundary webs than it does in full sunlight, as illustrated by Figure 3.45.
    || Rouen Cathedral at sunset (Monet 1892-1894).
    • Lighting almost equiluminant
    • Most boundaries are thus caused by color differences, not luminance differences
    • Fine architectural details are obscured, leading to...
    • Coarser and more uniform boundary webs, so...
    • Less depth in the painting.
  • image p121fig03.45 The Rouen cathedral in full sunlight.
    || Rouen Cathedral full sunlight (Monet 1892-1894).
    • Lighting is strongly non-uniform across most of the painting
    • Strong boundaries due to both luminance and color differences
    • Fine architectural details are much clearer, leading to...
    • Finer and more non-uniform boundary webs, so...
    • Much more detail and depth
  • image p121fig03.46 The Rouen cathedral in full sunlight contains T-Junctions that are not salient in the painting of it at sunset. These are among the painting
  • image p123fig04.01 A classical example of how boundaries are barriers to filling-in.
    || Combining stabilized images with filling-in (Krauskopf 1963, Yarbus 1967). Image: Stabilize these boundaries with suction cup attached to retina or electronic feedback circuit. Percept: A visible effect of an invisible cause!
  • image p124fig04.02 The vertical cusp of lesser and greater illuminance is the same in both images, but the one on the left prevents brightness from flowing around it by creating closed boundaries that tightly surround the cusp.
  • image p126fig04.03 A McCann Mondrian is an excellent display with which to illustrate how our brains discount the illuminant to compute the "real" colors of objects. See the text for details.
    || Color constancy: compute ratios. McCann Mondrian. Biological advantage: never see in bright light, eg tropical fish
    Discount the illuminant: compute lightness.
    Different colors are seen from the same spectrum ... similar to those seen in white light.
    Physical basis: reflectance RATIOS!
  • image p128fig04.04 When a gradient of light illuminates a McCann Mondrian, there is a jump in the total light that is reflected at nearby positions where the reflectances of the patches change.
    || Compute reflectance changes at contours. Fill-in illuminant-discounted surface colors.
    left | right
    illumination: I + ε | I - ε
    reflected light: A*(I + ε) | B*(I - ε)
    contrast at the edge: A*(I + ε)/(B*(I - ε)) - 1 ≈ A/B - 1 as ε -> 0, independent of the illuminant I
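    A tiny numeric check of this "discount the illuminant" ratio argument (my illustration; the reflectances and illumination gradient are assumed values):

```python
import numpy as np

# Hedged numeric check of the ratio argument above: an illumination
# gradient I(x) multiplies a reflectance step (A on the left, B on the
# right). The luminance ratio across the edge recovers ~A/B, discounting I.
A_refl, B_refl = 0.8, 0.2
x = np.linspace(0.0, 1.0, 100)
illum = 1.0 + 5.0 * x                        # strong illumination gradient I(x)
refl = np.where(x < 0.5, A_refl, B_refl)     # reflectance step at x = 0.5
lum = illum * refl                           # luminance reaching the eye

edge = int(np.argmax(refl[:-1] != refl[1:])) # pixel just left of the edge
print(lum[edge] / lum[edge + 1])             # ~4.0 = A/B: illuminant discounted
print(lum[10] / lum[90])                     # far-apart comparison confounds reflectance with illumination
```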
  • image p129fig04.05 Multiple-scale balanced competition chooses color contours where the reflectance of the patches change. These color contours discount the illuminant.
    || Compute reflectance changes at contours. Fill-in illuminant-discounted surface colors. Discount illuminant: compute color contours.
  • image p129fig04.06 Filling-in of color contours restores a surface percept with colors that substantially discount the illuminant.
    || Compute reflectance changes at contours. Fill-in illuminant-discounted surface colors. Fill-in surface color: hierarchical resolution of uncertainty.
  • image p130fig04.07 Simulation of brightness constancy under uniform illumination.
    || Simulation of brightness constancy (Grossberg & Todorovic 1988). Uniform illumination. [stimulus (S), feature (F), boundary (B), output]. B -> F -> S -> B: Veridical! Boundary peaks are spatially narrower than feature peaks.
  • image p131fig04.08 Simulation of brightness constancy under an illumination gradient. Note that the feature contour pattern (F) is the same in both cases, and so too is the boundary contour (B) pattern that is derived from it, and the final filled-in surface.
    || Simulation of brightness constancy. Discount the illuminant. [stimulus (S), feature (F), boundary (B), output]. B -> F -> S -> B: not veridical, but useful! Ratio-sensitive feature contours (F).
  • image p131fig04.09 Simulation of brightness contrast
    || Simulation of brightness contrast. [stimulus (S), feature (F), boundary (B), output].
  • image p132fig04.10 Simulation of brightness assimilation. Note how the equal steps on the left and right sides of the luminance profile are transformed into different brightness levels.
    || Simulation of brightness assimilation. [stimulus (S), feature (F), boundary (B), output].
  • image p132fig04.11 Simulations of a double step (left panel) and the Craik-O'Brien-Cornsweet Effect (COCE)
  • image p133fig04.12 Simulation of the 2D COCE.
    || (Todorovic, Grossberg 1988). p132c2h0.6 "... 2D Craik-O'Brien-Cornsweet Effect
  • image p134fig04.13 Contrast constancy shows how the relative luminances when a picture is viewed in an illumination gradient can even be reversed to restore the correct reflectances due to discounting the illuminant.
  • image p134fig04.14 The kinds of displays that Michael Paradiso and Ken Nakayama used to catch filling-in "in the act" and which Karl Arrington then simulated using the Grossberg and Todorovic 1988 model.
    || Experiments on filling-in. Catching "filling-in" in the act (Paradiso, Nakayama 1991). (Arrington 1994 Vision Research 34, 3371-3387) simulated these data using the model of Grossberg and Todorovic 1988.
  • image p138fig04.15 Simple cells are oriented contrast detectors, not edge detectors.
    || From oriented filtering to grouping and boundary completion (Hubel, Wiesel 1968). Oriented receptive fields: SIMPLE CELLS. Sensitive to: orientation, [amount, direction] of contrast, spatial scale. Oriented local contrast detectors, not edge detectors!
  • image p139fig04.16 The simplest way to realize an odd simple cell receptive field and firing threshold.
    || "Simplest" simple cell model. need more complexity for processing natural scenes. Difference-of-Gaussian or Gabor filter (J. Daugman, D. Pollen...). Output signal vs cell activity. Threshold linear signal, half-wave rectification.
  • image p140fig04.17 Complex cells pool inputs from simple cells that are sensitive to opposite contrast polarities. Complex cells hereby become contrast invariant, and can respond to contrasts of either polarity.
    || Complex cells: pool signals from like-oriented simple cells of opposite contrast polarity at the same position. They are "insensitive to contrast polarity". Half-wave rectification of inputs from simple cells.
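    A minimal sketch of the simple-to-complex pooling described in the last two figures (my illustration; the filter shape, threshold, and stimulus are assumptions):

```python
import numpy as np

# Hedged sketch: odd "simple cell" filters of opposite contrast polarity
# are half-wave rectified above a firing threshold, then pooled by a
# complex cell, which thereby responds to either contrast polarity.
lum = np.concatenate([np.zeros(20), np.ones(20)])      # a luminance edge
kernel = np.array([-1.0, -2.0, 0.0, 2.0, 1.0])         # odd simple-cell filter

resp = np.convolve(lum, kernel, mode="same")
theta = 0.5                                            # firing threshold
polarity_a = np.maximum(resp - theta, 0.0)             # simple cell, one polarity
polarity_b = np.maximum(-resp - theta, 0.0)            # simple cell, opposite polarity
complex_cell = polarity_a + polarity_b                 # pools both polarities

print(polarity_a.max(), polarity_b.max())              # only one polarity fires for this edge
print(complex_cell[18:22])                             # complex cell fires regardless of polarity
```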
  • image p141fig04.18 The images formed on the two retinas in response to a single object in the world are displaced by different amounts with respect to their foveas. This binocular disparity is a powerful cue for determining the depth of the object from an observer.
    || Binocular Disparity. Binocular disparities are used in the brain to reconstruct depth from 2D retinal inputs, for relatively near objects.
  • image p141fig04.19 A laminar cortical circuit for computing binocular disparities in layer 3B of V1 at binocular simple cells. These cells add positionally disparate inputs from like-polarity monocular simple cells (layer 4 of V1). Binocular simple cells at each position that are sensitive to opposite polarities then add their outputs at complex cells in layer 2/3. Chapter 10 will explain how these laminar circuits work in greater detail.
    || Laminar cortical circuit for complex cells. [left, right] eye.
    V1 layer | description
    2/3A | complex cells
    3B | binocular simple cells
    4 | monocular simple cells
  • image p142fig04.20 A Glass pattern and a reverse-contrast Glass pattern give rise to different boundary groupings because simple cells can only pool signals from like-polarity visual features. See the text for details.
  • image p143fig04.21 Oriented simple cells can respond at the ends of thick enough bar ends, but not at the ends of thin enough lines. See the text for an explanation of why this is true, and its implications for visual system design.
    || Hierarchical resolution of uncertainty. For a given field size. Different responses occur at bar ends and line ends. For a thin line no detector perpendicular to line end can respond enough to close the boundary there. Network activity.
  • image p144fig04.22 Computer simulation of how simple and complex cells respond to the end of a line (gray region) that is thin enough relative to the receptive field size (thick dashed region in the left panel). These cells cannot detect the line end, as indicated by the lack of responses there in the left panel (oriented short lines denote the cells
  • image p145fig04.23 If end gaps were not closed by end cuts, then color would flow out of every line end!
    || A perceptual disaster in the feature contour system. feature contour, line boundary. input -> [boundary, surface]. boundary -> surface. Color would flow out of every line end! as it does during neon color spreading.
  • image p145fig04.24 A brain
  • image p146fig04.25 Networks of simple, complex, and hypercomplex cells can create end cuts as an example of hierarchical resolution of uncertainty. See the text for details.
    || How are end cuts created? (Grossberg 1984) Two stages of short-range competition. 1st stage: Simple cells -> complex cells -> hypercomplex - endstopped complex. First competitive stage- across position, same orientation; Second competitive stage- same position, across orientation. -> cooperation.
  • image p148fig04.26 End cuts are formed during neon color spreading in the same way that they are formed at line ends.
    || End cut during neon color spreading.
    FIRST competitive stage | SECOND competitive stage
    within orientation | across orientation
    across position | within position
    to generate end cuts.
  • image p149fig04.27 Bipole cells can form boundaries that interpolate end cuts, and use their cooperative-competitive interactions to choose the boundary groupings that have the most support from them.
    || Bipole cells: boundary completion. long-range cooperation & short-range inhibition: complete winning boundary groupings and suppress weaker boundaries.
  • image p150fig04.28 Bipole cells have two branches (A and B), or poles, in their receptive fields. They help to carry out long-range boundary completion.
    || Bipole property. Boundary completion via long-range cooperation. Completing boundaries inwardly between pairs or great numbers of inducers in an oriented way. fuzzy "AND" gate.
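    A toy sketch of the bipole "fuzzy AND gate" (my illustration; the threshold and pole reach are assumed, and real bipoles are also orientation-selective and interact with competition):

```python
import numpy as np

# Hedged toy sketch of the bipole property: boundary activity is kept where
# there is direct bottom-up support, and is completed inward only where BOTH
# receptive-field poles receive enough collinear support (fuzzy AND), so
# boundaries never grow outward past a lone inducer.
def bipole_complete(boundary, thresh=0.5, reach=6):
    out = boundary.copy()
    for i in range(len(boundary)):
        left = boundary[max(0, i - reach):i].sum()     # pole A support
        right = boundary[i + 1:i + 1 + reach].sum()    # pole B support
        if min(left, right) >= thresh:                 # fuzzy AND: both poles needed
            out[i] = 1.0                               # complete boundary inward
    return out

inducers = np.zeros(12)
inducers[[2, 8]] = 1.0                 # two collinear inducers with a gap
print(bipole_complete(inducers))       # gap fills in; nothing grows outward past an inducer
```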
  • image p151fig04.29 Experimental evidence of bipole cells in cortical area V2 was reported by Von der Heydt, Peterhans, and Baumgartner (1984).
    || Bipoles: first neurophysiological evidence (V2) (von der Heydt, Peterhans, Baumgartner 1984, Peterhans, von der Heydt 1988). (Grossberg 1984) prediction.
    Stimulus (S), probe location (*), response of cells in V2:
    ...(S)*... | YES
    ...*...(S) | NO
    (S)...*... | NO
    (S)...*...(S) | YES
    (S)...*... (more contrast) | NO
    (S)...*.....(S) | YES
    Evidence for receptive field.
  • image p151fig04.30 Anatomical evidence for long-range horizontal connections has also been reported, as illustrated by the example above from (Bosking etal 1997).
    || Anatomy: horizontal connections (V1) (Bosking etal 1997). tree shrew. [10, 20]*[20, 10, 0, -10, -20] (degrees).
  • image p152fig04.31 The predicted bipole cell receptive field (upper left corner) has been supported by both neurophysiological data and psychophysical data, and used in various forms by many modelers. See the text for details.
    || Bipoles through the ages. (Grossberg 1984; Grossberg, Mingolla 1985). (Field, Hayes, Hess 1993) "association field". (Heitger, von der Heydt 1993). (Williams, Jacobs 1997). cf. "relatability" geometric constraints on which contours get to group (Kellman & Shipley 1991). Also "tensor voting" (Ullman, Zucker, Mumford, Guy, Medioni, ...).
  • image p153fig04.32 The double filter network embodies simple, complex, and hypercomplex (or endstopped complex) cells. It feeds into a network of bipole cells that can complete boundaries when it properly interacts with the double filter.
    || Double filter and grouping network. Cells : simple -> complex -> hypercomplex (endstopping) -> bipole
    Grouping network | bipole cells
    Double filter | hypercomplex cells (endstopping) <- complex cells <- simple cells
  • image p156fig04.33 A tripartite texture (top row) and two bipartite textures (bottom row) that illustrate how emergent boundary groupings can segregate textured regions from one another.
  • image p157fig04.34 Some textures that were simulated with mixed success by the complex channels model. In particular, the model gets the wrong answer for the textures in (g) and (i). The Boundary Contour System model of Figure 4.32, which includes both a double filter and a bipole grouping network, simulates the observed results.
  • image p159fig04.35 Spatial impenetrability prevents grouping between the pac-men figures in the left figure, but not in the figure on the right.
    || p158c2h0.75 "... In the image shown in the left panel, the horizontal boundaries of the background squares interfere with vertical boundary completion by vertically-oriented bipole cells, again by spatial impenetrability. In contrast, the vertical boundaries of the background squares are collinear with the vertical pac-man inducers, thereby supporting formation of the square boundaries. Finer aspects of these percepts, such as why the square ... (right panel) appears to lie in front of four partially occluded circular discs, as regularly occurs when the Kanizsa square can form (eg Figure 3.3), can be understood using FACADE theory mechanisms that will be shown below to explain many figure-ground percepts using natural extensions to the three-dimensional world of the boundary and surface mechanisms that we have already discussed. ..."
  • image p159fig04.36 Graffiti art by Banksy exploits properties of amodal boundary completion and spatial impenetrability.
    || p159c1h0.75 perceptual psychologist Nava Rubin "... When the wall is smooth, Banksy leaves the regions previously covered by stencil unpainted, relying on observers
  • image p161fig04.37 Kanizsa squares that form either collinearly to their inducers (left panel) or perpendicular to them (right panel) confirm predictions of the BCS boundary completion model.
    || Analog-sensitive boundary completion. contour strength vs Kanizsa square image. Increases with "support ratio" (Shipley, Kellman 1992). Inverted-U (Lesher, Mingolla 1993; cf Soriano, Spillmann, Bach 1994)(shifted gratings). p370h0.6 BCS = Boundary Contour System, FCS = Feature Contour System. p161c1h0.85 "... As predicted by the BCS, they found an Inverted-U in contour strength as a function of line density. ... This effect may be explained by the action of the short-range competition that occurs before the stage of long-range cooperative grouping by bipole cells (Figure 4.32). It is thus another example of the balance between cooperative and competitive mechanisms. ..."
  • image p162fig04.38 How long-range cooperation among bipole cells and short-range competition by hypercomplex cells work together to generate the inverted-U in boundary strength that is found in the data of Figure 4.37 (right panel).
    || Cooperation and competition during grouping.
    few lines | wide spacing: inputs outside spatial range of competition, so more inputs cause higher bipole activity
    more lines | narrower spacing slightly weakens the net input to bipoles from each inducer
    increasing line density | causes inhibition to reduce the net total input to bipoles
  • image p163fig04.39 A schematic of the LAMINART model that explains key aspects of laminar visual cortical anatomy and dynamics. LGN -> V1 [6, 4, 2/3] -> V2 [6, 4, 2/3]
    || p163c1h0.6 "... The first article about laminar computing ... proposed how the laminar cortical model could process 2D pictures using bottom-up filtering and horizontal bipole grouping interactions (Grossberg, Mingolla, Ross 1997). In 1999, I was able to extend the model to also include top-down circuits for expectation and attention (Grossberg 1999)(right panel). Such a synthesis of laminar bottom-up, horizontal, and top-down circuits is characteristic of the cerebral cortex (left panel). I called it LAMINART because it began to show how properties of Adaptive Resonance Theory, or ART, notably the ART prediction about how top-down expectations and attention work, are realized by identical cortical cells and circuits. You can immediately see from the schematic laminar circuit diagram ... (right panel) that circuits in V2 seem to repeat circuits in V1, albeit with a larger spatial scale, despite the fact that V1 and V2 carry out different functions. How this anatomical similarity can coexist with functional diversity will be clarified in subsequent sections and chapters. It enables different kinds of biological intelligence to communicate seamlessly while carrying out their different psychological functions. ..."
  • image p164fig04.40 The Koffka-Benussi ring. See the text for details.
    || p164c2h0.25 "... [left image] The luminance of the ring is intermediate between the luminances of the two background regions. Its perceived brightness is also between the brightnesses of the two background regions, and appears to be uniform throughout. The right image differs from the left only in that a vertical line divides the two halves of the ring where it intersects the two halves in the background. Although the luminance of the ring is still uniform throughout, the two halves of the ring now have noticeably different brightnesses, with the left half of the ring looking darker than the right half. How can drawing a line have such a profound effect on the brightnesses of surface positions that are so far away from the line? ..."
  • image p165fig04.41 The Kanizsa-Minguzzi ring. See the text for details.
    || p165c1h0.6 "... (left panel), the annulus is divided by two line segments into annular sectors of unequal area. Careful viewing shows that the smaller sector looks a little brighter than the larger one. (Kanizsa, Minguzzi 1986) noted that "this unexpected effect is not easily explained. In fact, it cannot be accounted for by any simple psychological mechanism such as lateral inhibition or frequency filtering. Furthermore, it does not seem obvious to invoke organizational factors, like figural belongingness or figure-ground articulation."". p165c2h0.35 "... (Grossberg, Todorovic 1988). Our main claim is that the two radial lines play two roles, one in the formation of boundaries with which to contain the filling-in process, and the other as a source of feature contour signals that are filled-in within the annular regions to create a surface brightness percept. ..."
  • image p166fig04.42 Computer simulation of Kanizsa-Minguzzi ring percept. See the text for details.
  • image p167fig04.43 (a) How bipole cells cause end cuts. (b) The Necker cube generates a bistable percept of two 3D parallelepipeds. (c) Focusing spatial attention on one of the disks makes it look both nearer and darker, as (Tse 1995) noted and (Grossberg, Yazdanbakhsh 1995) explained.
    || T-junction sensitivity. image -> bipole cells -> boundary. (+) long-range cooperation, (-) short-range competition.
  • image p168fig04.44 Macrocircuit of the main boundary and surface formation stages that take place from the lateral geniculate nucleus, or LGN, through cortical areas [V1, V2, V4]. See the text for details.
  • image p168fig04.45 How ON and OFF feature contour (FC) activities give rise to filled-in surface regions when they are adjacent to a like-oriented boundary, but not otherwise.
  • image p170fig04.46 Surface regions can fill-in using feature contour inputs (+ and - signs) if they are adjacent to, and collinear with, boundary contour inputs (solid) line, as in (a), but not otherwise, as in (b).
  • image p170fig04.47 A double-opponent network processes output signals from opponent ON and OFF Filling-In DOmains, or FIDOs.
    || OFF FIDO -> shunting networks -> ON FIDO -> shunting networks -> opponent interaction -> FIDO outputs
  • image p171fig04.48 How closed boundaries contain filling-in of feature contour signals, whereas open boundaries allow color to spread to both sides of the boundary.
    || Before filling-in: boundary contour, illuminant-discounted feature contour; After filling-in: no gap, gap
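    A one-dimensional sketch of boundary-gated filling-in (my illustration; a diffusion approximation with assumed rates and sizes, not the book's equations):

```python
import numpy as np

# Hedged sketch: feature-contour activity diffuses between neighboring cells
# unless a boundary blocks the connection. With a closed boundary pair the
# color stays contained; with one boundary missing it spreads past the gap.
def fill_in(feature, boundary, steps=2000, rate=0.2):
    x = feature.copy()
    for _ in range(steps):
        perm = 1.0 - boundary            # permeability of each between-cell gap
        flow = perm * (x[1:] - x[:-1])   # diffusive flow, high to low activity
        x[:-1] += rate * flow
        x[1:] -= rate * flow
    return x

feature = np.zeros(20)
feature[10] = 1.0                              # a feature-contour input
closed = np.zeros(19); closed[[7, 12]] = 1.0   # boundaries on both sides
leaky = np.zeros(19); leaky[7] = 1.0           # right boundary missing
print(fill_in(feature, closed).round(2))       # color contained between the boundaries
print(fill_in(feature, leaky).round(2))        # color spreads rightward past the gap
```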
  • image p171fig04.49 An example of DaVinci stereopsis in which the left eye sees more of the wall between A and C than the right eye does. The region between B and C is seen only by the left eye because the nearer wall between C and D occludes it from the right eye view.
  • image p173fig04.50 This figure illustrates how a closed boundary can be formed in a prescribed depth due to addition of binocular and monocular boundaries, but not at other depths.
    || How are closed 3D boundaries formed? V1 Binocular, V2 boundary, V2 surface; Prediction: monocular and horizontal boundaries are added to ALL binocular boundaries along the line of sight. Regions that are surrounded by a CLOSED boundary can depth-selectively contain filling-in of lightness and color signals.
  • image p174fig04.51 The same feedback circuit that ensures complementary consistency between boundaries and surfaces also, automatically, initiates figure-ground separation! See the text for details.
    || before feedback: [V1 -> V2 pale stripe -> V2 thin stripe, "attention pointers" (Cavanagh etal 2010)]; after feedback: [V1 + V2 thin stripe] -> V2 pale stripe via contrast sensitive [excitation, inhibition] for depths [1, 2] -> object recognition
  • image p174fig04.52 An example of how the 3D LAMINART model can transform the two monocular images of the random dot stereogram in the top row into the three depth-separated surface representations in the bottom row.
    || Stereogram surface percepts: surface lightnesses are segregated in depth (Fang and Grossberg 2009). [left, right] inputs, [far, fixation, near] planes. Contrast with algorithms that just compute disparity matches and let computer code build the surface, eg (Marr, Poggio, etal 1974).
  • image p176fig04.53 The on-center off-surround network within position and across depth helps to explain why brighter Kanizsa squares look closer.
    || inhibition vs. depth. p176c1h0.25 "... to qualitatively understand how this example of proximity-luminance covariance works. It follows directly from the boundary pruning by surface contour feedback signals (Figure 4.51) that achieves complementary consistency and initiates figure-ground perception. ...". p176c1h0.45 "... these inhibitory signals are part of an off-surround network whose strength decreases as the depth difference increases between the surface that generates the signal and its recipient boundaries. ...". p176c1h0.8 "... Within FACADE theory, the perceived depth of a surface is controlled by the boundaries that act as its filling-in generators and barriers (Figure 3.22), since these boundaries select the depth-selective FIDOs within which filling-in can occur, and thereby achieve surface capture. These boundaries, in turn, are themselves strengthened after surface-to-boundary contour feedback eliminates redundant boundaries that cannot support successful filling-in (Figure 4.51). These surface contour feedback signals have precisely the properties that are needed to explain why brighter Kanizsa squares look closer! ..."
  • image p178fig04.54 Initial steps in figure-ground separation. See the text for details.
    ||
    topLeft: repeats the image in Figure 1.3
    topRight: shows again the long-range cooperation and short-range competition that are controlled by the bipole grouping process (Figure 4.43a middle panel)
    bottomLeft: shows the end gaps that are caused by these bipole grouping mechanisms
    bottomRight: shows how surface filling-in is contained within the closed horizontal rectangular boundary, but spills out of the end gaps formed in the other two rectangles
  • image p178fig04.55 Amodal completion of boundaries and surfaces in V2.
    || Separated V2 boundaries: near, far (amodal boundary completion); Separated V2 surfaces: ?horizontal, vertical? (amodal surface filling-in).
  • image p179fig04.56 Final steps in generating a visible, figure-ground separated, 3D surface representation in V4 of the unoccluded parts of opaque surfaces.
    || Visible surface perception.
    Boundary enrichment: | near | far | asymmetry between near & far
    V4 | horizontal rectangle | horizontal & vertical rectangles | cannot use these (overlapping?) boundaries for occluded object recognition
    V2 | horizontal rectangle | vertical rectangle | use these boundaries for occluded object recognition
    Visible surface filling-in: | filling-in of entire vertical rectangle | partial filling-in of horizontal rectangle | visible percept of unoccluded [vertical] surface
  • image p181fig04.57 Percepts of unimodal and bistable transparency (top row) as well as of a flat 2D surface (bottom row, left column) can be induced just by changing the relative contrasts in an image with a fixed geometry.
    || X junction
  • image p182fig04.58 LAMINART model processing stages that are sufficient to explain many percepts of transparency, including those summarized in Figure 4.57.
    || [left, right] eye, [LGN, V1 [6, 4, 3B, 2/3 A], V2 [4, 2/3]], [mo, bi]nocular cart [simple, complex] cells, [excita, inhibi]tory cart [connection, cell]s.
  • image p186fig05.01 Humans and other autonomous adaptive intelligent agents need to be able to learn both many-to-one and one-to-many maps.
    || Learn many-to-one (compression, naming) and one-to-many (expert knowledge) maps
  • image p186fig05.02 Learning a many-to-one map from multiple visual fonts of a letter to the letter
  • image p186fig05.03 Many-to-one maps can learn a huge variety of kinds of predictive information.
    || Many-to-one map, two stage compression: IF-THEN rules: [symptom, test, treatment]s; length of stay in hospital
  • image p189fig05.04 The hippocampus is one of several brain regions that are important in learning and remembering about objects and events that we experience throughout life. The book will describe several hippocampal processes that contribute to this achievement in different ways.
    || hypothalamic nuclei, amygdala, hippocampus, cingulate gyrus, corpus callosum, thalamus
  • image p192fig05.05 ON and OFF cells in the LGN respond differently to the sides and ends of lines.
    || [ON, OFF]-center, [OFF, ON]-surround (respectively). OFF-center cells maximum response at line end (interior), ON-center cells maximum response along sides (exterior)
  • image p192fig05.06 Bottom-up and top-down circuits between the LGN and cortical area V1. The top-down circuits obey the ART Matching Rule for matching with bottom-up input patterns and focussing attention on expected critical features.
    || Model V1-LGN circuits, version [1, 2]. retina -> LGN relay cells -> interneurons -> cortex [simple, endstopped] cells -> cortex complex cells
  • image p193fig05.07 A more detailed description of the connections between retinal ganglion cells, the LGN, and V1.
    ||
  • image p193fig05.08 The patterns of LGN activation and inhibition on the sides and ends of a line without the top-down feedback (A) and with it (C). The top-down distribution of excitation (+) and inhibition (-) are shown in (B).
    ||
  • image p194fig05.09 A computer simulation of the percept (D) that is generated by feature contours (B) and boundary contours (C) in response to an Ehrenstein disk stimulus (A).
    ||
  • image p198fig05.10 A competitive learning circuit learns to transform distributed feature patterns into selective responses of recognition categories.
    || Competitive learning and Self-Organizing Maps (SOMs). input patterns -> feature level (F1) -> adaptive filter (T=ZS) -> category level (F2)
  • image p199fig05.11 Instar learning enables a bottom-up adaptive filter to become selectively tuned to particular feature patterns. Such pattern learning needs adaptive weights that can either increase or decrease to match the featural activations that they filter.
    || Instar learning STM->LTM: need both increases and decreases in strength for the LTM pattern to learn the STM pattern
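    A minimal sketch of competitive learning with instar updates, which also illustrates the "classifying vector" geometry of Figures 5.16-5.17 (my illustration; the sizes, seed, and learning rate are assumptions):

```python
import numpy as np

# Hedged sketch: the category whose weight vector receives the largest
# total input wins, and only the winner's bottom-up LTM vector moves
# toward the current STM pattern, so weights both increase and decrease.
rng = np.random.default_rng(0)
z = rng.uniform(size=(3, 4))          # 3 categories x 4 features (LTM weights)

def instar_step(x, z, lr=0.2):
    j = int(np.argmax(z @ x))         # winner-take-all choice: max T_j = z_j . x
    z[j] += lr * (x - z[j])           # instar: z_j becomes more parallel to x
    return j

x = np.array([1.0, 0.0, 1.0, 0.0])    # an input feature pattern (STM)
for _ in range(20):
    j = instar_step(x, z)
print(j, z[j].round(2))               # winner's classifying vector ~ matches x
```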
  • image p200fig05.12 The duality of the outstar and instar networks is evident when they are drawn as above.
    ||
  • image p200fig05.13 Instar and outstar learning are often used to learn the adaptive weights in the bottom-up filters and top-down expectations that occur in ART. The ART Matching Rule for object attention enables top-down expectations to select, amplify, and synchronize expected patterns of critical features, while suppressing unexpected features.
    || Expectations focus attention: feature pattern (STM), Bottom-Up adaptive filter (LTM), Category (STM), competition, Top-Down expectation (LTM); ART Matching Rule: STM before top-down matching, STM after top-down matching (attention!)
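    A toy sketch of the ART Matching Rule described above (my illustration; the gain and patterns are assumed, and it models the state while a top-down expectation is being read out):

```python
import numpy as np

# Hedged sketch of the ART Matching Rule (modulatory on-center, driving
# off-surround): features with both bottom-up and top-down support are
# selected and amplified; bottom-up features outside the expectation are
# suppressed; top-down signals alone cannot activate a feature.
def art_match(bottom_up, expectation, gain=0.5):
    return bottom_up * (1.0 + gain * expectation) * (expectation > 0)

I = np.array([0.8, 0.6, 0.4, 0.0])    # bottom-up feature pattern (STM)
E = np.array([1.0, 1.0, 0.0, 1.0])    # top-down expectation (LTM read-out)
print(art_match(I, E))                # [1.2, 0.9, 0.0, 0.0]: attended, amplified, pruned
```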
  • image p200fig05.14 Outstar learning enables individual sampling cells to learn distributed spatial patterns of activation at the network of cells that they sample. Again, both increases and decreases in LTM traces must be possible to enable them to match the activity pattern at the sampled cells.
    || Outstar learning: need both increases and decreases in LTM traces for the LTM pattern to learn the sampled STM pattern
  • image p201fig05.15 An outstar can learn an arbitrary spatial pattern of activation at its sampled nodes, or cells. The net pattern that is learned is a time average of all the patterns that are active at the sampled nodes when the sampling node is active.
    || Spatial learning pattern, outstar learning.
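    The dual outstar sketch (my illustration; rates and sizes are assumptions):

```python
import numpy as np

# Hedged sketch of outstar learning: while the sampling cell fires, its
# outgoing LTM traces track the spatial pattern of activity at the sampled
# nodes, moving up OR down toward it -- a running time-average of all
# patterns that are active during sampling.
w = np.zeros(5)                                   # outstar LTM traces

def outstar_step(w, sampled_pattern, sampling_signal, lr=0.1):
    return w + lr * sampling_signal * (sampled_pattern - w)

pattern = np.array([0.1, 0.3, 0.2, 0.4, 0.0])     # spatial pattern at sampled nodes
for _ in range(50):
    w = outstar_step(w, pattern, sampling_signal=1.0)
print(w.round(2))                                 # ~ the sampled spatial pattern
```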
  • image p202fig05.16 In the simplest example of category learning, the category that receives the largest total input from the feature level is chosen, and drives learning in the adaptive weights that abut it. Learning in this "classifying vector", denoted by zi, makes this vector more parallel to the input vector from the feature level that is driving the learning (dashed red arrow).
    || Geometry of choice and learning
  • image p202fig05.17 This figure summarizes the simplest equations whereby the adaptive weights of a winning category learn the input pattern that drove it to win, or more generally a time-average of all the input patterns that succeeded in doing so.
    || Geometry of choice and learning, learning trains the closest LTM vector
  • image p205fig05.18 How catastrophic forgetting can occur in a competitive learning or self-organizing map model due to basic properties of competition and associative learning.
    || Learning from pattern sequences, practicing a sequence of spatial patterns can recode all of them! When is learning stable? Input patterns cannot be too dense relative to the number of categories; Either: not too many distributed inputs relative to the number of categories, or not too many input clusters
  • image p207fig05.19 The ART hypothesis testing and learning cycle. See the text for details about how the attentional system and orienting system interact in order to incorporate learning of novel categories into the corpus of already learned categories without causing catastrophic forgetting.
    ||
  • image p211fig05.20 The PN and N200 event-related potentials are computationally complementary events that are computed within the attentional and orienting systems.
    || PN and N200 are complementary waves. PN [top-down, conditionable, specific] match; N200 [bottom-up, unconditionable, nonspecific] mismatch
  • image p211fig05.21 Sequences of P120, N200, and P300 event-related potentials occur during oddball learning EEG experiments under conditions that ART predicted should occur during sequences of mismatch, arousal, and STM reset events, respectively.
    || ERP support for mismatch-mediated reset: event-related potentials: human scalp potentials. ART predicted correlated sequences of P120-N200-P300 Event Related Potentials during oddball learning. P120 mismatch; N200 arousal/novelty; P300 STM reset. Confirmed in (Banquet and Grossberg 1987)
  • image p213fig05.22 Suppose that a very different exemplar activates a category than the one that originally learned how to do this.
    || By prior learning, X1 at F1 is coded at F2, Suppose that X2 incorrectly activates the same F2 code. How to correct the error? The problem occurs no matter how you define an "error"
  • image p213fig05.23 A category, symbol, or other highly compressed representation cannot determine whether an error has occurred.
    || Compression vs error correction. past vs present. Where is the knowledge that an error was made? Not at F2! The compressed code cannot tell the difference! X2 is at F1 when (green right triangle GRT) is at F2 defines the error. There is a mismatch between X1 and X2 at F1. How does the system know this?
  • image p214fig05.24 Learning of a top-down expectation must occur during bottom-up learning in the adaptive filter in order to be able to match the previously associated feature pattern with the one that is currently active.
    || Learning top-down expectations. When the code (green right triangle GRT) for X1 was learned at F2, GRT learned to read-out X1 at F1. [Bottom-Up, Top-Down] learning
  • image p214fig05.25 The sequence of events whereby a novel input pattern can activate a category which, in turn, reads out its learned top-down expectation to be matched against the input pattern. Error correction thus requires the use of a Match Detector that has properties of the Processing Negativity ERP.
    || How is an error corrected. During bottom-up learning, top-down learning must also occur so that the pattern that is read out top-down can be compared with the pattern that is activated by bottom-up inputs. Match detector: Processing Negativity ERP. 1. top-down, 2. conditionable, 3. specific, 4. match
  • image p214fig05.26 When a big enough mismatch occurs, the orienting system is activated and sends a burst of nonspecific arousal to the category level. This Mismatch Detector has properties of the N200 ERP.
    || Mismatch triggers nonspecific arousal. Mismatch at F1 elicits a nonspecific event at F2. Call this event nonspecific arousal. N200 ERP Näätänen etal: 1. bottom-up, 2. unconditionable, 3. nonspecific, 4. mismatch
  • image p215fig05.27 Every event activates both the attentional system and the orienting system. This text explains why.
    || Attentional and Orienting systems. Every event has a cue (specific) and an arousal (nonspecific) function
  • image p215fig05.28 How a mismatch between bottom-up and top-down input patterns can trigger activation of the orienting system A and, with it, a burst of nonspecific arousal to the category level.
    || Mismatch -> inhibition -> arousal -> reset. BU input orienting arousal, BU+TD mismatch arousal and reset. ART Matching Rule: TD mismatch can suppress a part of F1 STM pattern, F2 is reset if degree of match < vigilance
  • image p220fig05.29 Vigilance is a gain parameter on inputs to the orienting system that regulates whether net excitation from bottom-up inputs or inhibition from activated categories will dominate the orienting system. If excitation wins, then a memory search for a better matching will occur. If inhibition wins, then the orienting system will remain quiet, thereby enabling resonance and learning to occur.
    || Vigilance control [resonate and learn, reset and search]. ρ is a sensitivity or gain parameter
  • image p221fig05.30 When a predictive disconfirmation occurs, vigilance increases enough to drive a search for a more predictive category. If vigilance increases just enough to exceed the analog match between features that survive top-down matching and the entire bottom-up input pattern, then minimax learning occurs. In this case, the minimum amount of category generalization is given up to correct the predictive error.
    || Match tracking realizes minimax learning principle. Given a predictive error, vigilance increases just enough to trigger search and thus sacrifices the minimum generalization to correct the error ... and enables expert knowledge to be incrementally learned. predictive error -> vigilance increase just enough -> minimax learning
  • image p221fig05.31 A system like Fuzzy ARTMAP can learn to associate learned categories in one ART network with learned categories in a second ART network. Because both bottom-up and top-down interactions occur in both networks, a bottom-up input pattern to the first ART network can learn to generate a top-down output pattern from the second ART network.
    || Fuzzy ARTMAP. Match tracking realizes minimax learning principle: vigilance increases to just above the match ratio of prototype / exemplar, thereby triggering search
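    A compact sketch of vigilance control and match tracking in the style of binary/fuzzy ART (my illustration; the patterns, prototype, and increment are assumed values):

```python
import numpy as np

# Hedged sketch: resonance occurs if the match ratio |I ^ w| / |I| meets
# the vigilance rho; otherwise the orienting system resets the category.
def match_ratio(I, w):
    return np.minimum(I, w).sum() / I.sum()   # fuzzy AND, normalized by |I|

I = np.array([1.0, 1.0, 0.0, 1.0])            # input pattern
w = np.array([1.0, 1.0, 1.0, 0.0])            # chosen category's prototype
rho = 0.5                                     # baseline vigilance
r = match_ratio(I, w)
print(r, "resonate" if r >= rho else "reset") # 0.67 >= 0.5: resonate and learn

# After a predictive error, match tracking raises vigilance just above the
# current match ratio, sacrificing minimal generalization (minimax learning):
rho = r + 0.01
print(rho, "reset and search" if r < rho else "resonate")
```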
  • image p224fig05.32 Learning the alphabet with two different levels of vigilance. The vigilance in column (b) is higher than in column (a), leading to more concrete categories with less abstract prototypes. See the text for details.
    ||
  • image p225fig05.33 Some early ARTMAP benchmark studies. These successes led to the use of ARTMAP, and many variants that we and other groups have developed, in many large-scale applications in engineering and technology, a stream that has not abated even today.
    || see Early ARTMAP benchmark studies
  • image p225fig05.34 ARTMAP was successfully used to learn maps of natural terrains with many advantages over those of mapping projects that used AI expert systems. The advantages are so great that many mapping projects started to use this technology.
    || AI expert system - 1 year: field identification of natural regions; derivation of ad hoc rules for each region by expert geographers; correct 80,000 of 250,000 site labels; 230m (site-level) scale. ARTMAP system - 1 day: rapid, automatic, no natural regions or rules; confidence map; 30m (pixel-level) scale can see roads; equal accuracy at test sites
  • image p226fig05.35 I had shown in 1976 how a competitive learning or self-organizing map model could undergo catastrophic forgetting if the input environment was sufficiently dense and nonstationary, as illustrated by Figure 5.18. Later work with Gail Carpenter showed how, if the ART Matching Rule was shut off, repeating just four input patterns in the correct order could also cause catastrophic forgetting by causing superset recoding, as illustrated in Figure 5.36.
    || Code instability input sequences. D C A; B A; B C = ; |D|<|B|<|C|; where |E| is the number of features in the set E. Any set of input vectors that satisfy the above conditions will lead to unstable coding if they are periodically presented in the order ABCAD and the top-down ART Matching Rule is shut off.
  • image p226fig05.36 Column (a) shows catastrophic forgetting when the ART Matching Rule is not operative. It is due to superset recoding. Column (b) shows how category learning quickly stabilizes when the ART Matching Rule is restored.
    || Stable and unstable learning, superset recoding
  • image p228fig05.37 A macrocircuit of the neurotrophic Spectrally Timed ART, or nSTART, model. I developed nSTART with my PhD student Daniel Franklin. It proposes how adaptively timed learning in the hippocampus, bolstered by Brain Derived Neurotrophic Factor, or BDNF, helps to ensure normal memory consolidation.
    || habituative gates, CS, US, Thalamus (sensory cortex, category learning, conditioned reinforcer learning, adaptively timed learning and BDNF), Amygdala (incentive motivation learning), Hippocampus (BDNF), Prefrontal Cortex (attention), Pontine nuclei, Cerebellum (adaptively timed motor learning)
  • image p230fig05.38 The Synchronous Matching ART, or SMART, model includes spiking neurons in a laminar cortical hierarchy. I developed SMART with my PhD student Massimiliano Versace. By unlumping LAMINART to include spiking neurons, finer details of neurodynamics, such as the existence of faster gamma oscillations during good enough matches, and slower beta oscillations during bad enough mismatches, could be shown as emergent properties of network interactions.
    || Second order thalamus -> specific thalamic nucleus -> Thalamic reticular nucleus -> neocortical laminar circuit [6ll, 6l, 5, 2/3, 1] -> Higher order cortex. Similar for First order thalamus -> First order cortex, with interconnection to Second order, nonspecific thalamic nucleus
  • image p231fig05.39 The SMART hypothesis testing and learning cycle predicts that vigilance increases when a mismatch in subcortical regions like the nonspecific thalamus activates the nucleus basalis of Meynert which, in turn, broadcasts a burst of the neurotransmitter acetylcholine, or ACh, to deeper cortical layers. Due to the way in which LAMINART proposes that cortical matching and mismatching occurs, this ACh burst can increase vigilance and thereby trigger a memory search. See the text for details.
    || [BU input, [, non]specific thalamic nucleus, thalamic reticular nucleus, neocortical laminar circuit] cart [Arousal, Reset, Search, Vigilance]
  • image p232fig05.40 Computer simulation of how the SMART model generates (a) gamma oscillations if a good enough match occurs, or (c) beta oscillations if a bad enough match occurs. See the text for details.
    || Brain oscillations during match/mismatch, data, simulation. (a) TD corticothalamic feedback increases synchrony (Sillito etal 1994) (b) Match increases γ oscillations (c) Mismatch increases θ,β oscillations
  • image p232fig05.41 (a)-(c). The sequence of interlaminar events that SMART predicts during a mismatch reset. (d) Some of the compatible neurophysiological data.
    || Mismatch causes layer 5 dendritic spikes that trigger reset. (a) Arousal causes increase in nonspecific thalamic nuclei firing rate and layer 5 dendritic and later somatic spikes (Larkum and Zhu 2002, Williams and Stuart 1999) (b) Layer 5 spikes reach layer 4 via layer 6i and inhibitory neurons (Lund and Boothe 1975, Gilbert and Wiesel 1979) (c) habituative neurotransmitters in layer 6i shift the balance of active cells in layer 4 (Grossberg 1972, 1976) (d) Dendritic stimulation fires layer 5 (Larkum and Zhu 2002) stimulation apical dendrites of nonspecific thalamus
  • image p233fig05.42 Mismatch-induced beta oscillations have been reported in at least three parts of the brain: V1, V4, and hippocampus. Although there may be other reasons for beta oscillations in the brain, those that are caused by a mismatch should be studied in concert with the gamma oscillations that occur during a good enough match. See the text for details.
    || Is there evidence for the [gamma, beta] prediction? Yes, in at least three parts of the brain (Buffalo EA, Fries P, Landman R, Buschman TJ, Desimone R 2011, PNAS 108, 11262-11267). Does this difference in average oscillation frequencies in the superficial and deep layers reflect layer 4 reset? Superficial recording γ (gamma), Deep recording β (beta) (Berke etal 2008, hippocampus; Buschman and Miller 2009, FEF)
  • image p236fig05.43 The activation of the nucleus basalis of Meynert, and its subsequent release of ACh into deeper layers of neocortex, notably layer 5, is assumed to increase vigilance by reducing afterhyperpolarization (AHP) currents.
    || Vigilance control: mismatch-mediated acetylcholine release (Grossberg and Versace 2008). Acetylcholine (ACh) regulation by nonspecific thalamic nuclei via nucleus basalis of Meynert reduces AHP in layer 5 and causes a mismatch/reset thereby increasing vigilance. HIGH vigilance ~ sharp code, LOW vigilance ~ coarse code
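The vigilance computation summarized here can be made concrete with a small sketch. The match test below follows the fuzzy-ART convention (componentwise minimum against the category prototype); the function and parameter names are illustrative, not the exact SMART equations:

```python
import numpy as np

def vigilance_test(input_vec, prototype, rho):
    """ART-style match test: accept the category only if the matched
    fraction of the input, |I ^ w| / |I|, meets the vigilance rho."""
    match = np.minimum(input_vec, prototype).sum() / input_vec.sum()
    return match >= rho  # True: resonate and learn; False: reset and search

I = np.array([1.0, 0.8, 0.0, 0.2])
w = np.array([1.0, 0.9, 0.1, 0.0])
print(vigilance_test(I, w, rho=0.5))   # LOW vigilance: coarse category accepted
print(vigilance_test(I, w, rho=0.95))  # HIGH vigilance: mismatch, reset and search
```

Raising rho, as the ACh burst is proposed to do, forces more resets and hence finer (sharper) categories.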
  • image p240fig05.44 When an algebraic exemplar model is realized using only local computations, it starts looking like an ART prototype model.
    || How does the model know which exemplars are in category A? BU-TD learning. How does a NOVEL test item access category A?
  • image p241fig05.45 The 5-4 category structure is one example of how an ART network learns the same kinds of categories as human learners. See the text for details.
    || 5-4 Category structure. A1-A5: closer to the (1 1 1 1) prototype; B1-B4: closer to the (0 0 0 0) prototype
  • image p242fig05.46 Computer simulations of how two variants of Distributed ARTMAP incrementally learn the 5-4 category structure. See the text for details.
    || Distributed ARTMAP with [self-supervised learning, post-training LTM noise]
  • image p245fig05.47 How long-range excitatory connections and short-range disynaptic inhibitory connections realize the bipole grouping law.
    || stimulus -> boundary representation -> layer 2/3
  • image p246fig05.48 Microcircuits of the LAMINART model that I developed with Rajeev Raizada. See the text for details of how they integrate bottom-up adaptive filtering, horizontal bipole grouping, and top-down attentional matching that satisfies the ART Matching Rule.
    ||
  • image p248fig05.49 This circuit of the LAMINART model helps to explain properties of Up and Down states during slow wave sleep, and how disturbances in ACh dynamics can disrupt them.
    ||
  • image p252fig06.01 A surface-shroud resonance begins to form when the surface representations of objects bid for spatial attention. In addition to these topographic excitatory inputs, there is long-range inhibition of the spatial attention cells that determines which inputs will attract spatial attention.
    || Bottom-up spatial attention competition. [more, less] luminous perceptual surfaces -> competition -> spatial attention
  • image p253fig06.02 After bottom-up surface inputs activate spatial attentional cells, they send top-down topographic excitatory signals back to the surface representations. This recurrent shunting on-center off-surround network contrast enhances larger attentional activities while approximately normalizing the total spatial attentional activity. A surface-shroud resonance hereby forms that selects an attentional shroud, enhances the perceived contrast of the attended surface (light blue region), and maintains spatial attention on it.
    || Surface-shroud resonance. perceptual surfaces -> competition -> spatial attention. (Carrasco, Penpeci-Talgar, and Eckstein 2000, Reynolds and Desimone 2003)
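A minimal sketch of the recurrent shunting on-center off-surround dynamics that the caption invokes, assuming the faster-than-linear signal function f(x) = x^2 that is one standard choice for contrast enhancement; parameters are illustrative:

```python
import numpy as np

def shunting_competition(I, steps=2000, dt=0.01, A=1.0, B=1.0):
    # dx_i/dt = -A x_i + (B - x_i)(I_i + f(x_i)) - x_i * sum_{k != i} f(x_k)
    f = lambda x: x**2                  # faster-than-linear: contrast-enhances
    x = np.zeros_like(I)
    for _ in range(steps):
        F = f(x)
        off = F.sum() - F               # recurrent off-surround signals
        x += dt * (-A * x + (B - x) * (I + F) - x * off)
    return x

surfaces = np.array([0.9, 0.5, 0.3])    # more vs. less luminous surfaces
print(shunting_competition(surfaces))   # largest input enhanced; total bounded by B
```

The shunting terms keep every activity in [0, B], which is the self-normalizing property that lets the shroud enhance the attended surface without runaway excitation.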
  • image p254fig06.03 These interactions of the ARTSCAN Search model enable it to learn to recognize and name invariant object categories. Interactions between spatial attention in the Where cortical stream, via surface-shroud resonances, and object attention in the What cortical stream, that obeys the ART Matching Rule, coordinate these learning, recognition, and naming processes.
    || Retinal image -> On & OFF cell contrast normalization (retina/LGN) -> polarity [, in]sensitive contrast enhancement (V1) -> object [boundary (V2), surface (V2/V4)] -> surface contour (V2); What stream categories: volition control (BG). object boundary (V2) <-> view (ITp) <-> view integrator (ITp) <-> object (ITa) <-> [object-value (ORB), value (Amyg)] <-> name (PFC)
  • image p255fig06.04 The ARTSCAN Search model can also search for a desired target object in a scene, thereby clarifying how our brains solve the Where's Waldo problem.
  • image p257fig06.05 A curve tracing task with monkeys was used by Roelfsema, Lamme, and Spekreijse in 1998 to demonstrate how spatial attention can flow along object boundaries. See the text for details.
    || Attention flows along curves: Roelfsema etal 1998: Macaque V1. fixation (300ms) -> stimulus (600ms RF - target curve, distractor) -> saccade. Crossed-curve condition: attention flows across junction between smoothly connected curve segments, Gestalt good continuation
  • image p258fig06.06 Neurophysiological data and simulation of how attention can flow along a curve. See the text for details.
    || Simulation of Roelfsema etal 1998, data & simulation. Attention directed only to far end of curve. Propagates along active layer 2/3 grouping to distal neurons.
  • image p258fig06.07 A top-down spotlight of attention can also be converted into a shroud. This process begins when the spotlight triggers surface filling-in within a region. Figure 6.8 shows how it is completed.
    || Reconciling spotlights and shrouds: top-down attentional spotlight becomes a shroud. spotlight of attention, surface filling-in
  • image p259fig06.08 The distributed ARTSCAN, or dARTSCAN, model includes spatial attention in both PPC and PFC, and both fast-acting attention, triggered by transient cells in Where cortical areas such as MT, and slower-acting surface-shroud resonances in What cortical areas such as V4 and PPC. See the text for details.
    || dARTSCAN spatial attention hierarchy, Fast (Where stream) Slow (What stream) (Foley, Grossberg, and Mingolla 2012). [transient cells (MT) ->, object surfaces (V4) <->] [object shrouds (PPC) <-> spatial shrouds (PPC/PFC)]
  • image p260fig06.09 Crowding in the periphery of the eye can be avoided by expanding the size and spacing of the letters to match the cortical magnification factor.
    || Crowding: visible objects and confused recognition. Accurate target recognition requires increased flanker spacing at higher eccentricity
  • image p260fig06.10 The cortical magnification factor transforms (A) Cartesian coordinates in the retina into (B) log polar coordinates in visual cortical area V1.
    ||
  • image p261fig06.11 If the sizes and distances between the letters stay the same as they are received by more peripheral parts of the retina, then all three letters may be covered by a single shroud, thereby preventing their individual perception and recognition.
    || Crowding: visible objects and confused recognition. log compression and center-surround processing cause... input same eccentricity, surface, object shroud, crowding threshold. object shrouds merge!
  • image p261fig06.12 Pop-out of the L among Ts.
  • image p265fig06.13 The basal ganglia gate perceptual, cognitive, emotional, and more processes through parallel loops.
    || [motor, oculomotor, dorsolateral, ventral-orbital, anterior cingulate] vs. [Thalamus, pallidum-substantia nigra, Striatum, Cortex]
  • image p267fig06.14 Feedback from object surfaces to object boundaries uses surface contours. This feedback assures complementary consistency and enables figure-ground separation. A corollary discharge of the surface contours can be used to compute salient object feature positions.
    || Perceptual consistency and figure-ground separation.
  • image p268fig06.15 The largest salient feature signal is chosen to determine the next target position of a saccadic eye movement. This target position signal self-inhibits to enable the next most salient position to be foveated. In this way, multiple feature combinations of the object can be foveated and categorized. This process clarifies how the eyes can explore even novel objects before moving to other objects. These eye movements enable invariant categories to be learned. Each newly chosen target position is, moreover, an "attention pointer" whereby attention shifts to the newly foveated object position.
    || How are saccades within an object determined? Figure-ground outputs control eye movements via V3A! Support for prediction (Theeuwes, Mathot, and Kingstone 2010), More support: "attention pointers" (Cavanagh etal 2010), Even more support (Backus etal 2001, Caplovitz and Tse 2006, Galletti and Battaglia 1989, Nakamura and Colby 2000)
  • image p270fig06.16 The same target position signal that can command the next saccade also updates a gain field that predictively maintains the attentional shroud in head-centered coordinates, even before the eye movement is complete. This process keeps the shroud invariant under eye movements, so that it can continue to inhibit reset of an emerging invariant category as it is associated with multiple object views, even while the conscious surface representation shifts with each eye movement in retinotopic coordinates. This updating process is often called predictive remapping.
    || Predictive remapping of eye movements! From V3A to LIP. [spatial attention, object attention, figure-ground separation, eye movement remapping, visual search]. (Beauvillain etal 2005, Carlson-Radvansky 1999, Cavanaugh etal 2001, Fecteau & Munoz 2003, Henderson & Hollingworth 2003, Irwin 1991)
  • image p271fig06.17 Persistent activity in IT cells is just what is needed to enable view-invariant object category learning by ARTSCAN to be generalized to [view, position, size]-invariant category learning by positional ARTSCAN, or pARTSCAN. See the text for details.
    || Persistent activity in IT. Physiological data show that persistent activity exists in IT (Fuster and Jervey 1981, Miyashita and Chang 1988, Tomita etal 1999). Adapted from (Tomita etal 1999 Nature)
  • image p272fig06.18 The pARTSCAN model can learn [view, position, size]-invariant categories by adding view category integrator cells that have the properties of persistent neurons in IT. These integrator cells get reset with the invariant object category, not the view category.
    || pARTSCAN: positionally-invariant object learning. (Cao, Grossberg, Markowitz 2011). IT cells with persistent activities are modeled by view category integrators in ITp. View-specific category cells are RESET as the eyes move within the object. View category integrator cells are NOT RESET when the view-specific category is reset. They are RESET along with invariant object category cells when a spatial attention shift occurs.
  • image p272fig06.19 The various parts of this figure explain why persistent activity is needed in order to learn positionally-invariant object categories, and how this fails when persistent activity is not available. See the text for details.
    ||
  • image p273fig06.20 pARTSCAN can simulate the IT cell recoding that Li and DiCarlo reported in their swapping experiments because the swapping procedure happens without causing a parietal reset burst to occur. Thus the originally activated invariant category remains activated and can get associated with the swapped object features.
    || Simulation of Li and DiCarlo swapping data. data (Li and DiCarlo 2008), model (Cao, Grossberg, Markowitz 2011). normalized response vs. exposure (swaps and/or hours)
  • image p274fig06.21 pARTSCAN can also simulate the trade-off in IT cell responses between position invariance and selectivity that was reported by Zoccolan etal 2007. This trade-off limits the amount of position invariance that can be learned by a cortical area like V1 that is constrained by the cortical magnification factor.
    || Trade-off in IT cell response properties. Inferotemporal cortex cells with greater position invariance respond less selectively to natural objects. invariance-tolerance, selectivity-sparseness. data (Zoccolan etal 2007) model (Grossberg, Markowitz, Cao 2011). position tolerance (PT, degrees) vs sparseness (S)
  • image p274fig06.22 pARTSCAN can simulate how IT cortex processes image morphs, when it learns with high vigilance. See the text for details.
    || Akrami etal simulation: a case of high vigilance. tested on morphs between image pairs
  • image p275fig06.23 Data from (Akrami etal 2009) and our simulation of it. See the text for details.
    || IT responses to image morphs. data vs model
  • image p275fig06.24 Left and right eye stereogram inputs are constructed to generate percepts of objects in depth. These percepts include the features of the objects, not only their relative depths, a property that is not realized in some other models of stereopsis. See the text for details.
    || Stereogram surface percepts: surface lightnesses are segregated in depth (Fang, Grossberg 2009). Contrast with algorithms that just compute disparity matches and let computer code build the surface, eg (Marr, Poggio 1974)
  • image p276fig06.25 In addition to the gain field that predictively maintains a shroud in head-centered coordinates during saccades, there are gain fields that predictively maintain binocular boundaries in head-centered coordinates so that they can maintain binocular fusion during saccades and control the filling-in of surfaces in retinotopic coordinates.
    || Surface-shroud resonance.
  • image p277fig06.26 Gain fields also enable predictive remapping that maintains binocular boundary fusion as the eyes move between objects. See the text for details.
    || Predictive remapping maintains binocular boundary fusion even as eyes move between objects. retinotopic boundary -> invariant boundary (binocular)
  • image p278fig06.27 A surface-shroud resonance through the Where stream enables us to consciously see an object while a feature-category resonance into the What stream enables us to recognize it. Both kinds of resonances can synchronize via visual cortex so that we can know what an object is when we see it.
    || What kinds of resonances support knowing vs seeing? What stream [knowing, feature-prototype resonance], Where stream [seeing, surface-shroud resonance]
  • image p278fig06.28 If the feature-category resonances cannot form, say due to a lesion in IT, then a surface-shroud resonance can still support conscious seeing of an attended object, and looking at or reaching for it, even if the individual doing so knows nothing about the object, as occurs during visual agnosia. The surface-shroud resonance supports both spatial attention and releases commands that embody the intention to move towards the attended object.
    || What kinds of resonances support knowing vs seeing? visual agnosia: reaching without knowing Patient DF (Goodale etal 1991). Attention and intention both parietal cortical functions (Anderson, Essick, Siegel 1985; Gnadt, Andersen 1988; Snyder, Batista, Andersen 1997, 1998)
  • image p283fig07.01 The usual boundary processing stages of [simple, complex, hypercomplex, bipole] cells enable our brains to correct uncontrolled persistence of previously excited cells just by adding habituative transmitter gates, or MTM traces, at appropriate places in the network.
    || Boundary processing with habituative gates. spatial competition with habituative gates, orientational competition: gated dipole, bipole grouping
  • image p284fig07.02 Psychophysical data (top row) and simulation (bottom row) of how persistence decreases with flash illuminance and duration.
    || Persistence data and simulations. (Francis, Grossberg, Mingolla 1994 Vision Research, 34, 1089-1104). Persistence decreases with flash illuminance and duration (Bowen, Pola, Matin 1974; Breitmeyer 1984; Coltheart 1980). Higher luminance or longer duration habituates the gated dipole ON channel more. Causes larger and faster rebound in the OFF channel to shut persisting ON activity off.
  • image p285fig07.03 Persistence decreases with flash illuminance and duration due to the way in which habituative transmitters regulate the strength of the rebound in response to offset of a stimulating input, and how this rebound inhibits previously activated bipole cells.
    || Persistence data and simulations (Francis, Grossberg, Mingolla 1994 Vision Research, 34, 1089-1104). Persistence decreases with flash illuminance and duration. Horizontal input excites a horizontal bipole cell, which supports persistence. Offset of the horizontal input causes a rebound of activity in the vertical pathway, which inhibits the horizontal bipole cell, thereby terminating persistence.
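A minimal sketch of the habituative gate behind these rebounds, using the standard transmitter law dz/dt = A(B - z) - T S z; the tonic arousal level and parameter values are illustrative:

```python
import numpy as np

def gated_dipole_rebound(J_on, dt=0.001, A=1.0, B=1.0, T=5.0, tonic=0.5):
    """ON and OFF channels share tonic arousal; J_on drives only ON.
    Each channel's signal S is gated by its transmitter z, which
    habituates faster the more strongly the channel is driven."""
    z_on = z_off = B
    rebound = np.zeros(len(J_on))
    for t in range(len(J_on)):
        S_on, S_off = tonic + J_on[t], tonic
        z_on += dt * (A * (B - z_on) - T * S_on * z_on)
        z_off += dt * (A * (B - z_off) - T * S_off * z_off)
        rebound[t] = max(S_off * z_off - S_on * z_on, 0.0)  # OFF wins only transiently
    return rebound

J = np.zeros(4000); J[:2000] = 1.0        # input on for 2 s, then off
r = gated_dipole_rebound(J)
print(r[:2000].max(), r[2000:].max())     # no rebound during input; rebound at offset
```

During the input the ON channel wins even though it habituates more; at offset the less-depleted OFF transmitter makes the OFF channel transiently win, and that antagonistic rebound is what resets persisting boundaries.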
  • image p286fig07.04 Illusory contours persist longer than real contours because real contours have more inducers whose rebound at contour offset can cause faster boundary reset. Illusory contours also take longer to form than real contours, which explains the increasing portion of the curve.
    || Persistence data and simulations (Meyer, Ming 1988; Reynolds 1981). Increasing portion of curve is due to formation time of the illusory contour. Longer persistence is due to fewer bottom-up inducers of an illusory contour that has the same length as a real contour: only illuminance-derived edges generate reset signals. When bottom-up inducers are inhibited by OFF cell rebounds, their offset gradually propagates to the center of the illusory contour.
  • image p286fig07.05 This figure shows the propagation through time of illusory contour offset from the rebounded cells that got direct inputs to the center of the contour.
    || Persistence data and simulations. Illusory contours persist longer than real contours (Meyer, Ming 1988; Reynolds 1981). When bottom-up inducers are inhibited by OFF cell rebounds, their offset gradually propagates to the center of the illusory contour.
  • image p287fig07.06 The relative durations of persistence that occur due to an adaptation stimulus of the same or orthogonal orientation follow from the properties of the habituative gated dipoles that are embedded in the boundary completion system.
    || Persistence data and simulations. Change in persistence depends on whether adaptation stimulus has same or orthogonal orientation as test grating (Meyer, Lawson, Cohen 1975). If adaptation stimulus and test stimulus have the same orientation, they cause cumulative habituation, which causes a stronger reset signal, hence less persistence. When they are orthogonal, the competition on the ON channel is less, hence more persistence.
  • image p287fig07.07 Persistence increases with distance between a target and a masking stimulus due to weakening of the spatial competition in the first competitive stage of hypercomplex cells.
    || Persistence data and simulations. Persistence increases with distance between a target and a masking stimulus (Farrell, Pavel, Sperling 1990). There is less spatial competition from the masker to the target when they are more distant, hence the target is more persistent.
  • image p290fig08.01 Motion in a given direction pools all possible contrast-sensitive sources of information that are moving in that direction.
    ||
  • image p291fig08.02 Complex cells can respond to motion in opposite directions and from features with opposite contrast polarities.
    ||
  • image p292fig08.03 The MacKay and waterfall illusion aftereffects dramatically illustrate the different symmetries that occur in the orientational form stream and the directional motion stream.
    || Form and motion aftereffects. different inhibitory symmetries govern orientation and direction. illusions: [Form- MacKay 90°, Motion- waterfall 180°]. stimulus, aftereffect percept
  • image p293fig08.04 Most local motion signals on a moving object (red arrows) may not point in the direction of the object's motion.
  • image p295fig08.05 The perceived direction of an object is derived either from a small subset of feature tracking signals, or by voting among ambiguous signals when feature tracking signals are not available.
    || Aperture problem. Barberpole illusion (Wallach). How do sparse feature tracking signals capture so many ambiguous motion signals to determine the perceived motion direction?
  • image p296fig08.06 In the simplest example of apparent motion, two dots turning on and off out of phase in time generate a compelling percept of continuous motion between them.
    || Simplest long-range motion paradigm. ISI- interstimulus interval, SOA- stimulus onset asynchrony
  • image p296fig08.07 When two flashes turn on and off out of phase with the correct range of interstimulus intervals, and not too far from one another, then either beta motion or phi motion is perceived.
    || Beta and Phi motion percepts. Beta motion: percepts of continuous motion of a well-defined object across empty intervening space. Phi motion: sense of "pure" motion without a concurrent percept of moving object. (Exner 1875) http://www.yorku.ca/eye/balls.htm
  • image p297fig08.08 When a second flash is more intense than the first flash, then apparent motion may occur from the second to the first flash.
    || Delta motion: motion from the second to the first flash. Data: (Kolers 1972; Korte 1915). Simulation: (Grossberg, Rudd 1992). This occurs when the luminance or contrast of the second flash is large compared to that of the first flash. Sustained and transient cells obey shunting dynamics whose averaging rates speed up with output intensity. The first flash to wane is the one that will be the source of the G-wave.
  • image p297fig08.09 Simulation of motion in opposite directions that is perceived when two later flashes occur on either side of the first flash.
    || Split motion. Data: (H.R. Silva 1926), Simulation: (Grossberg, Rudd 1992)
  • image p298fig08.10 Simulation of the motion speed-up that is perceived when flash duration decreases.
    || "The less you see it, the faster it moves". Data: (Giaschi, Anstis 1989), Simulation: (Grossberg, Rudd 1992). ISI = 0, flash duration decreases; SOA = constant, flash duration decreases
  • image p298fig08.11 This formotion percept is a double illusion due to boundary completion in the form stream followed by long-range apparent motion using the completed boundaries in the motion stream.
    || Form-motion interactions. Apparent motion of illusory contours (Ramachandran 1985). Double illusion! Illusory contour is created in form stream V1-V2. Apparent motion of illusory contours occurs in motion stream due to a V2-MT interaction.
  • image p300fig08.12 A single flash activates a Gaussian receptive field across space whose maximum is chosen by a winner-take-all recurrent on-center off-surround network.
    || Gaussian receptive fields are sufficient! (Grossberg, Rudd 1992). Single flash. Suppose that a single flash causes a narrow peak of activity at the position where it occurs. It generates output signals through a Gaussian filter that produces a Gaussian activity profile at the next processing stage. A recurrent on-center off-surround network chooses the maximum activity and suppresses smaller activities. Winner-take-all
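As a sketch of this stage: a unit flash filtered through the Gaussian kernel G_ij = exp(-(j - i)^2 / (2 K^2)) yields a profile whose maximum a recurrent winner-take-all network then chooses (abbreviated here to an argmax):

```python
import numpy as np

positions = np.arange(100)

def gaussian_profile(flash_pos, K=10.0):
    # Response of the next stage to a unit flash at flash_pos
    return np.exp(-(positions - flash_pos)**2 / (2 * K**2))

profile = gaussian_profile(40)
print(positions[np.argmax(profile)])  # 40: a single flash never moves
```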
  • image p300fig08.13 As a flash waxes and wanes through time, so too do the activities of the cells in its Gaussian receptive field. Because the maximum of each Gaussian occurs at the same position, nothing is perceived to move.
    || Temporal profile of a single flash. Suppose that a single flash quickly turns on to maximum activity, stays there for a short time, and then shuts off. It causes an increase in activity, followed by an exponential decay of activity. The corresponding Gaussian profile waxes and wanes through time. Since the peak position of the Gaussian does not change through time, nothing moves.
  • image p300fig08.14 Visual inertia depicts how the effects of a flash decay after the flash shuts off.
    || Inertia (%) vs ISI (msec)
  • image p301fig08.15 If two flashes occur in succession, then the cell activation that is caused by the first one can be waning while the activation due to the second one is waxing.
    || Temporal profile of two flashes. If two flashes occur in succession, the waning of the activity due to the first flash may overlap with the waxing of the activity due to the second flash.
  • image p301fig08.16 The sum of the waning Gaussian activity profile due to the first flash and the waxing Gaussian activity profile due to the second flash has a maximum that moves like a travelling wave from the first to the second flash.
    || Travelling wave (G-wave): long-range motion. If the Gaussian activity profiles of two flashes overlap sufficiently in space and time, then the sum of Gaussians produced by the waning of the first flash added to the Gaussian produced by the waxing of the second flash, can produce a single-peaked travelling wave from the position of the first flash to that of the second flash. The wave is then processed through a WTA choice network (Winner Take All). The resulting continuous motion percept is both long-range and sharp.
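Summing a waning and a waxing Gaussian really does move the chosen peak continuously, as a few lines of simulation show. Decay rate A, kernel width K, and flash separation L below are illustrative, with L <= 2K:

```python
import numpy as np

A, K, L = 2.0, 6.0, 10.0                  # L <= 2K, so a G-wave exists
w = np.linspace(-5.0, 15.0, 2001)         # positions at the filtered stage
for t in np.linspace(0.1, 2.0, 5):
    x0 = np.exp(-A * t)                   # first flash waning after its offset
    xL = 1.0 - np.exp(-A * t)             # second flash waxing
    G = (x0 * np.exp(-w**2 / (2 * K**2))
         + xL * np.exp(-(w - L)**2 / (2 * K**2)))
    print(f"t={t:.2f}  WTA peak at w={w[np.argmax(G)]:.2f}")
```

The printed winner-take-all peak sweeps continuously from w ≈ 0 to w ≈ L: a sharp, long-range motion signal between the flash locations.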
  • image p302fig08.17 An important constraint on whether long-range apparent motion occurs is whether the Gaussian kernel is broad enough to span the distance between successive flashes.
    || Motion speed-up with increasing distance: For a fixed ISI, how does perceived velocity increase with distance between the flashes? Gaussian filter : Gp = exp{ -(j-i)^2 / (2*K^2) }. The largest separation, L_crit, for which sufficient spatial overlap between two Gaussians centered at locations i and j will exist to support a travelling wave of summed peak activity is : L_crit = 2*K
  • image p302fig08.18 This theorem shows how far away (L), given a fixed Gaussian width, two flashes can be to generate a wave of apparent motion between them.
    || G-wave properties (Grossberg 1977). Let flashes occur at positions i=0 and i=L. Suppose that d[dt: x0] = -A*x0 + J0; d[dt: xL] = -A*xL + JL; Define G(w,t) ...; Theorem 1 max_w G(w,t) moves continuously through time from w=0 to w=L if and only if L <= 2*K.
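The elided definition of G(w,t) is presumably the sum of the two flash activities filtered through the Gaussian kernel defined above; this reconstruction follows the Grossberg-Rudd formulation and should be checked against the original:

```latex
G(w,t) \;=\; x_0(t)\, e^{-w^2 / 2K^2} \;+\; x_L(t)\, e^{-(w-L)^2 / 2K^2}
```

Theorem 1 then states that max_w G(w,t) moves continuously from w = 0 to w = L if and only if L <= 2K.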
  • image p303fig08.19 The dashed red line divides combinations of flash distance L and Gaussian width K into two regions of no apparent motion (above the line) and apparent motion (below the line).
    || No motion vs motion at multiple scales.
  • image p303fig08.20 The G-wave speeds up with the distance between flashes at a fixed delay, and has a consistent motion across multiple spatial scales.
    || G-wave properties (Grossberg 1977). Theorem 2 (Equal half-time property) The time at which the motion signal reaches position w=L/2. Apparent motion speed-up with distance: this half-time is independent of the distance L between the two flashes. Consistent motion across scales: half-time is independent of the scale size K. Method of proof: elementary algebra and calculus (Grossberg, Rudd 1989 appendix)
  • image p304fig08.21 A computer simulation of the equal half-time property whereby the apparent motions within different scales that respond to the same flashes all reach the half-way point in the motion trajectory at the same time.
    || Equal half-time property: how multiple scales cooperate to generate motion percept. Travelling waves from Gaussian filters of different sizes bridge the same distance in comparable time. The time needed to bridge half the distance between flashes is the same.
  • image p304fig08.22 Data (top image) and simulation (bottom image) of Korte's laws.
  • image p305fig08.23 Despite its simplicity, the Ternus display can induce one of four possible percepts, depending on the ISI.
    || Ternus motion. ISI [small- stationary, intermediate- element, larger- group] motion http://en.wikipedia.org/wiki/Ternus_illusion
  • image p305fig08.24 When each stimulus has an opposite contrast relative to the background, element motion is eliminated and replaced by group motion at intermediate values of the ISI.
    || Reverse-contrast Ternus motion. ISI [small- stationary, intermediate- group (not element!), larger- group] motion.
  • image p306fig08.25 The Motion BCS model can explain and simulate all the long-range apparent motion percepts that this chapter describes.
    || Motion BCS model (Grossberg, Rudd 1989, 1992) Level 1: discount illuminant; Level 2: short-range filter, pool sustained simple cell inputs with like-oriented receptive fields aligned in a given direction. Sensitive to direction-of-contrast; Level 3: Transient cells with unoriented receptive fields. Sensitive to direction-of-change
  • image p306fig08.26 The 3D FORMOTION model combines mechanisms for determining the relative depth of a visual form with mechanisms for both short-range and long-range motion filtering and grouping. A formotion interaction from V2 to MT is predicted to enable the motion stream to track objects moving in depth.
    || 3D Formotion model (Chey etal 1997; Grossberg etal 2001; Berzhanskaya etal 2007). Form [LGN contours -> simple cells orientation selectivity -> complex cells (contrast pooling, orientation selectivity, V1) -> hypercomplex cells (end-stopping, spatial sharpening) <-> bipole cells (grouping, cross-orientation competition) -> depth-separated boundaries (V2)], Motion: [LGN contours -> transient cells (directional stability, V1) -> short-range motion filter -> spatial competition -> long-range motion filter and boundary selection in depth (MT) <-> directional grouping, attentional priming (MST)]
  • image p307fig08.27 The distribution of transients through time at onsets and offsets of Ternus display flashes helps to determine whether element motion or group motion will be perceived.
    || Ternus motion. Element motion: zero or weak transients at positions 2 and 3; Group motion: strong transients at positions 2 and 3. Conditions that favor visual persistence and thus perceived stationarity of elements (2,3) favor element motion (Braddick, Adlard 1978; Breitmeyer, Ritter 1986; Pantle, Petersik 1980)
  • image p308fig08.28 The Gaussian distributions of activity that arise from the three simultaneous flashes in a Ternus display add to generate a maximum value at their midpoint. The motion of this group gives rise to group motion.
    || Ternus group motion simulation. If L < 2*K, Gaussian filter of three flashes forms one global maximum.
  • image p310fig08.29 When the individual component motions in (A) and (B) combine into a plaid motion (C), both their perceived direction and speed changes.
    ||
  • image p311fig08.30 The data of (Castet etal 1993) in the left image was simulated in the right image by the 3D FORMOTION model that I developed with my PhD student Jonathan Chey. These data provide insight into how feature tracking signals propagate from the ends of a line to its interior, where they capture consistent motion directional signals and inhibit inconsistent ones.
    || Solving the aperture problem. A key design problem: How do amplified feature tracking signals propagate within depth to select the correct motion directions at ambiguous positions? This propagation from feature tracking signals to the line interior determines perceived speed in Castet etal data, which is why speed depends on line tilt and length. Data: (Castet etal 1993), Simulation: (Chey etal 1997)
  • image p311fig08.31 Processing stages of the Motion BCS convert locally ambiguous motion signals from transient cells into a globally coherent percept of object motion, thereby solving the aperture problem.
    || Why are so many motion processing stages needed? change sensitive receptors -> directional transient cells -> directional short-range filter -> spatial and directional competition -> directional long-range filter (MT) <-> Directional grouping network
  • image p312fig08.32 Schematic of motion filtering circuits.
    || Level 1: Change sensitive units -> Level 2: transient cells -> Level 3: short-range spatial filters -> Level 4: intra-scale competition -> Level 5: inter-scale competition
  • image p312fig08.33 Processing motion signals by a population of speed-tuned neurons.
    ||
  • image p314fig08.34 The VISTARS model for visually-based spatial navigation. It uses the Motion BCS as a front end and feeds it output signals into two computationally complementary cortical processing streams for computing optic flow and target tracking information.
    || VISTARS navigation model (Browning, Grossberg, Mingolla 2009). Use FORMOTION model as front end for higher level navigational circuits: input natural image sequences -> estimate heading (MT+)-MSTd -> additive processing -> estimate object position (MT-)-MSTv direction and speed subtractive processing -> Complementary Computing. [optic flow navigation, object tracking]
  • image p315fig08.35 The output signals from the directional grouping network obey the ART Matching Rule. They thereby select consistent motion directional signals while suppressing inconsistent ones, and do not distort the speed estimates that the spared cells code. The aperture problem is hereby solved by the same mechanism that dynamically stabilizes the learning of directional grouping cells.
    || How to select correct direction and preserve speed estimates? Prediction: Feedback from MSTv to MT- obeys ART Matching Rule; Top-down, modulatory on-center, off-surround network (Grossberg 1976, 1980; Carpenter, Grossberg 1987, 1991); Explains how directional grouping network can stably develop and how top-down directional attention can work. (Cavanagh 1992; Goner etal 1986; Sekuler, Ball 1977; Stelmach etal 1994). Directional grouping network (MSTv) <-> Directional long-range filter (MT). Modulatory on-center selects chosen direction and preserves speed. Off-surround inhibits incompatible directions.
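A toy version of the predicted MSTv-to-MT feedback, writing the modulatory on-center as a gain of 1 (preserve, never create activity from nothing) and the off-surround as a fractional suppression; the gain values are illustrative:

```python
import numpy as np

def directional_grouping_feedback(mt_activity, winner, surround_gain=0.2):
    """ART Matching Rule as multiplicative gating: the on-center leaves the
    chosen direction's activity (and thus its speed estimate) untouched,
    while the off-surround suppresses incompatible directions."""
    gain = np.full_like(mt_activity, surround_gain)
    gain[winner] = 1.0
    return mt_activity * gain

mt = np.array([0.3, 0.9, 0.4, 0.2])       # MT direction cells at one position
print(directional_grouping_feedback(mt, winner=1))  # [0.06 0.9 0.08 0.04]
```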
  • image p316fig08.36 How the directional grouping network, notably properties of the ART Matching Rule, enables a small set of amplified feature tracking signals at the ends of a line to select consistent directions in the line interior, while suppressing inconsistent directions.
    || Motion capture by directional grouping feedback. Directional grouping network (MSTv) <-> Directional long-range filter (MT). It takes longer to capture ambiguous motion signals in the line interior as the length of the line increases cf (Castet etal 1993)
  • image p317fig08.37 Processing stages that transform the transient cell inputs in response to a tilted moving line into a global percept of the object's direction of motion.
  • image p319fig08.38 The neurophysiological data from MT (left image) confirms the prediction embodied in the simulation of MT (right image) concerning the fact that it takes a long time for MT to compute an object's real direction of motion.
  • image p320fig08.39 Simulation of the barberpole illusion direction field at two times. Note that the initial multiple directions due to the feature tracking signals at the contiguous vertical and horizontal sides of the barberpole (upper image) get supplanted by the horizontal direction of the two horizontal sides (lower image).
    || Barberpole illusion (one line) simulation
  • image p321fig08.40 Visible occluders capture the boundaries that they share with moving edges. Invisible occluders do not. Consequently, the two types of motions are influenced by different combinations of feature tracking signals.
    || Motion grouping across occluders (J. Lorenceau, D. Alais 2001). Rotating contours observed through apertures. Determine direction of a circular motion. [, in]visible occluders http://persci.mit.edu/demos/square/square.html
  • image p322fig08.41 A percept of motion transparency can be achieved by using motion grouping feedback that embodies the "asymmetry between near and far" along with the usual opponent competition between opposite motion directions.
    || Motion transparency. near: big scale; far: small scale MSTv, "Asymmetry between near and far" Inhibition from near (large scales) to far (small scales) at each position
  • image p323fig08.42 The chopsticks illusion not only depends upon how feature tracking signals are altered by visible and invisible occluders, but also upon how the form system disambiguates the ambiguous region where the two chopsticks intersect and uses figure-ground mechanisms to separate them in depth.
    || Chopsticks: motion separation in depth (Anstis 1990). [, in]visible occluders [display, percept]
  • image p324fig08.43 Attention can flow along the boundaries of one chopstick and enable it to win the orientation competition where the two chopsticks cross, thereby enabling bipole grouping and figure-ground mechanisms to separate them in depth within the form cortical stream.
    || The ambiguous X-junction. motion system. Attention propagates along chopstick and enhances cell activations in one branch of a chopstick. MT-MST directional motion grouping helps to bridge the ambiguous position.
  • image p325fig08.44 Attentional feedback from MST-to-MT-to-V2 can strengthen one branch of a chopstick (left image). Then bipole cell activations that are strengthened by this feedback can complete that chopstick's boundaries.
  • image p325fig08.45 The feedback loop between MT/MST-to-V1-to-V2-to-MT/MST enables a percept of two chopsticks sliding one in front of the other while moving in opposite directions.
    || Closing formotion feedback loop. [formotion interaction, motion grouping] V1 -> V2 -> (MT <-> MST) -> V1
  • image p326fig08.46 How do we determine the relative motion direction of a part of a scene when it moves with a larger part that determines an object reference frame?
    || How do we perceive relative motion of object parts?
  • image p327fig08.47 Two classical examples of part motion in a moving reference frame illustrate the general situation where complex objects move while their multiple parts may move in different directions relative to the direction of the reference frame.
    || Two kinds of percepts and variations (Johansson 1950). Symmetrically moving inducers: each dot moves along a straight path, each part contributes equally to common motion; Duncker wheel (Duncker 1929): one dot moves on a cycloid, the other dot (the "center") moves straight, unequal contribution from parts; If the dot is presented alone: seen as cycloid; if with center: seen as if it were on the rim of a wheel.
  • image p328fig08.48 How vector subtraction from the reference frame motion direction computes the part directions.
    || How vector decomposition can explain them. Common motion subtracted from retinal motion gives part motion: [retinal, common, part] motion
  • image p328fig08.49 A directional peak shift in a directional hypercolumn determines the part directions relative to a moving reference frame.
    || What is the mechanism of vector decomposition? (Grossberg, Leveille, Versace 2011). Prediction: directional peak shift! ...specifically, a peak shift due to Gaussian lateral inhibition. [retinal, part, common, relative] motion. shunting dynamics, self-normalization, contrast gain control
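The predicted peak shift can be sketched with Gaussian direction tuning plus subtractive Gaussian inhibition centered on the common (frame) direction. The tuning widths and inhibition weight below are chosen only to make the shift visible, not fitted to the model:

```python
import numpy as np

dirs = np.arange(8) * 45.0                      # preferred directions (deg)

def tuning(center, sigma):
    d = np.abs(dirs - center)
    d = np.minimum(d, 360.0 - d)                # circular distance
    return np.exp(-d**2 / (2.0 * sigma**2))

retinal = tuning(45.0, sigma=40.0)              # dot's retinal motion: up-right
inhib = 1.2 * tuning(0.0, sigma=60.0)           # inhibition at the common (rightward) direction
part = np.maximum(retinal - inhib, 0.0)         # shifted peak = perceived part motion
print(dirs[np.argmax(retinal)], "->", dirs[np.argmax(part)])  # 45.0 -> 90.0 (up)
```

Subtracting the rightward common component shifts the population peak from 45° (up-right) to 90° (up), which is exactly the vector decomposition the figure describes.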
  • image p329fig08.50 The common motion direction of the two dots builds upon illusory contours that connect the dots as they move through time. The common motion direction signal can flow along these boundaries.
    || How is common motion direction computed? retinal motion. Bipole grouping in the form stream creates illusory contours between the dots. V2-MT formotion interaction injects the completed boundaries into the motion stream where they capture consistent motion signals. Motion of illusory contours is computed in the motion stream: cf. Ramachandran
  • image p329fig08.51 Large and small scale boundaries differentially form illusory contours between the dots and boundaries that surround each of them respectively. These boundaries capture the motion signals that they will support via V2-to-MT formotion interaction. The MST-to-MT directional peak shift has not yet occurred.
    || Large scale: near. Can bridge gap between dots to form illusory contours. Spatial competition inhibits inner dot boundaries.; Small scale: far. Forms boundaries around dots.
  • image p330fig08.52 Direction fields of the object frame (left column) and of the two dot "parts" (right column) show the correct motion directions after the peak shift top-down expectation acts.
    || Simulation of motion vector decomposition. [Larger scale (nearer depth), Small scale (farther depth)] vs [Down, Up]
  • image p330fig08.53 Simulation of the various directional signals of the left dot through time. Note the amplification of the downward directional signal due to the combined action of the short-range and long-range directional signals.
    ||
  • image p331fig08.54 The simulated part directions of the rotating dot through time after the translational motion of the frame does its work via the top-down peak shift mechanism.
    || Cycloid. Motion directions of a single dot moving slowly along a cycloid curve through time.
  • image p331fig08.55 The rightward motion of the dot that determines the frame propagates along the illusory contour between the dots and thereby dominates the motion directions along the rim as well, thereby setting the stage for the peak shift mechanism.
    || Duncker Wheel: large scale. [cycloid, center] velocity -> rightward common velocity. Stable rightward motion at the center captures motion at the rim.
  • image p332fig08.56 Simulation of the Duncker Wheel motion through time. See the text for details.
    || Duncker Wheel: small scale. Temporal procession of activity in eight directions. Wheel motion as seen when directions are collapsed.
  • image p332fig08.57 The MODE model uses the Motion BCS as its front end, followed by a saccadic target selection circuit in the model LIP region that converts motion directions into movement directions. These movement choices are also under basal ganglia (BG) control. More will be explained about the BG in Chapters 13 and 15.
    || MODE (MOtion DEcision) model (Grossberg, Pilly 2008, Vision Research). Change sensitive receptors -> directional transient cells -> directional short-range filter -> spatial and directional competition -> directional long-range filter (MT) <-> directional grouping network (MSTv) -> saccadic target selection <-> gating mechanism (BG). Representation of problem that solves the aperture problem (change sensitive receptors (CSR) -> directional grouping network (DGN, MSTv)). Gated movement choice (saccadic target selection & gating mechanism)
  • image p333fig08.58 Neurophysiological data (left image) and simulation (right image) of LIP data during correct trials on the RT task. See the text for details.
    || LIP responses during RT task correct trials (Roitman, Shadlen 2002). More coherence in favored direction causes faster cell activation. More coherence in opposite direction causes faster cell inhibition. Coherence stops playing a role in the final stages of LIP firing.
  • image p334fig08.59 Neurophysiological data (left column) and simulations (right column) of LIP responses for the FD task during both [correct, error] trials. See the text for details.
    || LIP responses for the FD task during both [correct, error] trials (Shadlen, Newsome 2001). LIP encodes the perceptual decision regardless of the true direction of the dots. Predictiveness of LIP responses on error trials decreases with increasing coherence.
  • image p334fig08.60 Behavioral data (left image) and simulation (right image) about accuracy in both the RT and FD tasks. See text for details
    || Behavioral data: % correct vs % coherence (Mazurek etal 2003; Roitman, Shadlen 2002). More coherence in the motion causes more accurate decisions. RT task accuracy at weaker coherence levels is slightly better than FD task accuracy.
  • image p335fig08.61 Behavioral data (left image) and simulation (right image) about speed in correct and error trials of the RT task. See text for details.
    || Behavioral data: speed, correct and error trials (RT task) (Roitman, Shadlen 2002). More coherence in the motion causes faster reaction time.
  • image p335fig08.62 More remarkable simulation fits (right column) to LIP neurophysiology data (left column) about where and when to move the eyes.
    || LIP encodes not only where, but also when, to move the eyes. ...No Bayes (Roitman, Shadlen 2002). Firing rate (sp/s) vs time (ms). Slope of firing rate (sp/s^2) vs % correct.
  • image p338fig09.01 The brain regions that help to use visual information for navigating in the world and tracking objects are highlighted in yellow.
    || How does a moving observer use optic flow to navigate while tracking a moving object? [What ventral, Where dorsal] retina -> many locations -> PFC
  • image p338fig09.02 Heading, or the direction of self-motion (green dot), can be derived from the optic flow (red arrows) as an object, in this case an airplane landing, moves forward.
    || Heading and optic flow (Gibson 1950). Optic flow: scene motion generates a velocity field. Heading: direction of travel- self-motion direction. Heading from optic flow, focus of expansion (Gibson 1950). Humans determine heading accurately to within 1-2 degrees.
  • image p339fig09.03 When an observer moves forward, an expanding optic flow is caused. Eye rotations cause a translating flow. When these flows are combined, a spiral flow is caused. How do our brains compensate for eye rotations to compute the heading of the expanding optic flow?
    || Optic flow during navigation (adapted from Warren, Hannon 1990) [observer, retinal flow]: [linear movement, expansion], [eye rotation, translation], [combined motion, spiral]
  • image p339fig09.04 This figure emphasizes that the sum of the expansion and translation optic flows is a spiral optic flow. It thereby raises the question: How can the translation flow be subtracted from the spiral flow to recover the expansion flow?
    || Eye rotations add a uniform translation to a flow field. Resulting retinal patterns are spirals. Expansion + translation = spiral
  • image p340fig09.05 An outflow movement command, also called efference copy or corollary discharge, is the source of the signals whereby the commanded eye movement position is subtracted from spiral flow to recover expansion flow and, with it, heading.
    || Subtracting efference copy. Many experiments suggest that the brain internally subtracts the translational component due to eye movements. Efference copy subtracts the translational component using pathways that branch from outflow movement commands to the eye muscles.
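The subtraction itself is linear and easy to sketch: model the retinal flow as an expansion field plus the uniform translation caused by eye rotation, then subtract the efference-copy estimate of that translation (variable names are illustrative):

```python
import numpy as np

xs, ys = np.meshgrid(np.linspace(-1, 1, 5), np.linspace(-1, 1, 5))
heading = np.array([0.2, 0.0])                      # true focus of expansion
expansion = np.stack([xs - heading[0], ys - heading[1]])
translation = np.array([0.5, 0.1])[:, None, None]   # uniform flow from eye rotation
retinal = expansion + translation                   # the "spiral" pattern on the retina
recovered = retinal - translation                   # efference copy subtraction
print(np.abs(recovered - expansion).max())          # 0.0: expansion flow recovered
```

The focus of expansion of the recovered field, and hence heading, can then be read off as in Figure 9.02.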
  • image p340fig09.06 Corollary discharges are computed using a branch of the outflow movement commands that move their target muscles.
    ||
  • image p340fig09.07 Log polar remapping from the retina to cortical area V1 and beyond converts expansion, translation, and spiral flows on the retina into parallel flows, with different orientations, on the cortical map.
    || Log polar remapping of optic flow. retina -> cortex. Any combination of expansion and circular motion centered on the fovea maps to cortex as a single direction. Retinal Cartesian coordinates (x,y) map to cortical log polar coordinates (log r, theta). This makes it easy to compute directional receptive fields in the cortex!
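A quick check of this property: under the map (x, y) -> (log r, theta), every point of a radial expansion moves by the same cortical vector, so expansion becomes a single parallel flow direction on the map:

```python
import numpy as np

def log_polar(x, y):
    """Retinal Cartesian (x, y) -> cortical log polar (log r, theta)."""
    return np.log(np.hypot(x, y)), np.arctan2(y, x)

for r in [0.5, 1.0, 2.0]:                 # points at different eccentricities
    u1, v1 = log_polar(r, 0.0)
    u2, v2 = log_polar(r * 1.1, 0.0)      # each point moves 10% outward
    print(round(u2 - u1, 4), round(v2 - v1, 4))  # same (log 1.1, 0) everywhere
```

A pure rotation would instead shift only theta, again by the same amount everywhere, giving a parallel flow of a different cortical orientation.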
  • image p341fig09.08 How the various optic flows on the retina are mapped through V1, MT, and MSTd to then compute heading in parietal cortex was modeled by (Grossberg, Mingolla, Pack 1999), using the crucial transformation via V1 log polar mapping into parallel cortical flow fields.
    || MSTd model (Grossberg, Mingolla, Pack 1999). Retinal motion -> V1 log polar mapping -> Each MT Gaussian RF sums motion in preferred direction -> Each MSTd cell sums MT cell inputs with same log polar direction -> Efference copy subtracts rotational flow from MSTd cells.
  • image p341fig09.09 Responses of MSTd cells that are used to compute heading. See the text for details.
    || Cortical area MSTd (adapted from Graziano, Andersen, Snowden 1994). MSTd cells are sensitive to spiral motion as combinations of rotation and expansion.
  • image p342fig09.10 Model simulations of how the peak of MSTd cell activation varies with changes of heading.
    || Heading in log polar space: Retina -> log polar -> MSTd cell. Log polar motion direction correlates with heading eccentricity.
  • image p342fig09.11 Psychophysical data (left panel) and computer simulation (right column) of the importance of efference copy in real movements. See the text for details.
    || Heading: move to wall and fixate stationary object (adapted from Warren, Hannon 1990). Inaccurate for simulated eye rotation, accurate for real eye rotation, need confirmation by efference copy!
  • image p343fig09.12 Transforming two retinal views of the Simpsons into log polar coordinates dramatizes the problem that our brains need to solve in order to separate, and recognize, overlapping figures.
    || View 1 cortical magnification. View 2 How do we know if we are still fixating on the same object?!
  • image p343fig09.13 When one scans the three different types of pears in the left image, as illustrated by the jagged blue curve with red movement end positions, and transforms the resulting retinal images via the cortical magnification factor, or log polar mapping, the result is the series of images in the right column. How do our brains figure out from such confusing data which views belong to which pear?
    || View-invariant object learning and recognition. Three pears: Anjou, Bartlett, Comice. Which is the Bartlett pear? During unsupervised scanning and learning about the world, no one tells the brain what views belong to which objects while it learns view-invariant object categories. Cortical magnification in V1.
  • image p344fig09.14 (top row, left column) By fitting MT tuning curves with Gaussian receptive fields, a tuning width of 38° is estimated, and leads to the observed standard spiral tuning of 61° in MSTd. (bottom row, left column) The spiral tuning estimate in Figure 9.16 maximizes the position invariance of MSTd receptive fields. (top row, right column) Heading sensitivity is not impaired by these parameter choices.
    || [Spiral tuning (deg), position invariance (deg^(-1)), heading sensitivity] versus log polar direction tuning σ (deg)
  • image p345fig09.15 Double opponent directional receptive fields in MT are capable of detecting the motion of objects relative to each other and their backgrounds.
    || Motion opponency in MT (Born, Tootell 1992). Motion opponent (Grossberg etal), Differential motion (Royden etal), Subtractive motion cells (Neumann etal). ON center directionally selective: [excit, inhibit]ed by motion in [one, opponent] direction. OFF surround directionally selective: [excit, inhibit]ed by motion in [opponent, center] direction.
  • image p346fig09.16 A macrocircuit of some of the main brain regions that are used to move the eyes. Black boxes denote areas belonging to the saccadic eye movement system (SAC), white boxes the smooth pursuit eye movement system (SPEM), and gray boxes, both systems. The abbreviations for the different brain regions are: LIP - Lateral Intra-Parietal area; FPA - Frontal Pursuit Area; MST - Middle Superior Temporal area; MT - Middle Temporal area; FEF - Frontal Eye Fields; NRPT - Nucleus Reticularis Tegmenti Pontis; DLPN - Dorso-Lateral Pontine Nuclei; SC - Superior Colliculus; CBM - CereBelluM; MVN/rLVN - Medial and Rostro-Lateral Vestibular Nuclei; PPRF - Paramedian Pontine Reticular Formation; TN - Tonic Neurons
    ||
  • image p347fig09.17 The leftward eye movement control channel in the model that I developed with Christopher Pack. See the text for details.
    || retinal image -> MT -> MST[v,d] -> pursuit
  • image p347fig09.18 These circuits between MSTv and MSTd enable predictive target tracking to be achieved by the pursuit system, notably when the eyes are successfully foveating a moving target. Solid arrows depict excitatory connections, dashed arrows depict inhibitory connections.
    ||
  • image p348fig09.19 How a constant pursuit speed that is commanded by MSTv cells starts by using target speed on the retina and ends by using backgound speed on the retina in the reverse direction during successful predictive pursuit.
    || target speed on retina, background speed on retina, pursuit speed command by MSTv cells
  • image p349fig09.20 Using virtual reality displays (left image), (Fajen, Warren 2003) collected data (right two images) about how observers avoid obstacles (open circular disks) as a function of their distance and angular position as they navigate towards a fixed goal (x). These data illustrate how goals act as attractors while obstacles act as repellers.
    || Steering from optic flow (Fajen, Warren 2003). goals are attractors, obstacles are repellers. Damped spring model explains human steering data.
  • image p349fig09.21 How attractor-repeller dynamics with Gaussians change the net steering gradient as the goal is approached.
    || Steering dynamics: goal approach. body-centered coordinates [obstacle, goal, heading] -> steering
  • image p350fig09.22 How the negative Gaussian of an obstacle causes a peak shift to avoid the obstacle without losing sight of how to reach the goal.
    || Steering dynamics: obstacle avoidance. body-centered coordinates [obstacle, goal, heading] -> steering
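A minimal attractor-repeller sketch in the spirit of the Fajen-Warren damped-spring account in the last three figures: goal attraction grows as goal distance shrinks, while obstacle repulsion decays with both angular offset and distance. The functional form follows their account, but the parameter names and values are illustrative, not the published fits:

```python
import numpy as np

def steering_rate(phi, goal_dir, obs_dir, goal_dist, obs_dist,
                  kg=1.0, ko=1.5, c1=0.4, c2=0.4, c3=1.0, c4=0.8):
    attract = -kg * (phi - goal_dir) * (np.exp(-c1 * goal_dist) + c2)
    repel = (ko * (phi - obs_dir)
             * np.exp(-c3 * abs(phi - obs_dir))    # falls off with angular offset
             * np.exp(-c4 * obs_dist))             # falls off with obstacle distance
    return attract + repel

# Obstacle slightly left of the current heading toward the goal: steer right.
print(steering_rate(phi=0.0, goal_dir=0.0, obs_dir=-0.2,
                    goal_dist=5.0, obs_dist=2.0))  # positive: turn away from obstacle
```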
  • image p350fig09.23 Unidirectional transient cells respond to changes in all image contours as an auto navigates an urban scene while taking a video of it.
    || Unidirectional transient cells (Baloch, Grossberg 1997; Berzhanskaya, Grossberg, Mingolla 2007). Transient cells respond to leading and trailing boundaries. Transient cells response, driving video
  • image p351fig09.24 Directional transient cells respond most to motion in their preferred directions.
    || Directional transient cells. 8 directions, 3 speeds
  • image p351fig09.25 By the time MT+ is reached, directional transient cells and directional filters have begun to extract more global directional information from the image.
    || MT+ computes global motion estimate. Estimate global motion from noisy local motion estimates.
  • image p352fig09.26 The final stage of the model computes a beautiful expansion optic flow that permits an easy estimate of the heading direction, with an accuracy that matches that of human navigators.
    || The model generates accurate heading (Warren, Hannon 1990; Royden, Crowell, Banks 1994). Maximally active MSTd cell = heading estimate. Accuracy matches human data. Random dots [mean +-1.5°, worst +-3.8°], Random dots with rotation [accurate with rotations <1°/s, rotation increases, error decreases], OpenGL & Yosemite benchmark +-1.5°, Driving video +-3°.
  • image p354fig10.01 The laminar cortical circuit that realizes how we pay attention to an object sends signals from layer 6 of a higher cortical level to layer 6 of a lower cortical level and then back up to layer 4. This "folded feedback" circuit realizes a top-down, modulatory on-center, off-surround circuit that realizes the ART Matching Rule.
    || Top-down attention and folded feedback. Attentional signals also feed back into 6-to-4 on-center off-surround. 1-to-5-to-6 feedback path: Macaque (Lund, Booth 1975) cat (Gilbert, Wiesel 1979). V2-to-V1 feedback is on-center off-surround and affects layer 6 of V1 the most (Bullier etal 1996; Sandell, Schiller 1982). Attended stimuli enhanced, ignored stimuli suppressed. This circuit supports the predicted ART Matching Rule! [LGN, V[1,2][6->1]]
  • image p355fig10.02 Distinguishing processes of seeing vs knowing has been difficult because they interact so strongly.
    || Seeing vs. Knowing. Seeing and knowing [operate at different levels of the brain, use specialized circuits], but they [interact via feedback, use similar cortical designs, feedback is needed for conscious perception]. Cerebral Cortex: Seeing [V1-V4, MS-MST], Knowing [IT, PFC].
  • image p356fig10.03 Laminar computing achieves at least three basic properties of visual processing that have analogs in all biologically intelligent behaviors. These properties may be found in all cortical circuits in specialized form.
    || What does Laminar Computing achieve? 1. Self-stabilizing development and learning; 2. Seamless fusion of a) pre-attentive automatic bottom-up processing, b) attentive task-selective top-down processing; 3. Analog coherence: Solution of Binding Problem for perceptual grouping without loss of analog sensitivity. Even the earliest visual cortical stages carry out active adaptive information processing: [learn, group, attention]ing
  • image p357fig10.04 Laminar Computing achieves its properties by computing in a new way that synthesizes the best properties of feedforward and feedback interactions, analog and digital computations, and preattentive and attentive learning. The property of analog coherence enables coherent groupings and decisions to form without losing sensitivity to the amount of evidence that supports them.
    || Laminar Computing: a new way to compute. 1. Feedforward and feedback: a) Fast feedforward processing when data are unambiguous (eg Thorpe etal), b) slower feedback chooses among ambiguous alternatives [self-normalizing property, real-time probability theory], c) A self-organizing system that trades certainty against speed: Goes beyond Bayesian models! 2. Analog and Digital: Analog Coherence combines the stability of digital with the sensitivity of analog. 3. Preattentive and Attentive Learning: Reconciles the differences of (eg) Helmholtz and Kanizsa, "A preattentive grouping is its own attentional prime!"
  • image p359fig10.05 Activation of V1 is initiated, in part, by direct excitatory signals from the LGN to layer 4 of V1.
    || How are layer 2/3 bipole cells activated? Direct bottom-up activation of layer 4. LGN -> V1 layer 4. Strong bottom-up LGN input to layer 4 (Stratford etal 1996; Chung, Ferster 1998). Many details omitted.
  • image p359fig10.06 Another, albeit indirect, pathway from LGN exists that can also excite layer 4 of V1. Why are these two pathways not redundant? The answer, ultimately, has to do with how cortex learns, as well as with how it pays attention. See the text for details.
    || Another bottom-up input to layer 4: Why?? Layer 6-to-4 on-center off-surround (Grieve, Sillito 1991, 1995; Ahmed etal 1994, 1997). LGN projects to layers 6 and 4. Layer 6 excites spiny stellates in column above it. Medium range connections onto inhibitory neurons. 6-to-4 path acts as an on-center off-surround.
  • image p359fig10.07 The two bottom-up pathways from LGN to layer 4 of V1 can together activate layer 4 and contrast-normalize layer 4 responses.
    || Bottom-up contrast normalization (Grossberg 1968, 1973; Sperling, Sondhi 1968; Heeger 1992; Douglas etal 1995; Shapley etal 2004). Together, direct LGN-to-4 path and 6-to-4 on-center off-surround provide contrast normalization if cells obey shunting or membrane equation dynamics.
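The contrast-normalization claim follows directly from the shunting membrane equation. A small sketch, assuming the standard steady-state solution of dx/dt = -A*x + (B - x)*E - x*I:

    def shunting_steady_state(excite, inhibit, A=1.0, B=1.0):
        """Steady state of the membrane equation
           dx/dt = -A*x + (B - x)*E - x*I  =>  x = B*E / (A + E + I).
        With E the direct LGN input to a layer 4 cell and I the total
        6-to-4 off-surround input, responses stay bounded by B and are
        contrast-normalized."""
        return B * excite / (A + excite + inhibit)

    # doubling both inputs compresses, rather than doubles, the response
    print(shunting_steady_state(1.0, 2.0), shunting_steady_state(2.0, 4.0))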
  • image p360fig10.08 The bottom-up on-center off-surround from LGN-to-6-to-4 has a modulatory on-center because of its role in realizing the ART Matching Rule and, with it, the ability of the cortex to dynamically stabilize its learned memories.
    || Modulation of priming by 6-to-4 on-center (Stratford etal 1996; Callaway 1998). On-center 6-to-4 excitation is inhibited down to being modulatory (priming, subthreshold). On-center 6-to-4 excitation cannot activate layer 4 on its own. Clarifies need for direct path. Prediction: plays key role in stable grouping, development and learning. ART Matching Rule!
  • image p360fig10.09 Perceptual grouping is carried out in layer 2/3 by long-range horizontal excitatory recurrent connections, supplemented by short-range disynaptic inhibitory connections that together realize the bipole grouping properties that are diagrammed in Figure 10.10.
    || Grouping starts in layer 2/3. LGN-> 6-> 4-> 2/3: 1. Long-range horizontal excitation links collinear, coaxial receptive fields (Gilbert, Wiesel 1989; Bosking etal 1997; Schmidt etal 1997) 2. Short-range disynaptic inhibition of target pyramidal via pool of interneurons (Hirsch, Gilbert 1991) 3. Unambiguous groupings can form and generate feedforward outputs quickly (Thorpe etal 1996).
  • image p361fig10.10 Bipole grouping is achieved by long-range horizontal recurrent connections that also give rise to short-range inhibitory interneurons which inhibit nearby bipole cells as well as each other.
    || Bipole property controls perceptual grouping. Collinear input on both sides. Excitatory inputs summate. Inhibitory inputs normalize, Shunting inhibition! Two-against-one. Cell is excited.
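The "two-against-one" bipole property can be caricatured in a few lines: excitation from the two flanks summates, while inhibition from the interneuron pool approximately cancels any single flank, so one flank alone stays subthreshold but two flanks (or bottom-up input plus one flank) fire the cell. A hedged Python sketch; the threshold and the max-based inhibition are illustrative choices, not the model's exact shunting equations.

    def bipole_output(bottom_up, left_flank, right_flank, theta=0.5):
        """Bipole sketch: long-range excitation from two flanks summates;
        short-range disynaptic inhibition vetoes one-sided support
        ("two-against-one")."""
        excitation = bottom_up + left_flank + right_flank
        # inhibitory interneurons approximately cancel a single flank,
        # so one flank alone (without bottom-up input) stays subthreshold
        inhibition = max(left_flank, right_flank)
        return max(excitation - inhibition - theta, 0.0)

    print(bipole_output(0, 1, 1))   # both flanks: fires (inward completion)
    print(bipole_output(0, 1, 0))   # one flank alone: silent
    print(bipole_output(1, 0, 0))   # direct bottom-up input: fires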
  • image p362fig10.11 Feedback between layer 2/3 to the layer 6-to-4-to-2/3 feedback loop chooses the strongest grouping in cases where there is more than one. If only one grouping exists, then the circuit can function very quickly in a feedforward manner. When multiple groupings exist, the cortex "runs as fast as it can" to select the one with the most evidence to support it using the self-normalizing inhibition in the layer 6-to-4 off-surround.
    || How is the final grouping selected? Folded feedback LGN-> 6-> 4-> 2/3. 1. Layer 2/3 groupings feed back into 6-to-4 on-center off-surround: a) direct layer 2/3 -to-6 path; b) can also go via layer 5 (Blasdel etal 1985; Kisvarday etal 1989). 2. Strongest grouping enhanced by its on-center. 3. Inputs to weaker groupings suppressed by off-surround. 4. Interlaminar feedback creates functional columns. Activities of conflicting groupings are reduced by self-normalizing inhibition, slowing processing; intracortical feedback selects and contrast-enhances the winning grouping, speeding processing.
  • image p363fig10.12 The same laminar circuit design repeats in V1 and V2, albeit with specializations that include longer horizontal grouping axons and figure-ground separation interactions.
    || V2 repeats V1 circuitry at larger spatial scale, LGN-> V1[6,4,2/3]-> V2[6,4,2/3]. V2 layer 2/3 horizontal axons longer-range than in V1 (Amir etal 1993). Therefore, longer-range groupings can form in V2 (Von der Heydt etal 1984)
  • image p364fig10.13 The bottom-up adaptive filter, intracortical grouping circuit, and intercortical top-down attentional circuit all use the same competitive decision circuit between layers 6 and 4, called the attention-preattention interface, with which to select the featural patterns that will be processed.
    || Bottom-up filters and intracortical grouping feedback use the same 6-to-4 decision circuit, LGN-> Vx[6,4,2/3]. competitive decision circuit, modulatory on-center off-surround network. Top-down intercortical attention also uses the same 6-to-4 decision circuit!
  • image p364fig10.14 This figure emphasizes how preattentive intracortical groupings and top-down intercortical attention share the same modulatory on-center, off-surround layer 6-to-4 decision circuit.
    || Explanation: grouping and attention share the same modulatory decision circuit. Layer 6-to-4-to-2/3 pathway shown; there is also a layer 6-to-1-to-2/3 path. intercortical attention, both act via a modulatory on-center off-surround decision circuit, intracortical feedback from groupings
  • image p367fig10.15 Data (left column) and simulation (right column) of how attention prevents a masking stimulus from inhibiting the response to the on-center of the cell from which the recording was made.
    || Attention protects target from masking stimulus (Reynolds etal 1999; Grossberg, Raizada 2000).
  • image p367fig10.16 Neurophysiological data (left image) and simulation (right image) of how a low-contrast target can be facilitated if it is surrounded by a pair of collinear flankers, and suppressed by them if it has high contrast.
    || Flankers can enhance or suppress targets (Polat etal 1998; Grossberg, Raizada 2000). target alone, target + flankers, flankers alone.
  • image p368fig10.17 Neurophysiological data (left image) and simulation (right image) showing that attention has a greater effect on low contrast than high contrast targets.
    || Attention has greater effect on low contrast targets (DeWeerd etal 1999; Raizada, Grossberg 2001). Threshold increase (deg) vs Grating contrast (%), [no, with] attention
  • image p368fig10.18 Neurophysiological data (left image) and simulation (right image) of relative on-cell activities when the input to that cell may also be surrounded by iso-orientation or perpendicular textures.
    || Texture reduces response to a bar: iso-orientation suppression (Knierim, van Essen 1992), perpendicular suppression (Raizada, Grossberg 2001)
  • image p369fig10.19 Data from (Watanabe etal 2001) showing perceptual learning of the coherent motion direction, despite the lack of extra-foveal attention and awareness of the moving stimuli.
    || Unconscious perceptual learning of motion direction, % correct for two tests, compared to chance level results.
  • image p371fig11.01 FACADE theory explains how the 3D boundaries and surfaces are formed with which we see the world in depth.
    || 3D Vision and figure-ground perception (Grossberg 1987, 1994, 1997). How are 3D boundaries and 3D surfaces formed? How the world looks without assuming naive realism. Form And Color And DEpth theory (FACADE). Prediction: Visible figure-ground-separated Form-And-Color-And-DEpth are represented in cortical area V4.
  • image p372fig11.02 FACADE theory explains how multiple depth-selective boundary representations can capture the surface lightnesses and colors at the correct depths. The fact that both surface qualia and depth are determined by a single process implies that, for example, a change in brightness can cause a change in depth.
    || 3D surface filling-in. From filling-in of surface lightness and color to filling-in of surface depth. Prediction: Depth-selective boundary-gated filling-in defines the 3D surfaces that we see. Prediction: A single process fills-in lightness, color, and depth. Can a change in brightness cause a change in depth? YES! eg proximity-luminance covariance (Egusa 1983, Schwartz, Sperling 1983). Why is depth not more unstable when lighting changes? Prediction: Discounting the illuminant limits variability.
  • image p373fig11.03 Both contrast-specific binocular fusion and contrast-invariant boundary perception are needed to properly see the world in depth.
    || How to unify contrast-specific binocular fusion with contrast-invariant boundary perception? Contrast-specific binocular fusion: [Left, right] eye view [, no] binocular fusion. Contrast-invariant boundary perception: contrast polarity along the gray square edge reverses; opposite polarities are pooled to form object boundary.
  • image p374fig11.04 The three processing stages of monocular simple cells, binocular simple cells, and complex cells accomplish both contrast-specific binocular fusion and contrast-invariant boundary perception.
    || Model unifies contrast-specific binocular fusion and contrast-invariant boundary perception (Ohzawa etal 1990; Grossberg, McLoughlin 1997). [Left, right] eye V1-4 simple cells-> V1-3B simple cells-> V1-2/3A complex cells. Contrast-specific stereoscopic fusion by disparity-selective simple cells. Contrast-invariant boundaries by pooling opposite polarity binocular simple cells at complex cells in layer 2/3A.
  • image p374fig11.05 The brain uses a contrast constraint on binocular fusion to help ensure that only contrasts which are derived from the same objects in space are binocularly matched.
    || Contrast constraint on binocular fusion. Left and right input from same object has similar contrast, Percept changes when one contrast is different. Fusion only occurs between bars of similar contrast (McKee etal 1994)
  • image p375fig11.06 The contrast constraint on binocular fusion is realized by obligate cells in layer 3B of cortical area V1.
    || Model implements contrast constraint on binocular fusion (cf. "obligate" cells Poggio 1991). An ecological constraint on cortical development. [left, right] eye cart V1-[4 monocular simple, 3B binocular simple, 2/3A complex] cells. Inhibitory cells (red) ensure that fusion occurs when contrasts in left and right eye are approximately equal.
  • image p375fig11.07 The 3D LAMINART model uses both monocular and binocular simple cells to binocularly fuse like image contrasts. The remainder of the model generates 3D boundary and surface representations of multiple kinds of experiments as well as of natural scenes.
    || 3D LAMINART model (Cao, Grossberg 2005). [left, right] eye cart [LGN, V1 blob [4->3B->2/3A], V2 thin stripe [4->2/3A], V4]. V1 blob [V1-4 monocular, V1 interior binocular] simple cells. [complex, simple, inhibitory] cells, on-center off-surround
  • image p376fig11.08 The contrast constraint on binocular fusion is not sufficient to prevent many of the false binocular matches that satisfy this constraint.
    || How to solve the correspondence problem? How does the brain inhibit false matches? Contrast constraint is not enough. [stimulus, multiple possible binocular matches] - Which squares in the two retinal images must be fused to form the correct percept?
  • image p376fig11.09 The disparity filter in V2 helps to solve the correspondence problem by eliminating spurious contrasts using line-of-sight inhibition.
    || Model V2 disparity filter solves the correspondence problem. An ecological constraint on cortical development. [left, right] eye view: False matches (black) suppressed by line-of-sight inhibition (green lines). "Cells that fire together wire together".
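One way to picture the V2 disparity filter: arrange all candidate left-right feature pairings in a matrix and let matches that share a line of sight (a row or a column) compete until mostly unique, strong matches survive. The discrete-time relaxation below is a hypothetical Python sketch of line-of-sight inhibition, not the 3D LAMINART equations.

    import numpy as np

    def disparity_filter(match0, steps=30, rate=0.2):
        """Line-of-sight inhibition sketch for the correspondence problem.
        match0[l, r]: initial strength of fusing left-eye feature l with
        right-eye feature r. Matches sharing a left-eye row or right-eye
        column inhibit one another; false matches are suppressed."""
        x = match0.astype(float).copy()
        for _ in range(steps):
            row_rivals = x.sum(axis=1, keepdims=True) - x  # same left line of sight
            col_rivals = x.sum(axis=0, keepdims=True) - x  # same right line of sight
            x = np.clip(x + rate * (match0 - x - row_rivals - col_rivals),
                        0.0, None)
        return x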
  • image p376fig11.10 The 3D LAMINART model shows how the disparity filter can be integrated into the circuit that completes 3D boundary representations using bipole grouping cells. It also explains how surface contours can strengthen boundaries that succeed in generating closed filling-in domains.
    || 3D LAMINART model (Cao, Grossberg 2005). [left, right] eye cart [LGN, V1 blob [4->3B->2/3A] surface contour, V2 thin stripe (monocular surface) [4->2/3A], V2 interior [disynaptic inhibitory interneurons, bipole grouping cells, disparity filter, V4 binocular surface]. [complex, simple, inhibitory] cells, on-center off-surround
  • image p377fig11.11 DaVinci stereopsis phenomena occur when only one eye can receive visual inputs from part of a 3D scene due to occlusion by a nearer surface.
    || How does monocular information contribute to depth perception? DaVinci stereopsis (Gillam etal 1999). Only by utilizing monocular information can the visual system create the correct depth percept. [left, right] eye view
  • image p378fig11.12 How monocular and binocular information are combined in V1 and V2 in the 3D LAMINART model.
    || Model utilizes monocular information. [left, right] eye cart V1-[4 monocular simple, 3B binocular simple, complex2/3A [mo,bi]nocular] cells, V2-4 binocular complex cells. black = monocular cells, blue = binocular cells. In V2, monocular inputs add to binocular inputs along the line of sight and contribute to depth perception.
  • image p379fig11.13 How the 3D LAMINART model explains DaVinci stereopsis. All the stages of boundary and surface formation are color coded to clarify their explanation. Although each mechanism is very simple, when all of them act together, the correct depthful surface representation is generated. See the text for details.
    || DaVinci stereopsis (Nakayama, Shimojo 1990). An emergent property of the previous simple mechanisms working together. [V1 binocular boundaries -> V2 initial boundaries -> V2 final boundaries -> V4 surface] pair [Binocular match: boundaries of thick bar -> Add monocular boundaries along lines-of-sight -> Line-of-sight inhibition kills weaker vertical boundaries -> 3D surface percept not just a disparity match!] pair [Binocular match: right edge of thin and thick bars -> Strongest boundaries: binocular and monocular boundaries add -> Vertical boundaries from monocular left edge of thin bar survive -> Filling-in contained by connected boundaries]. cart [very near, near, fixation plane, far, very far]
  • image p380fig11.14 The model explanation of DaVinci stereopsis when the input stimuli have opposite contrast polarities.
    || Polarity-reversed Da Vinci stereopsis (Nakayama, Shimojo 1990). Same explanation! (... as Figure 11.13 ...) [V1 binocular boundaries -> V2 initial boundaries -> V2 final boundaries -> V4 surface] cart [very near, near, fixation plane, far, very far]
  • image p381fig11.15 The same model mechanisms explain the surface percept that is generated by the variant of DaVinci stereopsis that Gillam, Blackburn, and Nakayama studied in 1999.
    || DaVinci stereopsis (Gillam, Blackburn, Nakayama 1999). same model mechanisms. [V1 binocular boundaries -> V2 initial boundaries -> V2 final boundaries -> V4 surface] cart [very near, near, fixation plane, far, very far]
  • image p382fig11.16 The version of DaVinci stereopsis wherein three narrow rectangles are binocularly matched with one thick rectangle can also be explained in a similar way.
    || DaVinci stereopsis of [3 narrow, one thick] rectangles (Gillam, Blackburn, Nakayama 1999). [V1 binocular boundaries -> V2 initial boundaries -> V2 final boundaries -> V4 surface] cart [very near, near, fixation plane, far, very far]
  • image p383fig11.17 The bars in the left and right images that are in the same positions are marked in red to simplify tracking how they are processed at subsequent stages.
    || The Venetian blind effect (Howard, Rogers 1995). Every second bar on L in same position as every third bar on R. These bars are marked in red; see them match in Fixation Plane. [V1 binocular boundaries -> V2 initial boundaries -> V2 final boundaries -> V4 surface] cart [very near, near, fixation plane, far, very far]
  • image p384fig11.18 Surface and surface-to-boundary surface contour signals that are generated by the Venetian blind image.
    || Venetian blind effect (Howard, Rogers 1995). Every second bar on L in same position as every third bar on R. PERCEPT: 3-bar ramps sloping up from L to R with step returns. [V1 binocular boundaries -> V2 initial boundaries -> V2 final boundaries -> V4 surface] cart [very near, near, fixation plane, far, very far]
  • image p385fig11.19 Dichoptic masking occurs when the bars in the left and right images have sufficiently different contrasts.
    || Dichoptic masking (McKee, Bravo, Smallman, Legge 1994). [V1 binocular boundaries -> V2 initial boundaries -> V2 final boundaries -> V4 surface] cart [very near, near, fixation plane, far, very far]
  • image p387fig11.22 Simulation of the boundaries that are generated by the Julesz stereogram in Figure 4.59 (top row) without (second row) and with (third row) surface contour feedback.
    || Boundary cart [V2-2, V2, V1] cart [near, fixation, far]
  • image p388fig11.23 Simulation of the surface percept that is seen in response to a sparse stereogram. The challenge is to assign large regions of ambiguous white to the correct surface in depth.
    || [left, right] retinal input. Surface [near, fixation, far] V4
  • image p388fig11.24 Boundary groupings capture the ambiguous depth-ambiguous feature contour signals and lift them to the correct surface in depth.
    || [surface, boundary] cart [near, fixation, far] V2.
  • image p389fig11.25 Boundaries are not just edge detectors. If they were, a shaded ellipse would look flat, and uniformly gray.
    || 3D vision and figure-ground separation. Multiple-scale, depth-selective boundary webs. [dark-light, light-dark] boundaries -> complex cells! If boundaries were just edge detectors, there would be just a bounding edge of the ellipse. After filling-in, it would look like this:.
  • image p390fig11.26 Although larger scales sometimes look closer (left image), that is not always true, as the right image of (Brown, Weisstein 1988) illustrates. The latter percept is, moreover, bistable. These images show the importance of interactions between groupings and multiple scales to determine perceived surface depths.
    || Multiple-scale depth-selective groupings determine perceived depth (Brown, Weisstein 1988). As an object approaches, it gets bigger on the retina. Does a big scale (RF) always signal NEAR? NO! The same scale can signal either near or far. Some scales fuse more than one disparity.
  • image p391fig11.27 (left image) Each scale can binocularly fuse a subset of disparities, with larger scales fusing more disparities, and closer ones, than small scales. (right image) Cortical hypercolumns enable binocular fusion to occur in a larger scale even as rivalry occurs in a smaller scale.
    || Multiple-scale grouping and size-disparity correlation. Depth-selective cooperation and competition among multiple scales determines perceived depth: a) Larger scales fuse more depths; b) Simultaneous fusion and rivalry. Boundary pruning using surface contours: Surface-to-boundary feedback from the nearest surface that is surrounded by a connected boundary eliminates redundant boundaries at the same position and further depths.
  • image p391fig11.28 (left image) Ocular dominance columns respond selectively to inputs from one eye or the other. (right image) Inputs from the two eyes are mapped into layer 4C of V1, among other layers.
    || Cortex V1[1, 2/3, 4A, 4B, 4C, 5, 6], LGN
  • image p392fig11.29 Boundary webs of the smallest scales are closer to the boundary edge of the ellipse, and progressively larger scale webs penetrate ever deeper into the ellipse image, due to the amount of evidence that they need to fire. Taken together, they generate a multiple-scale boundary web with depth-selective properties that can capture depth-selective surface filling-in.
    || 3D vision and figure-ground separation. Multiple-scale, depth-selective boundary webs. Instead, different size detectors generate dense boundary webs at different positions and depths along the shading gradient. Small-far, Larger-nearer, Largest-nearest. Each boundary web captures the gray shading in small compartments at its position and depths. A shaded percept in depth results.
  • image p392fig11.30 Multiple scales interact with bipole cells that represent multiple depths, and conversely. See the text for details.
    || How multiple scales vote for multiple depths. Scale-to-depth and depth-to-scale maps. Smallest scale projects to, and receives feedback from, boundary groupings that represent the furthest depths. Largest scale connects to boundary groupings that represent all depths. multiple-[depth, scale] dot [grouping, filter] cells. [small <-> large] vs [far <-> near]
  • image p393fig11.31 (Todd, Akerstrom 1987) created a series of 2D images from discrete black patches on a white disk and showed how the perceived depth varies with the factors summarized in the figure. The LIGHTSHAFT model quantitatively simulated their data.
    || Factors determining depth-from-texture percept. Perceived depth varies with texture element width, but only when elements are elongated and sufficiently aligned with one another to form long-range groupings. Data of (Todd, Akerstrom 1987) simulated by the LIGHTSHAFT model of (Grossberg, Kuhlmann 2007). [HP, LP, CCE, CCS, RO]
  • image p393fig11.32 Kulikowski stereograms involve binocular matching of out-of-phase (a) Gaussians or (b) rectangles. The latter can generate a percept of simultaneous fusion and rivalry. See the text for why.
    ||
  • image p394fig11.33 The Kaufman stereogram also creates a percept of simultaneous fusion and rivalry. The square in depth remains fused and the perpendicular lines in the two images are perceived as rivalrous.
    || 3D groupings determine perceived depth, stereogram (Kaufman 1974). Vertical illusory contours are at different disparities than those of bounding squares. Illusory square is seen in depth. Vertical illusory contours are binocularly fused and determine the perceived depth of the square. Thin, oblique lines, being perpendicular, are rivalrous: simultaneous fusion and rivalry.
  • image p395fig11.34 A comparison of the properties of other rivalry models with those of the 3D LAMINART model (surrounded by red border). Significantly, only 3D LAMINART explains both stable vision and rivalry (green border).
    || Comparison of rivalry models
  • image p396fig11.35 Three properties of bipole boundary grouping in V2 can explain how boundaries oscillate in response to rivalry-inducing stimuli. Because all boundaries are invisible, however, these properties are not sufficient to generate a conscious percept of rivalrous surfaces.
    || 3 V2 boundary properties cause binocular rivalry. 1. Bipole grouping, 2. Orientational competition, 3. Activity-dependent habituation
  • image p397fig11.36 Simulation of the temporal dynamics of rivalrous, but coherent, boundary switching.
    || Simulation of 2D rivalry dynamics. [Inputs, Temporal dynamics of V2 layer 2/3 boundary cells] cart [left, right]
  • image p398fig11.37 Simulation of the no swap baseline condition of (Logothetis, Leopold, Sheinberg 1996).
    || [Binocular, [left, right] eye] activity
  • image p399fig11.38 Simulation of the swap condition of (Logothetis, Leopold, Sheinberg 1996).
    || [Binocular, [left, right] eye] activity
  • image p399fig11.39 Simulation of the eye rivalry data of (Lee, Blake 1999).
    || [Binocular, [left, right] eye] activity
  • image p400fig11.40 When planar 2D parallelograms are juxtaposed, the resultant forms generate 3D percepts that are sensitive to the configuration of angles and edges in the figure. See the text for why.
    || 3D representation of 2D images, Monocular cues (eg angles) can interact together to yield 3D interpretation. Monocular cues by themselves are often ambiguous. Same angles and shapes, different surface slants. How do these ambiguous 2D shapes contextually define a 3D object form?
  • image p401fig11.41 The 3D LAMINART model proposes how angle cells and disparity-gradient cells interact through learning to generate 3D representations of slanted objects.
    || 3D LAMINART model. [LGN, V1, V2, V4] Four key additions: 1. Angle cells - tuned to various angles; 2. Disparity-gradient cells - tuned to disparity gradients in the image; 3. Weights from [angle to disparity-gradient] cells - learned while viewing 3D images; 4. Collinear grouping between [angle to disparity-gradient] cells - disambiguates ambiguous groupings.
  • image p401fig11.42 A hypothetical cortical hypercolumn structure proposes how angle cells and disparity-gradient cells, including bipole cells that stay within a given depth, may self-organize during development.
    || Hypercolumn representation of angles [left, right] cart [far-to-near, zero, near-to-far]
  • image p402fig11.43 A pair of disparate images of a scene from the University of Tsukuba Multiview Image Database.
    || input [left, right]
  • image p402fig11.44 3D scenic reconstruction of the image in Figure 11.43 by the 3D LAMINART model.
    || Disparity [5, 6, 8, 10, 11, 14]: images of objects in common depth planes
  • image p403fig11.45 The multiple boundary and surface scales that were used to simulate a reconstruction of the SAR image in Figure 3.24.
    || SAR processing by multiple scales. [boundaries before completion, boundaries after completion, surface filling-in] versus scale [small, medium, large]. large scale bipole
  • image p405fig12.01 A What ventral cortical stream and Where/How dorsal cortical stream have been described for audition, no less than for vision.
    || Parietal lobe: where; Temporal lobe: what. V1-> [[what: IT], [where: PPC-> DLPFC]]. A1-> [[what: [ST-> VLPFC], VLPFC], [where: [PPC-> DLPFC], DLPFC]].
  • image p407fig12.03 Neurophysiological data showing how motor cortical cells code different vectors that are sensitive to both the direction of the commanded movement and its length.
    || (a) Single primary motor cortex neuron, onset of movement -> on..., radial architecture... (b) Motor cortex neuronal population, radial architecture...
  • image p409fig12.04 (top half) Neurophysiological data of vector cell responses in motor cortex. (bottom half) VITE model simulations of vector cell dynamics during a simple movement.
  • image p410fig12.05 VITE simulation of velocity profile invariance if the same GO signal gates shorter (a) or longer (b) movements. Note the higher velocities in (b).
    || [[short, long] cart [G, dP/dt]] vs time. G = GO signal, dP/dt = velocity profile.
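The VITE velocity-profile properties in this figure follow from two equations: the difference vector D tracks T - P, and the present position P integrates D at a rate gated by the GO signal G. A minimal Python sketch under those assumptions (the rectification [D]^+ models one agonist channel; the opponent antagonist channel of Figure 12.08 handles the opposite sign):

    import numpy as np

    def vite_trajectory(T, P0, go, alpha=4.0, dt=0.01, steps=600):
        """VITE sketch: dD/dt = alpha*(-D + T - P), dP/dt = G(t)*[D]^+.
        T: target position vector; P0: initial present position;
        go(t): scalar GO signal, e.g. a slowly growing volition signal."""
        P = np.array(P0, float)
        D = np.zeros_like(P)
        traj = []
        for k in range(steps):
            t = k * dt
            D += dt * alpha * (-D + T - P)         # difference vector stage
            P += dt * go(t) * np.maximum(D, 0.0)   # outflow gated by GO
            traj.append(P.copy())
        return np.array(traj)

    # same GO signal, different target distances -> movement speed scales
    # so that normalized velocity profiles are approximately invariant
    traj = vite_trajectory(np.array([1.0]), [0.0], go=lambda t: min(4*t, 2.0))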
  • image p411fig12.07 The left column simulation by VITE shows the velocity profile when the GO signal (G) starts with the movement. The right column shows that the peak velocity is much greater if a second movement begins when the GO signal is already positive.
    || Higher peak velocity due to target switching. VITE simulation of higher peak speed if second target rides on first GO signal. [[first, second] target cart [G, dP/dt]] vs time. Second target GO is much higher. G = GO signal, dP/dt = velocity profile.
  • image p411fig12.08 Agonist-antagonist opponent organization of difference vector (DV) and present position vector (PPV) processing stages and how GO signals gate them.
    ||
  • image p412fig12.09 How a Vector Associative Map, or VAM, model uses mismatch learning during its development to calibrate inputs from a target position vector (T) and a present position vector (P) via mismatch learning of adaptive weights at the difference vector (D). See the text for details.
    || Vector Associative Map model (VAM). During the critical period, the Endogenous Random Generator (ERG+) turns on, activates P, and causes random movements that sample the workspace. When ERG+ shuts off, posture occurs. ERG- then turns on (rebound) and opens the Now Print (NP) gate, which dumps P into T. Mismatch learning enables adaptive weights between T and D to change until D (the mismatch) approaches 0. Then T and P are both correctly calibrated to represent the same positions.
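A toy version of VAM mismatch learning, assuming a linear map W from T to the difference-vector stage: after the Now Print gate copies P into T, training drives D = W@T - P toward zero, calibrating the two vectors. Everything here (the linearity, the learning rate, the sample format) is an illustrative assumption.

    import numpy as np

    def vam_calibrate(samples, eta=0.1, epochs=50):
        """VAM mismatch-learning sketch. samples: list of (T, P) pairs
        gathered during ERG-driven babbling, with T == P after the
        Now Print copy. Weights W in the T-to-D path adapt until the
        difference vector D approaches 0."""
        dim = len(samples[0][0])
        rng = np.random.default_rng(0)
        W = np.eye(dim) + 0.1 * rng.standard_normal((dim, dim))  # miscalibrated map
        for _ in range(epochs):
            for T, P in samples:
                D = W @ T - P                 # mismatch at the DV stage
                W -= eta * np.outer(D, T)     # learning reduces the mismatch
        return W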
  • image p413fig12.10 Processing stages in cortical areas 4 and 5 whereby the VITE model combines outflow VITE trajectory formation signals with inflow signals from the spinal cord and cerebellum that enable it to carry out movements with variable loads and in the presence of obstacles. See the text for details.
    || area 4 (rostral) <-> area 5 (caudal).
  • image p414fig12.11 Neurophysiological data from cortical areas 4 and 5 (every other column) and simulations thereof (other columns) during a reach.
    || activation vs time. (a) area 4 phasic RT (IFV) (b) area 4 tonic (OPV) (c) area 4 phasic-tonic (OFPV) (d) area 4 phasic MT (DVV) (e) area 5 phasic (DV) (f) area 5 tonic (PPV)
  • image p415fig12.12 The combined VITE, FLETE, cerebellar, and multi-joint opponent muscle model for trajectory formation in the presence of variable forces and obstacles.
    ||
  • image p416fig12.13 The DIRECT model learns, using a circular reaction that is energized by an Endogenous Random Generator, or ERG, to make motor-equivalent volitionally-activated reaches. This circular reaction learns a spatial representation of a target in space. It can hereby make accurate reaches with clamped joints and on its first try using a tool under visual guidance; see Figure 12.16.
    || DIRECT model (Bullock, Grossberg, Guenther 1993). learns by circular reaction. learns a spatial representation to mediate between vision and action. motor-equivalent reaching. can reach target with clamped joints. can reach target with a TOOL on the first try under visual guidance. How did tool use arise?!
  • image p416fig12.14 Computer simulations of DIRECT reaches with (b) a tool, (c) a clamped elbow, and (d) with a blindfold, among other constraints.
    || Computer simulations of DIRECT reaches [unconstrained, with TOOL, elbow clamped at 140°, blindfolded]
  • image p417fig12.15 The DIRECT and DIVA models have homologous circuits to learn and control motor-equivalent reaching and speaking, with tool use and coarticulation as resulting properties. See the text for why.
    || From Seeing and Reaching to Hearing and Speaking, Circular reactions (Piaget 1945, 1951, 1952). Homologous circuits for development and learning of motor-equivalent REACHING and SPEAKING. DIRECT TOOL use (Bullock, Grossberg, Guenther 1993), DIVA Coarticulation (Guenther 1995)
  • image p418fig12.16 Anatomical interpretations of the DIVA model processing stages.
    || [Feedforward control system (FF), Feedback control subsystem (FB)]. Speech sound map (Left Ventral Premotor Cortex (LVPC)), Cerebellum, Articulatory velocity and position maps (Motor Cortex (MC)), Somatosensory Error Map (Inferior Parietal Cortex (IPC)), Auditory Error Map (Superior Temporal Cortex (STC)), Auditory State Map (Superior Temporal Cortex), Somatosensory State Map (Inferior Parietal Cortex), articulatory musculature via subcortical nuclei, auditory feedback via subcortical nuclei
  • image p419fig12.17 The auditory continuity illusion illustrates the ART Matching Rule at the level of auditory streaming. Its "backwards in time" effect of future context on past conscious perception is a signature of resonance.
    || Auditory continuity illusion. input, percept. Backwards in time - How does a future sound let a past sound continue through noise? Resonance! - It takes a while to kick in. After it starts, a future tone can maintain it much more quickly. Why does this not happen if there is no noise? - ART Matching Rule! TD harmonic filter is modulatory without BU input. It cannot create something out of nothing.
  • image p420fig12.18 The ARTSTREAM model explains and simulates the auditory continuity illusion as an example of a spectral-pitch resonance. Interactions of ART Matching Rule and asymmetric competition mechanisms in cortical strip maps explain how the tone selects the consistent frequency from the noise in its own stream while separating the rest of the noise into another stream.
    || ARTSTREAM model (Grossberg 1999; Grossberg, Govindarajan, Wyse, Cohen 2004). SPINET. Frequency and pitch strips. Bottom Up (BU) harmonic sieve. Top Down (TD) harmonic ART matching. Exclusive allocation. Learn pitch categories based on early harmonic processing. A stream is a Spectral-Pitch Resonance!
  • image p422fig12.19 The ARTSTREAM model includes mechanisms for deriving streams both from pitch and from source direction. See the text for details.
    || [left, right] cart Peripheral processing = [input signal-> outer & middle ear preemphasis-> basilar membrane gammatone filterbank-> energy measure]. Spectral stream layer-> spectral summation layer-> delays-> [f-, tau] plane-> pitch stream layer-> pitch summation layer.
  • image p423fig12.20 The Spatial Pitch Network, or SPINET, model shows how a log polar spatial representation of the sound frequency spectrum can be derived from auditory signals occurring in time. The spatial representation allows the ARTSTREAM model to compute spatially distinct auditory streams.
    || SPINET model (Spatial Pitch Network) (Cohen, Grossberg, Wyse 1995). 1. input sound 2. Gamma-tone filter bank 3. Short-term average energy spectrum 4. MAP transfer function 5. On-center off-surround and rectification 6. Harmonic weighting 7. Harmonic summation and competition -> PITCH
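Steps 6-7 of the SPINET outline (harmonic weighting, then harmonic summation) amount to letting each candidate pitch node sum spectral energy near its harmonics. A hypothetical Python sketch of that sieve-and-sum idea; the window widths and the 1/k weighting are illustrative:

    import numpy as np

    def pitch_strength(freqs, energy, f0, n_harmonics=8, tol=0.03):
        """Harmonic weighting/summation sketch: a pitch node at f0 sums
        spectral energy near its harmonics k*f0. freqs, energy: arrays
        describing the short-term spectrum; tol is the relative
        half-width of each harmonic's window."""
        total = 0.0
        for k in range(1, n_harmonics + 1):
            window = np.abs(freqs - k * f0) < tol * k * f0
            if window.any():
                total += energy[window].max() / k   # weight falls with harmonic number
        return total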
  • image p424fig12.21 One of the many types of data about pitch processing that are simulated by the SPINET model. See the text for details.
    || Pitch shifts with component shifts (Patterson, Wightman 1976; Schouten 1962). Pitch vs lowest harmonic number.
  • image p424fig12.22 Decomposition of a sound (bottom row) in terms of three of its harmonics (top three rows).
    ||
  • image p425fig12.23 ARTSTREAM simulations of the auditory continuity illusion and other streaming properties (left column, top row). When two tones are separated by silence (Input), a percept of silence also separates them in a spectral-pitch resonance. (left column, bottom row). When two tones are separated by broadband noise, the percept of tone continues through the noise in one stream (stream 1) while the remainder of the noise occurs in a different stream (stream 2). (right column) Some of the other streaming properties that have been simulated by the ARTSTREAM model.
    || Auditory continuity does not occur without noise. Auditory continuity in noise. Other simulated streaming data.
  • image p426fig12.24 Spectrograms of /ba/ and /pa/ show the transient and sustained parts of their spectrograms.
    ||
  • image p428fig12.25 (left architecture) Auditory-articulatory feedback loop whereby babbled sounds activate learning in an imitative map that is later used to learn to reproduce the sounds of other speakers. An articulatory-to-auditory expectation renders learning possible by making the auditory and motor data dimensionally consistent, as in the motor theory of speech. (right architecture) Parallel streams in the ARTSPEECH model for learning speaker-independent speech and language meaning, including a mechanism for speaker normalization (right cortical stream) and for learning speaker-dependent vocalic qualities (left cortical stream).
    || left: Speaker-dependent vocalic qualities; right: Speaker-independent speech and language meaning
  • image p430fig12.26 The NormNet model shows how speaker normalization can be achieved using specializations of the same mechanisms that create auditory streams. See the text for how.
    || [Anchor vs Stream] log frequency map. -> diagonals-> Speaker-independent acoustic item information-> [BU adaptive filter, TD learned expectation]-> learned item recognition categories
  • image p431fig12.27 The strip maps that occur in ARTSTREAM and NormNet are variants of a cortical design that also creates ocular dominance columns in the visual cortex.
    || Adult organization of V1 (Grinvald etal http://www.weizmann.ac.il/brain/images/cubes.html). (1) Ocular dominance columns (ODCs): Alternating strips of cortex respond preferentially to visual inputs of each eye (R/L corresponds to Right and Left eye inputs in the figure); Orientation columns: A smooth pattern of changing orientation preference within each ODC. Organized in a pinwheel like fashion.
  • image p432fig12.28 (left image) The SpaN model simulates how spatial representations of numerical quantities are generated in the parietal cortex. (right image) Behavioral numerosity data and SpaN model simulations of it.
    || (Left) preprocessor-> spatial number map-> Comparison wave. (Right) data axis: number of lever presses; model axis: node position in the spatial number axis
  • image p433fig12.29 Learning of place-value number maps associates language categories in the What cortical stream with numerical strip maps in the Where cortical stream. See the text for details.
    || (1) spoken word "seven"-> (2) What processing stream- learned number category <-> (3) What-Where learned associations <- (4) Where processing stream- spatial number map <- (5) visual cues of seven objects
  • image p436fig12.30 The conscious ARTWORD, or cARTWORD, laminar cortical speech model simulates how future context can disambiguate noisy past speech sounds in such a way that the completed percept is consciously heard to proceed from past to future as a feature-item-list resonant wave propagates through time.
    || cARTWORD: Laminar cortical model macrocircuit (Grossberg, Kazerounian 2011) Simulates PHONEMIC RESTORATION: Cognitive Working Memory (processed item sequences) - [Excitatory-> inhibitory-> habituative-> adaptive filter-> adaptive filter-> adaptive filter with depletable synapse-> Acoustic [item, feature]
  • image p436fig12.31 Working memories do not store longer sequences of events in the correct temporal order. Instead, items at the beginning and end of the list are often recalled first, and with the highest probability.
    || Working memory. How to design a working memory to code "Temporal Order Information" in STM before it is stored in LTM. Speech, language, sensory-motor control, cognitive planning. eg repeat a telephone number unless you are distracted first. Temporal order STM is often imperfect, eg Free Recall. [probability, order] of recall vs list position. WHY?
  • image p437fig12.32 Data from a free recall experiment illustrate the bowed serial position curve.
    || Serial position function for free recall Data: (Murdock 1962 JEP 64, 482-488). % correct vs position of word on a 40-word list. Primacy gradient can be a mixture of STM and LTM read-out.
  • image p437fig12.33 Item and Order working memory models explain free recall data, as well as many other psychological and neurobiological data, by simulating how temporal series of events are stored as evolving spatial patterns of activity at content-addressable item categories. The categories with the largest activities are rehearsed first, and self-inhibit their activity as they do so in order to prevent them from being rehearsed perseveratively. The laws whereby the items are stored in working memory obey basic design principles that ensure that list categories, or chunks, of sequences of stored items can be stably remembered.
    || Working memory models: item and order, or competitive queuing (Grossberg 1978; Houghton 1990; Page, Norris 1998). Event sequence in time stored as an evolving spatial pattern of activity. Primacy gradient of working memory activation stores correct temporal order at content-addressable cells. Maximally activated cell population is performed next when a rehearsal wave is turned on. Output signal from chosen cell population inhibits its own activity to prevent perseveration: inhibition of return. Iterate until entire sequence is performed.
  • image p438fig12.34 The LTM Invariance Principle insists that words being stored in working memory for the first time (eg MYSELF) do not cause catastrophic forgetting of the categories that have already been learned for their subwords (eg MY, SELF, and ELF) or other subset linguistic groups.
    || LTM invariance principle. unfamiliar STM -> LTM familiar. How does STM storage of SELF influence STM storage of MY? It should not recode LTM of either MY or SELF!
  • image p439fig12.35 The Normalization Rule insists that the total activity of stored items in working memory has an upper bound that is approximately independent of the number of items that are stored.
    || Normalization Rule (Grossberg 1978). Total STM activity has a finite bound independent of the number of items (limited capacity of STM). Activity vs Items for [slow, quick] asymptotic energy growth.
  • image p439fig12.36 (1) Inputs to Item and Order working memories are stored by content-addressable item categories. (2) The relative activities of the item categories code the temporal order of performance. (3) In addition to excitatory recurrent signals from each working memory cell (population) to itself, there are also inhibitory recurrent signals to other working memory cells, in order to solve the noise-saturation dilemma. (4) A nonspecific rehearsal wave allows the most active cell to be rehearsed first. (5) As an item is being rehearsed, it inhibits its own activity using a feedback inhibitory interneuron. Perseverative performance is hereby prevented.
    || Item and order working memories. (1) Content-addressable item codes (2) Temporal order stored as relative sizes of item activities (3) Competition between working memory cells: Competition balances the positive feedback that enables the cells to remain active. Without it, cell activities may all saturate at their maximal values-> Noise saturation dilemma again! (4) Read-out by nonspecific rehearsal wave- Largest activity is the first out (5) STM reset self-inhibition prevents perseveration: [input/self-excitatory, rehearsal wave]-> [output, self-inhibition]
  • image p440fig12.37 Simulation of a primacy gradient for a short list (left image) being transformed into a bowed gradient for a longer list (right image). Activities of cells that store the longer list are smaller due to the Normalization Rule, which follows from the shunting inhibition in the working memory network.
    || Primacy bow as more items stored. [activities, final y] (Left) Primacy gradient 6 items (Right) Bowed gradient 20 items
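The storage and readout rules of Figures 12.33-12.37 can be caricatured in a few lines: each arriving item shunts the activities already stored and enters with a share of the remaining normalized capacity, which yields an approximate primacy gradient for short lists and a bow for longer ones; readout then performs the most active item first and self-inhibits it. The particular storage scheme below is an illustrative stand-in for the model's shunting dynamics, in Python:

    def store_item(wm, w=0.9, gamma=0.6, B=1.0):
        """Add one item to an Item-and-Order working memory sketch.
        Old activities are shunted by w; the new item enters with a
        share gamma of the remaining normalized capacity B
        (illustrative scheme, not the model's exact equations)."""
        wm = [a * w for a in wm]
        wm.append(gamma * (B - sum(wm)))
        return wm

    def rehearse(wm):
        """Competitive-queuing readout: a nonspecific rehearsal wave lets
        the most active item perform first; it then self-inhibits
        (inhibition of return), preventing perseveration."""
        wm, order = list(wm), []
        while any(a > 1e-9 for a in wm):
            i = max(range(len(wm)), key=wm.__getitem__)
            order.append(i)
            wm[i] = 0.0
        return order

    wm6 = []
    for _ in range(6):
        wm6 = store_item(wm6)
    print(rehearse(wm6))   # mostly forward order: primacy gradient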
  • image p441fig12.38 The LTM Invariance Principle is realized if the relative sizes of the inputs to the list chunk level stay the same as more items are stored in working memory. This property, in turn, follows from shunting previously stored working memory activities when a new item occurs.
    || LTM Invariance principle. Choose STM activities so that newly stored STM activities may alter the size of old STM activities without recoding their LTM patterns. In particular: New events do not change the relative activities of past event sequences, but may reduce their absolute activities. Why? Bottom-up adaptive filtering uses dot products: T(j) = sum[i=1 to n] x(i)*z(i,j) = total input to v(j). The relative sizes of inputs to coding nodes v(j) are preserved. x(i) -> w*x(i), 0 < w <= 1, leaves all past ratios T(j)/T(k) unchanged.
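The dot-product argument can be checked numerically: uniformly shunting the stored activities x by w rescales every chunk input T(j) by the same factor, leaving all ratios T(j)/T(k) intact. A few lines of Python:

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.array([0.8, 0.5, 0.3])      # STM activities of stored items
    Z = rng.random((3, 4))             # adaptive weights to 4 list chunks
    T = x @ Z                          # T(j) = sum_i x(i) * z(i, j)
    T_shunted = (0.7 * x) @ Z          # store a new item: shunt x by w = 0.7
    # relative inputs to all list chunks are unchanged, so no recoding
    assert np.allclose(T / T.sum(), T_shunted / T_shunted.sum())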
  • image p442fig12.39 (left column, top row) How a shunt plus normalization can lead to a bow in the stored working memory spatial pattern. Time increases in each row as every item is stored with activity 1 before it is shunted by w due to each successive item
  • image p442fig12.40 Given the hypothesis in Figure 12.39 (right column, bottom row) and a generalized concept of steady, albeit possibly decreasing, attention to each item as it is stored in working memory, only a primacy, or bowed gradient of activity across the working memory items can be stored.
    || LTM Invariance + Normalization. (... given conditions ...) Then the x(i) can ONLY form: [primacy gradient, recency gradient, unimodal bow]
  • image p443fig12.41 Neurophysiological data from the Averbeck etal sequential copying experiments show the predicted primacy gradient in working memory and the self-inhibition of activity as each item is performed. When only the last item remains stored, it has the highest activity because it has been freed from inhibition by earlier items.
    || Neurophysiology of sequential copying
  • image p444fig12.42 The LIST PARSE laminar cortical model of working memory and list chunking that I published with Lance Pearson in 2008 simulated the Averbeck etal data in Figure 12.41, as in the left column of the figure. It also simulated cognitive data about working memory storage by human subjects. See the text for details.
    || LIST PARSE: Laminar cortical model of working memory and list chunking (Grossberg, Pearson 2008). Simulates data about: [immediate, delayed, continuous-distractor] free recall; immediate serial recall; and variable-speed sequential performance of motor acts. [velocity, acceleration] vs time (ms) from recall cue.
  • image p445fig12.43 The LIST PARSE laminar cortical Cognitive Working Memory circuit, which is proposed to occur in ventrolateral prefrontal cortex, is homologous to the LAMINART circuit that models aspects of how visual cortex sees. The Motor Working Memory, VITE Trajectory Generator, and Variable-Rate Volitional Control circuits model how other brain regions, including dorsolateral prefrontal cortex, motor cortex, cerebellum, and basal ganglia, interact with the Cognitive Working Memory to control working memory storage and variable-rate performance of item sequences.
    || List parse circuit diagram. Connectivity convention. sequence chunks [<- BU filter, TD expectation ->] working memory. Working memory and sequence chunking circuit is homologous to visual LAMINART circuit!
  • image p446fig12.44 (left column, top row) LIST PARSE can model linguistic data from human subjects. In this figure, model parameters are fixed to enable a close fit to data about error-type distributions in immediate free recall experiments, notably transposition errors. (right column, top row) Simulation and data showing bowing of the serial position curve, including an extended primacy gradient. (left column, bottom row) The simulation curve overlays data about list length effects, notably the increasing recall difficulty of longer lists during immediate serial recall (ISR). (right column, bottom row) Simulation (bottom image) and data (top image) of the limited temporal extent for recall.
    || (1. TL) Error-type distributions in immediate serial recall (Hanson etal 1996). % occurrence vs serial position. Graph convention: Data- dashed lines; Simulations- solid lines. Six letter visual ISR. Order errors- transpositions of neighboring items are the most common. Model explanation: Noisy activation levels change relative order in primacy gradient. Similar activation of neighboring items most susceptible to noise. Model parameters fitted on these data. (2. TR) Bowing of serial position curve (Cowan etal 1999). % correct vs serial position. Auditory ISR with various list lengths (graphs shifted rightward): For [, sub-]span lists- extended primacy, with one (or two) item recency; Auditory presentation- enhanced performance for last items. LIST PARSE: End effects- first and last items have half as many neighbors; Echoic memory- last presented item retained in separate store. (3. BL) List length effects, circles (Crannell, Parrish 1968), squares (Baddeley, Hitch 1975), solid line- simulation. % list correct vs list length. Variable list length ISR: longer lists are more difficult to recall. LIST PARSE: More items- closer activation levels and lower absolute activity level with enough inputs; Noise is more likely to produce order errors; Activity levels more likely to drop below threshold. (4. BR) Limited temporal extent for recall (Murdock 1961). % recalled vs retention interval (s). ISR task with distractor-filled retention intervals (to prevent rehearsal): Increasing retention interval- decreases probability of recalling list correctly; Load dependence- longer lists more affected by delays; Performance plateau- subjects reach apparent asymptote. LIST PARSE: Increased convergence of activities with time; loss of order information.
  • image p447fig12.45 (left column) LIST PARSE simulations of the proportion of order errors as a function of serial position for 6 item lists with (a) an extended pause of 7 time units between the third and fourth items, and (b) pauses of 5 time units (solid curve) and 10 time units (dashed curve) between all items. (right column) Simulations (solid curves) and data (dashed curves) illustrating close model fits in various immediate free recall tasks.
    || (Left) Temporal grouping and presentation variability. Temporal grouping: Inserting an extended pause leads to inter-group bowing; Significantly different times of integration and activity levels across pause, fewer interchanges. (Right) Immediate free recall, and [delayed, continuous-distractor] free recall. Overt rehearsal IFR task with super-span (ie 20 item) lists: Extended recency- even more extended with shorter ISIs; Increased probability of recall with diminished time from last rehearsal; Early items in list rehearsed most. LIST PARSE (unique) for long lists: Incoming items form recency gradient; Rehearsal (re-presentation) based upon level of activity.
  • image p448fig12.46 A Masking Field working memory is a multiple-scale self-similar recurrent shunting on-center off-surround network. It can learn list chunks that respond selectively to lists of item chunks of variable length that are stored in an item working memory at the previous processing stage. Chunks that code for longer lists (eg MY vs MYSELF) are larger, and give rise to stronger recurrent inhibitory neurons (red arrows).
    || How to code variable length lists? MASKING FIELDS code list chunks of variable length (Cohen, Grossberg 1986, 1987; Grossberg, Kazerounian 2011, 2016; Grossberg, Myers 2000; Grossberg, Pearson 2008). Multiple-scale self-similar WM: Masking field, adaptive filter. Variable length coding- Masking fields select list chunks that are sensitive to WM sequences of variable length; Selectivity- Larger cells selectively code longer lists; Asymmetric competition- Larger cells can inhibit smaller cells more than conversely: Magic Number 7! Temporal order- different list chunks respond to the same items in different orders eg LEFT vs FELT.
  • image p449fig12.47 This figure illustrates the self-similarity in a Masking Field of both its recurrent inhibitory connections (red arrows) and its top-down excitatory priming signals (green arrows) to the item chunk working memory.
    || Both recurrent inhibition and top-down excitatory priming are self-similar in a masking field. MYSELF <-> [MY, MYSELF]
  • image p452fig12.48 (left column) In experiments of (Repp etal 1978), the silence duration between the words GRAY and SHIP was varied, as was the duration of the fricative noise in S, with surprising results. (right column) The red arrow directs our attention to surprising perceptual changes as silence and noise durations increase. See the text for details.
    || Perceptual integration of acoustic cues, data (Repp etal 1978). GRAY-> silence duration-> SHIP (noise duration from start of word). Noise duration vs silence duration: GRAY SHIP <-> [GREAT SHIP <-> GRAY CHIP] <-> GREAT CHIP.
  • image p453fig12.49 The ARTWORD model that I published in 2000 with my PhD student Christopher Myers simulates data such as the (Repp etal 1978) data in Figure 12.48. See the text for details.
    || ARTWORD model (Grossberg, Myers 2000). Input phonetic features-> Phonemic item working memory-> Masking Field unitized lists-> Automatic gain control-> Phonemic item working memory. [habituative gate, adaptive filter]s.
  • image p453fig12.50 The ARTWORD perception cycle shows how sequences of items activate possible list chunks, which compete among each other and begin to send their top-down expectations back to the item working memory. An item-list resonance develops through time as a result.
    || ARTWORD perception cycle. (a) bottom-up activation (b) list chunk competition (c) item-list resonance (d) chunk reset due to habituative collapse.
  • image p454fig12.51 (left column) Even as a resonance with the list chunk GRAY begins to develop, if the delay between "gray" and "chip" is increased, greater habituation of this resonance may allow the GREAT chunk to begin to win, thereby smoothly transferring the item-list resonance from GRAY to GREAT through time. (right column) Simulation of a resonant transfer from GRAY to GREAT, and back again, as the silence interval between the words "gray" and "chip" increases. The red region between the GRAY and GREAT curves calls attention to when GREAT wins. See the text for details.
    || Resonant transfer, as silence interval increases. (left) Delay GRAY resonance weakens. A delayed additional item can facilitate perception of a longer list. (right) GRAY-> GREAT-> GRAY.
  • image p455fig12.52 Simulation of cARTWORD dynamics in response to the complete list /1/-/2/-/3/. The relevant responses are surrounded by a red box.
    || Presentation of a normal sequence: input /1/-/2/-/3/. |c(i,1)-5| vs time (msec). List chunks select most predictive code. Order stored in WM layers. Resonant activity of /1/-/2/-/3/ in item and feature layers corresponds to conscious speech percept.
  • image p456fig12.53 Simulation of cARTWORD dynamics in response to the partial list /1/-silence-/3/ with /2/ replaced by silence. Only the representations of these items can be seen in the red box.
    || Presentation with silence duration: input /1/-silence-/3/. |c(i,1)-5| vs time (msec). List chunks select most predictive code. Order stored in WM layers. Gap in resonant activity of /1/-silence-/3/ in item and feature layers corresponds to perceived silence.
  • image p456fig12.54 Item /2/ is restored in the correct list position in response to the list /1/-noise-/3/.
    || Presentation with noise: input /1/-noise-/3/. |c(i,1)-5| vs time (msec). List chunks select the most predictive code. Order restored in WM layers. Resonant activity of /1/-/2/-/3/ in item and feature layers corresponds to restoration of item /2/ replaced by noise in input.
  • image p457fig12.55 Item /4/ is restored in the correct list position in response to the list /1/-noise-/5/. This and the previous figure show how future context can disambiguate past noisy sequences that are otherwise identical.
    || Presentation with noise: input /1/-noise-/5/. |c(i,1)-5| vs time (msec). List chunks select the most predictive code. Order restored in WM layers. Resonant activity of /1/-/4/-/5/ in item and feature layers corresponds to restoration of item /4/ replaced by noise in input.
  • image p459fig12.56 (Grossberg, Pearson 2008) proposed that the ability of working memories to store repeated items in a sequence represents rank information about the position of an item in a list using numerical hypercolumns in the prefrontal cortex (circles with numbered sectors: 1,2,3,4). These numerical hypercolumns are conjointly activated by inputs from item categories and from the analog spatial representation of numerosity in the parietal cortex. These parietal representations (overlapping Gaussian activity profiles that obey a Weber Law) had earlier been modeled by (Grossberg, Repin 2003). See the text for details.
    || Item-order-rank working memory, rank information from parietal numerosity circuit (Grossberg, Pearson 2008; Grossberg, Repin 2003). [Sensory working memory-> adaptive filter-> list chunk-> attentive prime-> Motor working memory]-> [large, small] numbers-> transfer functions with variable thresholds and slopes-> uniform input-> integrator amplitude-> number of transient sensory signals.
  • image p460fig12.57 The lisTELOS architecture explains and simulates how sequences of saccadic eye movement commands can be stored in a spatial working memory and recalled. Multiple brain regions are needed to coordinate these processes, notably three different basal ganglia loops to regulate saccade storage, choice, and performance, and the supplementary eye fields (SEF) to choose the next saccadic command from a stored sequence. Because all working memories use a similar network design, this model can be used as a prototype for storing and recalling many other kinds of cognitive, spatial, and motor information. See the text for details.
    || lisTELOS model- Spatial working memory (Silver, Grossberg, Bullock, Histed, Miller 2011). Simulates how [PPC, PFC, SEF, FEF, SC] interact with 3 BG loops to learn and perform sequences of saccadic eye movements.
  • image p461fig12.58 The lisTELOS model built upon key processes that were earlier modeled by the TELOS model. See the text for details.
    || TELOS model (Brown, Bulloch, Grossberg 1999, 2004). shows [BG nigro-[thalamic, collicular], FEF, ITa, PFC, PNR-THAL, PPC, SEF, SC, V1, V4/ITp, Visual Cortex input] and [GABA].
  • image p462fig12.59 The TELOS model clarifies how reactive vs. planned eye movements may be properly balanced against one another, notably how a fast reactive movement is prevented from occuring in response to onset of a cue that requires a different, and more contextually appropriate, response, even if the latter response takes longer to be chosen and performed. The circuit explains how "the brain knows it before it knows" what this latter response should be by changing the balance of excitation to inhibition in the basal ganglie (BG) to keep the reactive gate stays shut until the correct target position can be chosen by a frontal-parietal resonance.
    || Balancing reactive vs. planned movements (Brown, Bulloch, Grossberg 2004). (a) shows [FEF, PPC]-> [BG, SC], and BG-> SC. (b) FTE vs time (msec) for [fixation, saccade, overlap, gap, delayed saccade] tasks.
  • image p463fig12.60 Rank-related activity in prefrontal cortex and supplementary eye fields from two different experiments. See the text for details.
    || Rank-related activity in PFC and SEF. Prefrontal cortex (Averbeck etal 2003) [square, inverted triangle]. Supplementary eye field (Isoda, Tanji 2002).
  • image p464fig12.61 (left column) A microstimulating electrode causes a spatial gradient of habituation. (right column) The spatial gradient of habituation that is caused by microstimulation alters the order of saccadic performance of a stored sequence, but not which saccades are performed, using interactions between the prefrontal cortex (PFC) working memory and the supplemental eye field (SEF) saccadic choice.
    || (left) Microstimulation causes habituation (Grossberg 1968). Stimulation caused habituation. Cells close to the stimulation site habituate most strongly. (right) Stimulation biases selection PFC-> SEF-> SC. PFC Activity gradient in working memory, SEF Microstimulation causes habituation, During selection habituated nodes are less likely to win this competition.
  • image p464fig12.62 The most habituated positions have their neuronal activities most reduced, other things being equal, as illustrated by the gradient from deep habituation (red) to less habituation (pink). The saccadic performance orders (black arrows) consequently tend to end in the most habituated positions that have been stored.
    || The most habituated position is foveated last. For each pair of cues, the cue closest to the stimulation site is most habituated -- and least likely to be selected. Because stimulation spreads in all directions, saccade trajectories tend to converge.
  • image p465fig12.63 Neurophysiological data (left image) and lisTELOS stimulation (right figure) showing how microstimulation biases saccadic performance order but not the positions to which the saccades will be directed. See the text for details.
    || Saccade trajectories converge to a single location in space. Microstimulation biased selection so saccade trajectories converged toward a single location in space. [Data, model] contra <-> Ipsi (msec)
  • image p467fig12.64 Some of the auditory cortical regions that respond to sustained or transient sounds. See text for details.
    || Some auditory cortical regions. Core <-> belt <-> parabelt. [Belt, Core, ls, PAi, Parabelt, PGa, TAs, TE, TP, TPO, st s].
  • image p468fig12.65 Linguistic properties of the PHONET model and some of the data that it simulates. The upper left image summarizes the asymmetric transient-to-sustained gain control that helps to create invariant intraword ratios during variable-rate speech. The lower left image summarizes the rate-dependent gain control of the ARTPHONE model that creates rate-invariant working memory representations in response to sequences of variable-rate speech. The right image summarizes the kind of paradoxical VC-CV category boundary data of (Repp 1980) that ARTPHONE simulates. See the text for details.
    || (left upper) [transient, sustained] [working memory, filter, category]. (left lower) phone inputs-> [input rate estimate, features], Features w <- habituative transmitter gates -> categories-> rate invariant phonetic output, input rate estimate-> gain control-> [features, categories] rate-dependent integration of categories and features. (right) % 2-stop vs VC-CV silent interval (msec): [ib-ga, ib-ba, iga, iba].
  • image p469fig12.66 (left column) A schematic of how preserving the relative durations of consonant and vowel pairs, as in the first and third images, can preserve a percept, in this case of /ba/, but not doing so, as in the first and second images, can cause a change in percept, as from /ba/ to /wa/, as in the data of (Miller, Liberman 1979) that PHONET simulates. (right column) Changing frequency extent can also cause a /ba/-/wa/ transition, as shown in data of (Schwab, Sawusch, Nusbaum 1981) that PHONET also simulates.
    || (left image) Maintaining relative duration as speech speeds up preserves percept (Miller, Liberman 1979). frequency vs time- [/ba/, /wa/, /ba/] (right image) Changing frequency extent causes /ba/-/wa/ transition (Schwab, Sawusch, Nusbaum 1981). frequency vs time- [/ba/, /wa/] Dt extent.
  • image p469fig12.67 PHONET contains transient and sustained cells that respond to different kinds of sounds, notably the transients of certain consonants and the sustained sounds of certain vowels. It then uses the transient working memory to gain-control the integration rate of the sustained working memory to which these different detectors input.
    || Phonetic model summary. (left) Acoustic tokens [consonant, vowel]. (middle) Acoustic detectors [transient (sensitive to rate), Sustained (sensitive to duration)]. (right) Working memory, Spatially stored transient pattern (extent) + gain control-> spatially stored sustained pattern.
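    To make the gain-control idea concrete, here is a minimal Python sketch (my own illustration under assumed constants, not PHONET's published equations): the transient channel's rate estimate multiplies the integration rate of the sustained channel, so the stored vowel activity depends only on the vowel/consonant duration ratio. Preserving the ratio as speech speeds up preserves the stored value (the /ba/ percept); changing the ratio changes it (toward /wa/).

      # Hedged sketch of transient-to-sustained gain control.
      def stored_activity(dur_c, dur_v, dt=0.001):
          """Integrate the vowel at a rate gained by the consonant's rate estimate."""
          gain = 1.0 / dur_c                   # transient channel: short consonant => fast speech
          x = 0.0
          for _ in range(int(dur_v / dt)):
              x += dt * gain * (1.0 - x)       # shunting integration toward 1
          return x                             # equals 1 - exp(-dur_v/dur_c): ratio-invariant

      print(stored_activity(0.04, 0.16))       # slow speech
      print(stored_activity(0.02, 0.08))       # double rate, same ratio -> same value
      print(stored_activity(0.04, 0.08))       # changed ratio -> different stored value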
  • image p471fig12.68 A mismatch reset of /b/ in response to the /g/ in [ib]-[ga] can rapidly shut off the [ib] percept, leading to the percept of [ga] after an interval of silence. In contrast, resonant fusion of the two occurrences of /b/ in [ib]-[ba] can cause a continuous percept of sound [iba] to occur during times at which silence is heard in response to [ib]-[ga].
    || Mismatch vs resonant fusion
  • image p473fig12.69 Error rate and mean reaction time (RT) data from the lexical decision experiments of (Schvaneveldt, McDonald 1981). ART Matching Rule properties explain these data in (Grossberg, Stone 1986).
    || (left) Error rate vs type of prime [R, N, U], [non,] word. (right) Mean RT (msec) vs type of prime [R, N, U], [non,] word.
  • image p474fig12.70 The kind of model macrocircuit that was used in (Grossberg, Stone 1986) to explain lexical decision task data.
    || inputs-> A1 <-> A2 iconic sensory features <-> A3 item and order in sensory STM <-> A4 list parsing in STM (masking field) <-> A5 semantic network (self-feedback). [A4, A5] <-> V* visual object recognition system. M1-> [outputs, A1]. M1 <-> M2 iconic motor features <-> M3 item and order in motor STM. A2-> M2. A3-> M3.
  • image p476fig12.71 Word frequency data of (Underwood, Freund 1970) that were explained in (Grossberg, Stone 1986).
    || percent errors vs frequency of old words [L-H to H-H, L-L to H-L].
  • image p481fig13.01 Macrocircuit of the functional stages and anatomical interpretations of the Cognitive-Emotional-Motor, or CogEM, model.
    || Drive-> hypothalamus value categories <-> amygdala incentive motivational learning-> Orbitofrontal cortex- object-value categories <-> sensory cortex- invariant object categories- conditioned reinforcer learning-> amygdala-> hypothalamus.
  • image p483fig13.02 The object-value categories in the orbitofrontal cortex require converging specific inputs from the sensory cortex and nonspecific incentive motivational inputs from the amygdala in order to fire. When the orbitofrontal cortex fires, it can deliver top-down ART Matching Rule priming signals to the sensory cortical area by which it was activated, thereby helping to choose the active recognition categories there that have the most emotional support, while suppressing others, leading to attentional blocking of irrelevant cues.
    || Cognitive-Emotional-Motor (CogEM) model. Drive-> amygdala incentive motivational learning-> orbitofrontal cortex- needs converging cue and incentive inputs to fire <-> sensory cortex- conditioned reinforcer learning-> amygdala. CS-> sensory cortex. Motivated attention closes the cognitive-emotional feedback loop, focuses on relevant cues, and causes blocking of irrelevant cues.
  • image p483fig13.03 The predicted processing stages of CogEM have been supported by anatomical studies of connections between sensory cortices, amygdala, and orbitofrontal cortex.
    || Adapted from (Barbas 1995). sensory cortices = [visual, somatosensory, auditory, gustatory, olfactory]. sensory cortices-> amygdala-> orbital prefrontal cortex. sensory cortices-> orbital prefrontal cortex. [visual cortex, amygdala]-> lateral prefrontal cortex.
  • image p484fig13.04 The top-down feedback from the orbitofrontal cortex closes a feedback loop that supports a cognitive-emotional resonance. If this resonance can be sustained long enough, it enables us to have feelings at the same time that we experience the categories that caused them.
    || Cognitive-Emotional resonance. Basis of "core consciousness" and "the feeling of what happens". (Damasio 1999) derives heuristic version of CogEM model from his clinical data. Drive-> amygdala-> prefrontal cortex-> sensory cortex, resonance around the latter 3. How is this resonance maintained long enough to become conscious?
  • image p484fig13.05 Classical conditioning is perhaps the simplest kind of associative learning.
    || Classical conditioning (nonstationary prediction). Bell (CS)-> (CR), Shock (US)-> Fear (UR), associative learning.
  • image p485fig13.06 (left column) An inverted-U occurs in conditioned reinforcer strength as a function of the ISI between the CS and the US. Why is learning attenuated at 0 ISI? (right column) Some classical conditioning data that illustrate the inverted-U in conditioning as a function of the ISI.
    || InterStimulus Interval (ISI) effect. Data from (Smith etal 1969; Schneiderman, Gormezano 1964).
  • image p485fig13.07 The paradigm of secondary conditioning. See the text for details.
    || Secondary conditioning (Advertising!). [CS1, CS2] become conditioned reinforcers.
  • image p486fig13.08 The blocking paradigm illustrates how cues that do not predict different consequences may fail to be attended.
    || Blocking- minimal adaptive prediction. Phase [I, II] - CS2 is irrelevant.
  • image p486fig13.09 Equally salient cues can be conditioned in parallel to an emotional consequence.
    || Parallel processing of equally salient cues vs overshadowing (Pavlov).
  • image p486fig13.10 Blocking follows if both secondary conditioning and attenuation of conditioning at a zero ISI occur.
    || Blocking = ISI + secondary conditioning.
  • image p487fig13.11 The three main properties of CogEM that help to explain how attentional blocking occurs.
    || CogEM explanation of attentional blocking. Internal drive input <-> Conditioned reinforcer learning (self-recurrent) <-> Competition for STM <- Motor learning. 1. Sensory representations compete for limited capacity STM. 2. Previously reinforced cues amplify their STM via positive feedback. 3. Other cues lose STM via competition.
  • image p488fig13.12 (left column) How incentive motivational feedback amplifies activity of a sensory cortical cell population. (right column) A sensory cortical cell population whose activity is amplified by incentive motivational feedback can suppress the activities of less activated populations via self-normalizing recurrent competitive interactions.
    || Motivational feedback and blocking. (left) sensory input CS, STM activity without motivational feedback, STM activity with motivational feedback. (right) STM suppressed by competition, STM amplified by (+) feedback.
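    A minimal Python sketch of this blocking mechanism (parameter values and the faster-than-linear signal function are illustrative assumptions, not the book's published parameters): two equally salient cues are briefly input to a recurrent shunting on-center off-surround network, and the population that also receives incentive motivational feedback wins the self-normalizing STM competition while the other is suppressed.

      import numpy as np

      A, B, G = 1.0, 1.0, 2.0               # decay rate, upper bound, motivational gain
      f = lambda x: 5.0 * x ** 2            # faster-than-linear recurrent signal
      I = np.array([0.5, 0.5])              # equally salient CS inputs
      x = np.zeros(2)
      dt = 0.01
      for step in range(3000):
          It = I if step < 500 else 0.0     # inputs on briefly, then STM storage
          fb = f(x) * np.array([G, 1.0])    # incentive feedback amplifies cue 0
          x += dt * (-A * x + (B - x) * (fb + It) - x * (fb.sum() - fb))
      print(x.round(3))                     # cue 0 is stored in STM; cue 1 is blocked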
  • image p489fig13.13 (top row) If a positive ISI separates onset of a CS and US, then the CS can sample the consequences of the US during the time interval before it is inhibited by it. (bottom row) A CogEM simulation of the inverted-U in conditioning as a function of the ISI between CS and US.
    || Positive ISI and conditioning.
  • image p490fig13.14 In order for conditioning to work properly, the sensory representation needs to have at least two successive processing stages. See the text for why.
    || Model of Cognitive-Emotional circuit. Drive-> Drive representation-> ??? <-> Sensory STM <-CS
  • image p490fig13.15 The CogEM circuit is an ancient design that is found even in mollusks like Aplysia. See the text for details.
    || Aplysia (Buonomano, Baxter, Byrne, Neural Networks 1990; Grossberg, Behavioral and Brain Sciences 1983). Facilitator neuron ~ drive representation.
  • image p492fig13.16 (left column) In order to satisfy all four postulates, there needs to be UCS-activated arousal of a polyvalent CS-activated sampling neuron. (right column) The arousal needs to be nonspecific in order to activate any of the CSs that could be paired with the UCS.
    || Polyvalent CS sampling and US-activated nonspecific arousal.
  • image p493fig13.17 (top row) Overcoming the ostensible contradiction that seems to occur when attempting to simultaneously realize hypotheses (3) and (4). (bottom row) The problem is overcome by assuming the existence of US-activated drive representation to which CSs can be associated, and that activate nonspecific incentive motivational feedback to sensory representations.
    || Learning nonspecific arousal and CR read-out. (top) Learning to control nonspecific arousal, Learning to read-out the CR (bottom) Drive representation, Incentive motivation.
  • image p494fig13.18 Realizing the above constraints favors one particular circuit. Circuits (a) and (b) are impossible. Circuit (d) allows previously occurring sensory cues to be stored in STM. Circuit (e) in addition enables a CS to be stored in STM without initiating conditioning in the absence of a US.
    || Learning to control nonspecific arousal and read-out of the CR: two stages of CS. (d) & (e) polyvalent cells.
  • image p494fig13.19 (left column, top row) Secondary conditioning of both arousal and a specific response are now possible. (bottom row) The CogEM circuit may be naturally extended to include multiple drive representations and inputs. (right column, top row) The incentive motivational pathway is also conditionable in order to enable motivational sets to be learned.
    || Secondary conditioning. Homology: conditionable incentive motivation. Multiple drive representations and inputs.
  • image p496fig13.20 (top image) A single avalanche sampling cell can learn an arbitrary space-time pattern by sampling it as a temporally ordered series of spatial patterns using a series of outstars. Once an avalanche
  • image p497fig13.21 (left column) An early embodiment of nonspecific arousal was a command cell in such primitive animals as crayfish. (right column) The songbird pattern generator is also an avalanche. This kind of circuit raises the question of how the connections self-organize through developmental learning.
    || Nonspecific arousal as a command cell. Crayfish swimmerets (Stein 1971). Songbird pattern generator (Fee etal 2002). Motor-> RA-> HVC(RA).
  • image p498fig13.22 (left column, top row) Adaptive filtering and conditioned arousal are both needed to regulate what cues can learn to activate particular space-time patterns. These developments lead inexorably to basic cognitive abilities, as embodied in the 3D LAMINART models for 3D vision and figure-ground perception (Chapter 11) and the 3D ARTSCAN SEARCH model for invariant object learning, recognition, and 3D search (Chapter 6). (right column, top row) Conditioned arousal enables only emotionally important cues to activate a motivationally relevant space-time pattern. (bottom row) Conditioned arousal and drive representations arise naturally from the unlumping of avalanche circuits to make them selective to motivationally important cues. The MOTIVATOR model is a natural outcome of this unlumping process (this chapter).
    || (top) Adaptive filtering and Conditioned arousal. Towards Cognition: need to filter inputs to the command cell. Towards Emotion: important signals turn arousal ON and OFF. (bottom) Conditioned arousal and Drive representations. Competition between conditioned arousal sources at drive representations, eg amygdala.
  • image p499fig13.23 (left column) Self-organization in avalanches includes adaptive filtering by instars, serial learning of temporal order, and learned read-out of spatial patterns by outstars. (right column) Serial learning of temporal order occurs in recurrent associative networks.
    || (left) Self-organizing avalanches [instars, serial learning, outstars]. (right) Serial list learning.
  • image p500fig13.24 Both primary excitatory and inhibitory conditioning can occur using opponent processes and their antagonistic rebounds.
    || Opponent processing. Cognitive drive associations. Primary associations: excitatory [CS, US, Fear], inhibitory [CS, US, Fear, Relief rebound].
  • image p501fig13.25 When an unbiased transducer is embodied by a finite rate physical process, mass action by a chemical transmitter is the result.
    || Unbiased transducer (Grossberg 1968). S = input, T = output, T = S*B, where B is the gain. Suppose T is due to release of chemical transmitter y at a synapse: release rate T = S*y (mass action); accumulation y ~= B.
  • image p501fig13.26 A simple differential equation describes the processes of transmitter accumulation and release that do their best, at a finite rate, to carry out unbiased transduction.
    || Transmitter accumulation and release. Transmitter y cannot be restored at an infinite rate: T = S*y, y ~= B. Differential equation: d[dt: y] = A*(B - y) - S*y = accumulate - release. Transmitter y tries to recover to ensure unbiased transduction. What if it falls behind? Evolution has exploited the good properties that happen then.
  • image p502fig13.27 Despite the fact that less transmitter y is available after persistent activation by a larger input signal S, the gated output signal S*y is larger due to the mass action gating of S by y.
    || Minor mathematical miracle. At equilibrium: 0 = d[dt: y] = A*(B - y) - S*y. Transmitter y decreases when input S increases: y = A*B/(A + S). However, output S*y increases with S!: S*y = S*A*B/(A + S) (gate, mass action).
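    The equilibrium algebra is easy to check numerically; a tiny Python sketch with illustrative constants:

      # y = A*B/(A+S) falls as S rises, yet the gated output S*y = A*B*S/(A+S)
      # rises monotonically toward its asymptote A*B.
      A, B = 1.0, 1.0
      for S in [0.5, 1.0, 2.0, 4.0, 8.0]:
          y = A * B / (A + S)
          print(f"S={S:4.1f}  y={y:.3f}  S*y={S * y:.3f}")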
  • image p502fig13.28 Fast increments and decrements in an input S lead to slow habituation of the habituative gate, or medium-term memory, transmitter y. The output T is a product of these fast and slow variables, and consequently exhibits overshoots, habituation, and undershoots in its response.
    || Habituative transmitter gate: Input; Habituative gate d[dt: y] = A*(B - y) - S*y; Output [overshoot, habituation, undershoot]s Weber Law.
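    A short Euler-integration sketch in Python (constants are illustrative assumptions) reproduces the overshoot, habituation plateau, and undershoot of the gated output T = S*y when the input S steps up and later back down:

      import numpy as np

      A, B = 0.1, 1.0
      dt = 0.01
      t = np.arange(0.0, 200.0, dt)
      S = np.where((t > 50) & (t < 150), 2.0, 0.5)    # input pulse on a baseline
      y = np.full_like(t, B)                          # transmitter starts accumulated
      out = np.zeros_like(t)
      for i in range(1, len(t)):
          y[i] = y[i-1] + dt * (A * (B - y[i-1]) - S[i-1] * y[i-1])
          out[i] = S[i] * y[i]
      # out overshoots at t=50, habituates to a plateau, then undershoots its
      # baseline when S steps back down at t=150 (Weber-law renormalization).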
  • image p503fig13.29 The ON response to a phasic ON input has Weber Law properties due to the divisive terms in its equilibrium response, which are due to the habituative transmitter.
    || ON-response to phasic ON-input. S1 = f(I+J): y1 = A*B/(A+S1), T1 = S1*y1 = A*B*S1/(A+S1); S2 = f(I): y2 = A*B/(A+S2), T2 = S2*y2 = A*B*S2/(A+S2). ON = T1 - T2 = A^2*B*(f(I+J) - f(I)) / ((A + f(I))*(A + f(I+J))). Note Weber Law. When f has a threshold, small I requires larger J to fire due to numerator, but makes suprathreshold ON bigger due to denominator. When I is large, quadratic in denominator and upper bound of f make ON small.
  • image p504fig13.30 OFF rebound occurs when the ON-input shuts off due to the imbalance that is caused by the ON input in the habituation of the transmitters in the ON and OFF channels. The relative sizes of ON responses and OFF rebounds are determined by the arousal level I.
    || OFF-rebound due to phasic input offset. Shut off J (not I!). Then: S1 = f(I), S2 = f(I); y1 ~= A*B/(A+f(I+J)) < y2 ~= A*B/(A+f(I)) since y1 and y2 are SLOW; T1 = S1*y1 < T2 = S2*y2. OFF = T2 - T1 = A*B*f(I)*(f(I+J) - f(I)) / ((A + f(I))*(A + f(I+J))). Note Weber Law due to remembered previous input. Arousal sets sensitivity of rebound: OFF/ON = f(I)/A. Why is the rebound transient? Note equal f(I) inputs.
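    Both formulas, and the OFF/ON = f(I)/A sensitivity rule, can be checked in a few lines of Python (the linear signal function and constants are illustrative assumptions):

      A, B = 1.0, 1.0
      f = lambda w: w                      # linear signal function
      I, J = 2.0, 1.0                      # arousal level and phasic input
      denom = (A + f(I)) * (A + f(I + J))
      ON = A**2 * B * (f(I + J) - f(I)) / denom
      OFF = A * B * f(I) * (f(I + J) - f(I)) / denom
      print(ON, OFF, OFF / ON, f(I) / A)   # OFF/ON equals f(I)/A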
  • image p504fig13.31 Behavioral contrast can occur during reinforcement learning due to decreases in either positive or negative reinforcers. See Figure 13.32 for illustrative operant conditioning data.
    || Behavioral contrast: rebounds! Shock level vs trials. 1. A sudden decrease in frequency or amount of food can act as a negative reinforcer: Frustration. 2. A sudden decrease in frequency or amount of shock can act as a positive reinforcer: Relief.
  • image p505fig13.32 Response suppression and the subsequent antagonistic rebounds are both calibrated by the inducing shock levels.
    || Behavioral contrast (Reynolds 1968). Responses per minute (VI schedule) vs Trial shock level.
  • image p505fig13.33 An unexpected event can disconfirm ongoing processing by triggering a burst of nonspecific arousal that causes antagonistic rebounds in currently active gated dipoles, whether cognitive or affective.
    || Novelty reset: rebound to arousal onset. 1. Equilibrate to I and J: S1 = f(I+J); y1 = A*B/(A+S1); S2 = f(I); y2 = A*B/(A+S2). 2. Keep phasic input J fixed; increase arousal I to I* = I + ∆I: OFF reaction if T1 < T2; OFF = T2 - T1 = f(I*)*y2 - f(I*+J)*y1 = A*B*{ A*(f(I*) - f(I*+J)) + (f(I*)*f(I+J) - f(I)*f(I*+J)) } / ((A + f(I))*(A + f(I+J))). 3. How to interpret this complicated equation?
  • image p506fig13.34 With a linear signal function, one can prove that the rebound increases with both the previous phasic input intensity J and the unexpectedness of the disconfirming event that caused the burst of nonspecific arousal.
    || Novelty reset: rebound to arousal onset.
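    A numerical sketch of that result in Python (linear f(w) = w and constants are illustrative assumptions): computing the rebound from the gated-dipole formulas shows it equals A*B*J*(∆I - A)/((A + f(I))*(A + f(I+J))), so it grows with both the phasic input J and the arousal increment ∆I once ∆I exceeds A.

      A, B = 1.0, 1.0
      f = lambda w: w

      def rebound(I, J, dI):
          y1 = A * B / (A + f(I + J))          # habituated ON-channel gate
          y2 = A * B / (A + f(I))              # less habituated OFF-channel gate
          Istar = I + dI                       # arousal burst
          return f(Istar) * y2 - f(Istar + J) * y1

      for J in (0.5, 1.0, 2.0):
          for dI in (2.0, 4.0):
              print(J, dI, round(rebound(2.0, J, dI), 4))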
  • image p506fig13.35 A shock, or other reinforcing event, can have multiple cognitive and emotional effects on different brain processes.
    || Multiple functional roles of shock. 1. Reinforcement sign reversal: An isolated shock is a negative reinforcer; In certain contexts, a shock can be a positive reinforcer. 2. STM-LTM interaction: Prior shock levels need to be remembered (LTM) and used to calibrate the effect of the present shock (STM). 3. Discriminative and situational cues: The present shock level is unexpected (novel) with respect to the shock levels that have previously been contingent upon experimental cues: shock as a [1.reinforcer, 2. sensory cue, 3. expectancy].
  • image p509fig13.36 How can life-long learning occur without passive forgetting or associative saturation?
    || Associative learning. 1. Forgetting (eg remember childhood experiences): forgetting [is NOT passive, is Selective]; 2. Selective: larger memory capacity; 3. Problem: why doesn't continued associative learning saturate the weights?
  • image p510fig13.37 A disconfirmed expectation can cause an antagonistic rebound that inhibits prior incentive motivational feedback, but by itself is insufficient to prevent associative saturation.
    || Learn on-response. 1. CS-> ON, disconfirmed expectation-> antagonistic rebound, OFF-channel is conditioned 2. CS-> [ON, OFF]-> net, zero net output. What about associative saturation?
  • image p510fig13.38 Dissociation of the read-out of previously learned adaptive weights, or LTM traces, and of the read-in of new weight values enables back-propagating dendritic action potentials to teach the new adaptive weight values.
    || Dissociation of LTM read-out and read-in. Backpropagating dendritic action potentials as teaching signals. 1. LTM Dendritic spines (Rall 1960).
  • image p510fig13.39 Shunting competition and informational noise suppression in affective gated dipoles, plus back-propagating action potentials for teaching signals, enable the net normalized adaptive weights to be learned. They never saturate!
    || Learn net dipole output pattern. Opponent "decision" controls learning. Cf. competitive learning. Learning signal, opponent extinction.
  • image p512fig13.40 A conditioning paradigm that illustrates what it means for conditioned excitators to extinguish.
    || Conditioned excitor extinguishes. 1. Learning phase: CS1 bell-> US, CS1-> Fear(-). 2. Forgetting phase: CS1 bell-> Forgetting. 3. The expectation of shock is disconfirmed.
  • image p513fig13.41 A conditioning paradigm that illustrates what it means for conditioned inhibitors not to extinguish.
    || Conditioned inhibitor does not extinguish. 1. Learning phase: CS1 light-> shock, CS1-> Fear(-); Forgetting phase: n/a. 2. Learning phase: CS1 light + CS2 bell-> no shock; CS2-> relief. Forgetting phase: CS2 bell-> no forgetting. SAME CS could be used! SAME "teacher" in forgetting phase! Something else must be going on, or else causality would be violated!
  • image p513fig13.42 A conditioned excitor extinguishes because the expectation that was learned of a shock during the learning phase is disconfirmed during the forgetting phase.
    || Conditioned excitor extinguishes. Learning phase: CS1 bell-> US; CS1-> Fear(-); CS1-> shock; CS1 is conditioned to an expectation of shock. Forgetting phase: CS1 bell-> forgetting. The expectation of shock is disconfirmed.
  • image p513fig13.43 A conditioned inhibitor does not extinguish because the expectation that was learned of no shock during the learning phase is not disconfirmed during the forgetting phase.
    || Conditioned inhibitor does not extinguish. 1. Learning phase: CS1 light-> Shock; CS1-> Fear(-). Forgetting phase: n/a. 2. Learning phase: CS1 light + CS2 bell-> NO shock; CS2-> relief(+); CS2-> no shock. Forgetting phase: CS2 bell-> no forgetting. The expectation that "no shock" follows CS2 is NOT disconfirmed!
  • image p514fig13.44 Analog of the CogEM model in Figure 6.1 of (Damasio 1999).
    || (a) map of object X-> map of proto-self at inaugural instant-> [, map of proto-self modified]-> assembly of second-order map. (b) map of object X enhanced-> second-order map imaged.
  • image p519fig14.01 Coronal sections of prefrontal cortex. Note particularly the areas 11, 13, 14, and 12o.
    ||
  • image p520fig14.02 Macrocircuit of the main brain regions, and connections between them, that are modelled in the unified predictive Adaptive Resonance Theory (pART) of cognitive-emotional and working memory dynamics. Abbreviations in red denote brain regions used in cognitive-emotional dynamics. Those in green denote brain regions used in working memory dynamics. Black abbreviations denote brain regions that carry out visual perception, learning and recognition of visual object categories, and motion perception, spatial representation and target tracking. Arrows denote non-adaptive excitatory synapses. Hemidiscs denote adaptive excitatory synapses. Many adaptive synapses are bidirectional, thereby supporting synchronous resonant dynamics among multiple cortical regions. The output signals from the basal ganglia that regulate reinforcement learning and gating of multiple cortical areas are not shown. Also not shown are output signals from cortical areas to motor responses. V1: striate, or primary, visual cortex; V2 and V4: areas of prestriate visual cortex; MT: Middle Temporal cortex; MST: Medial Superior Temporal area; ITp: posterior InferoTemporal cortex; ITa: anterior InferoTemporal cortex; PPC: Posterior Parietal Cortex; LIP: Lateral InterParietal area; VPA: Ventral PreArcuate gyrus; FEF: Frontal Eye Fields; PHC: ParaHippocampal Cortex; DLPFC: DorsoLateral PreFrontal Cortex; HIPPO: hippocampus; LH: Lateral Hypothalamus; BG: Basal Ganglia; AMYG: AMYGdala; OFC: OrbitoFrontal Cortex; PRC: PeriRhinal Cortex; VPS: Ventral bank of the Principal Sulcus; VLPFC: VentroLateral PreFrontal Cortex. See the text for further details.
    ||
  • image p523fig14.03 (a) The MOTIVATOR neural model generalizes CogEM by also including the basal ganglia. It can hereby explain and simulate complementary functions of the amygdala and basal ganglia (SNc) during conditioning and learned performance. The basal ganglia generate Now Print signals in response to unexpected rewards. These signals modulate learning of new associations in many brain regions. The amygdala supports motivated attention to trigger actions that are expected to occur in response to conditioned or unconditioned stimuli. Object Categories represent visual or gustatory inputs in anterior inferotemporal (ITA) and rhinal (RHIN) cortices, respectively. Value Categories represent the value of anticipated outcomes on the basis of hunger and satiety inputs, in amygdala (AMYG) and lateral hypothalamus (LH). Object-Value Categories resolve the value of competing perceptual stimuli in medial (MORB) and lateral (ORB) orbitofrontal cortex. The Reward Expectation Filter detects the omission or delivery of rewards using a circuit that spans ventral striatum (VS), ventral pallidum (VP), striosomal delay (SD) cells in the ventral striatum, the pedunculopontine nucleus (PPTN) and midbrain dopaminergic neurons of the substantia nigra pars compacta/ventral tegmental area (SNc/VTA). The circuit that processes CS-related visual information (ITA, AMYG, ORB) operates in parallel with a circuit that processes US-related visual and gustatory information (RHIN, AMYG, MORB). (b) Reciprocal adaptive connections between hypothalamus and amygdala enable amygdala cells to become learned value categories. The bottom region represents hypothalamic cells, which receive converging taste and metabolite inputs whereby they become taste-drive cells. Bottom-up signals from activity patterns across these cells activate competing value category, or US Value Representations, in the amygdala. A winning value category learns to respond selectively to specific combinations of taste-drive activity patterns and sends adaptive top-down priming signals back to the taste-drive cells that activated it. CS-activated conditioned reinforcer signals are also associatively linked to value categories. Adaptive connections end in (approximately) hemidiscs. See the text for details.
    ||
  • image p524fig14.04 (a) Model basal ganglia circuit for the control of dopaminergic Now Print signals from the substantia nigra pars compacta, or SNc, in response to unexpected rewards. Cortical inputs (Ii), activated by conditioned stimuli, learn to excite the SNc via a multi-stage pathway from the ventral striatum (S) to the ventral pallidum and then on to the PPTN (P) and the SNc (D). The inputs Ii excite the ventral striatum via adaptive weights WIS, and the ventral striatum excites the SNc with strength W_PD. The striosomes, which contain an adaptive spectral timing mechanism [xij, Gij, Yij, Zij], learn to generate adaptively timed signals that inhibit reward-related activation of the SNc. Primary reward signals (I_R) from the lateral hypothalamus both excite the PPTN directly (with strength W_RP) and act as training signals to the ventral striatum S (with strength W_RS) that trains the weights W_IS. Arrowheads denote excitatory pathways, circles denote inhibitory pathways, and hemidiscs denote synapses at which learning occurs. Thick pathways denote dopaminergic signals.
    ||
  • image p530fig14.05 Displays used by (Buschman, Miller 2007) in their visual search experiments. See the text for details.
    || Fixation 500 ms-> Sample 1000 ms-> Delay 500 ms-> Visual [pop-out, search]- reaction time.
  • image p531fig14.06 Classification of scenic properties as texture categories by the ARTSCENE model. See the text for details.
    || Image-> Feature extraction (texture principal component rankings)-> Learning feature-to-scene mapping (texture category principal component rankings) <- scene class. Attentional shrouds shrink from large to small as the principal component rank increases.
  • image p531fig14.07 Voting in the ARTSCENE model achieves even better prediction of scene type. See the text for details.
    || Image-> Feature extraction (texture principal component rankings)-> Learning feature-to-scene mapping (texture category principal component rankings)-> evidence accumulation (sum)-> scene class winner-take-all inference. Attentional shrouds shrink from large to small as the principal component rank increases.
  • image p532fig14.08 Macrocircuit of the ARTSCENE Search neural model for learning to search for desired objects by using the sequences of already experienced objects and their locations to predict what and where the desired object is. V1 = First visual area or primary visual cortex; V2 = Second visual area; V4 = Fourth visual area; PPC = Posterior Parietal Cortex; ITp = posterior InferoTemporal cortex; ITa = anterior InferoTemporal cortex; MTL = Medial Temporal Lobe; PHC = ParaHippoCampal cortex; PRC = PeriRhinal Cortex; PFC = PreFrontal Cortex; DLPFC = DorsoLateral PreFrontal Cortex; VPFC = Ventral PFC; SC = Superior Colliculus.
    ||
  • image p533fig14.09 Search data and ARTSCENE Search simulations of them in each pair of images from (A) to (F). See the text for details.
    || 6*[data vs simulation], [Response time (ms) versus epoch].
  • image p540fig15.01 The timing of CS and US inputs in the delay and trace conditioning paradigms.
    || Delay and trace conditioning paradigms. [CS, US] vs [Delay, Trace]. To perform an adaptively timed CR, trace conditioning requires a CS memory trace over the Inter-Stimulus Interval (ISI).
  • image p541fig15.02 The neurotrophic Spectrally Timed Adaptive Resonance Theory, or nSTART, model of (Franklin, Grossberg 2017) includes hippocampus to enable adaptively timed learning that can bridge a trace conditioning gap, or other temporal gap between CS and US.
    || Hippocampus can sustain a Cognitive-Emotional resonance: that can support "the feeling of what happens" and knowing what event caused that feeling. [CS, US] -> Sensory Cortex (SC) <- motivational attention <-> category learning -> Prefrontal Cortex (PFC). SC conditioned reinforcement learning-> Amygdala (cannot bridge the temporal gap) incentive motivational learning-> PFC. SC adaptively timed learning and BDNF-> Hippocampus (can bridge the temporal gap) BDNF-> PFC. PFC adaptively timed motor learning-> cerebellum.
  • image p541fig15.03 Stages in the processing of adaptively timed conditioning, leading to timed responses in (d) that exhibit both individual Weber laws and an inverted U in conditioning as a function of ISI. See the text for details.
    || Curves of [Response vs ISI].
  • image p542fig15.04 Conditioning data from (Smith 1968; Millenson etal 1977). The former shows the kind of Weber Law and inverted U that were simulated in Figure 15.3. The latter shows that, if there are two ISIs during an experiment, then the animals learn to adaptively time their responses with two properly scaled Weber laws.
    || (left) One ISI (Smith 1968) [mean membrane extension (mm) versus time after CS onset (msec)]. (right) Two ISIs (Millenson etal 1977) [200, 100] msec CS test trials, [mean momentary CS amplitude (mm) vs time after CS onset (msec)]. (bottom) Conditioned eye blinks, made with nictitating membrane and/or eyelid, are adaptively timed: peak closure occurs at expected time(s) of arrival of the US following the CS and obeys a Weber Law.
  • image p543fig15.05 Simulation of conditioning with two ISIs that generate their own Weber Laws, as in the data shown in Figure 15.4.
    || Learning with two ISIs: simulation: R = sum[all: f(xi)*yi*xi] vs msec. Each peak obeys Weber Law! strong evidence for spectral learning.
  • image p543fig15.06 The circuit between dentate granule cells and CA1 hippocampal pyramid cells seems to compute spectrally timed responses. See the text for details.
    || Hippocampal interpretation. 1. Dentate granule cells (Berger, Berry, Thompson 1986): "increasing firing...in the CS period...the latency...was constant". 2. Pyramidal cells: "Temporal model" Dentate granule cells-> CA3 pyramids. 3. Convergence (Squire etal 1989): 1e6 granule cells, 1.6e5 CA3 pyramids. 80-to-1 (ri).
  • image p544fig15.07 In response to a step CS and sustained storage by I_CS of that input, a spectrum of responses xi at different rates ri develops through time.
    || Spectral timing: activation. CS-> I_CS-> All xi. STM sensory representation. Spectral activation d[dt: xi] = ri*[-A*xi + (1 - B*xi)*I_CS].
  • image p544fig15.08 The spectral activities xi generate sigmoid signals f(xi) before the signals are, in turn, gated by habituative transmitters yi.
    || Habituative transmitter gate.
  • image p544fig15.09 As always, the habituative transmitter gate yi increases in response to accumulation and decreases due to gated inactivation, leading to the kinds of transmitter and output responses in the right hand column.
    || Habituative transmitter gate (Grossberg 1968). 1. d[dt: yi] = C*(1-yi) - D*f(xi)*yi, C-term - accumulation, D-term - gated inactivation. 2. Sigmoid signal f(xi) = xi^n / (B^n + xi^n). 3. Gated output signal f(xi)*yi.
  • image p545fig15.10 When the activity spectrum xi generates a spectrum of sigmoidal signals f(xi), the corresponding transmitters habituate at different rates. The output signals f(xi)*yi therefore generate a series of unimodal activity profiles that peak at different times, as in Figure 15.3a.
    || A timed spectrum of sampling intervals. [f(xi) activation, yi habituation, f(xi)*yi gated sampling] spectra. gated = sampling intervals.
  • image p545fig15.11 The adaptive weight, or LTM trace, zi learns from the US input I_US at times when the sampling signal f(xi)*yi is on. It then gates the habituative sampling signal f(xi)*yi to generate a doubly gated response f(xi)*yi*zi.
    || Associative learning, gated steepest descent learning (Grossberg 1969). d[dt: zi] = E*f(xi)*yi*[-zi + I_US], E-term read-out of CS gated signal, []-term read-out of US. Output from each population: f(xi)*yi*zi doubly gated signal.
  • image p546fig15.12 The adaptive weights zi whose sampling signals are large when the US occurs learn fastest, as illustrated by the green region in this simulation of (Grossberg, Schmajuk 1989).
    || Computer simulation of spectral learning. (left) fast (right) slow. Constant ISI: 6 cells fast to slow, 4 learning trials, 1 test trial.
  • image p546fig15.13 The total learned response is a sum R of all the doubly gated signals in the spectrum.
    || Adaptive timing is a population property. Total output signal: R = sum[i: f(xi)*yi*zi]. Adaptive timing is a collective property of the circuit. "Random" spectrum of rates achieves good collective timing.
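    The whole spectral-timing loop is compact enough to simulate. The Python sketch below follows the xi, yi, zi equations of the preceding figures but with my own illustrative parameters (not the published ones of Grossberg and Schmajuk 1989); after a few conditioning trials the read-out R(t) = sum_i f(xi)*yi*zi peaks near the ISI with a Weber-like spread.

      import numpy as np

      A, B, C, D, E, n, K = 1.0, 1.0, 0.002, 0.01, 0.01, 4, 0.25
      rates = np.linspace(0.001, 0.02, 40)        # spectrum of rates r_i (1/msec)
      dt, steps, ISI = 1.0, 1000, 400             # 1 msec steps; US at 400 msec
      f = lambda x: x**n / (K**n + x**n)          # sigmoid sampling signal

      z = np.ones_like(rates)                     # adaptive weights (LTM traces)
      for trial in range(10):                     # conditioning trials
          x, y = np.zeros_like(rates), np.ones_like(rates)
          for t in range(steps):
              I_US = 20.0 if t == ISI else 0.0
              g = f(x) * y                        # habituatively gated sampling
              z += dt * E * g * (-z + I_US)       # gated steepest-descent learning
              x += dt * rates * (-A * x + (1 - B * x) * 1.0)   # CS held on
              y += dt * (C * (1 - y) - D * f(x) * y)

      x, y, R = np.zeros_like(rates), np.ones_like(rates), []
      for t in range(steps):                      # CS-only test trial
          R.append(np.sum(f(x) * y * z))
          x += dt * rates * (-A * x + (1 - B * x) * 1.0)
          y += dt * (C * (1 - y) - D * f(x) * y)
      print(int(np.argmax(R)))                    # peak time near the ISI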
  • image p547fig15.14 An individual
  • image p547fig15.15 Expected non-occurrences do not prevent the processing of sensory events and their expectations. Rather, they prevent mismatches of those expectations from triggering orienting reactions.
    || Expected non-occurrence of goal. Some rewards are reliable but delayed in time. Does not lead to orienting reactions: How? Both expected and unexpected nonoccurrences are due to mismatch of a sensory event with learned expectations. Expected non-occurrences do not inhibit sensory matching: eg a pigeon can see an earlier-than-usual food pellet. Hypothesis: Expected non-occurrences inhibit the process whereby sensory mismatch activates orienting reactions. Mismatch not-> orient.
  • image p548fig15.16 Homologous recognition learning and reinforcement learning macrocircuits enable adaptively timed conditioning in the reinforcement learning circuit to increase inhibition of the orienting system at times when a mismatch in the recognition system would have reduced inhibition of it.
    || Homology between ART and CogEM models, complementary systems. [Recognition, Reinforcement] learning vs [Attentional, Orienting] system. Reinforcement: timing, drive representation.
  • image p548fig15.17 The timing paradox asks how inhibition of an orienting response (-) can be spread throughout the ISI, yet accurately timed responding can be excited (+) at the end of the ISI.
    || Timing paradox. [CS light, US shock] vs t. ISI = InterStimulus Interval = expected delay of reinforcer. Want timing to be accurate. Want to inhibit exploratory behaviour throughout the ISI.
  • image p549fig15.18 The Weber Law solves the timing paradox by creating an adaptively timed response throughout the ISI that peaks at the ISI. Within the reinforcement learning circuit, this response can maintain inhibition of the orienting system A at the same time as it generates adaptively timed incentive motivation to the orbitofrontal cortex.
    || Weber Law: reconciling accurate and distributed timing. Resolution: Output can inhibit orienting, peak response probability. What about different ISIs? Standard deviation = peak time. Weber law rule.
  • image p549fig15.19 How the adaptively timed hippocampal spectrum T inhibits (red arrow) the orienting system A as motivated attention in orbitofrontal cortex Si(2) peaks at the ISI.
    || Conditioning, Attention, and Timing circuit. Hippocampus spectrum-> Amygdala orienting system-> neocortex motivational attention. Adaptive timing inhibits orienting system and maintains adaptively timed Motivated Attention on the CS.
  • image p550fig15.20 Adaptively timed conditioning of Long Term Depression, or LTD, occurs in the cerebellum at synapses between parallel fibres and Purkinje cells, thereby reducing inhibition of subcortical nucleus cells and enabling them to express their learned movement gains within the learned time interval. Also see Figure 15.21.
    || [CS-Activated input pathways parallel fibres, US-Activated climbing fibres]-> [Subcortical nucleus (gain control), Cerebellar cortex- Purkinje cells (timing)].
  • image p551fig15.21 The most important cell types and circuitry of the cerebellum: Purkinje cells (PC) receive excitatory inputs from the climbing fibres (CF) that originate in the inferior olive (IO) and from parallel fibres (PF), which are the axons of granule cells (GC). GCs, in turn, receive inputs from the mossy fibres (MF) coming from the precerebellar nuclei (PCN). The PF also inhibit PC via basket cells (BC), thereby helping to select the most highly activated PC. The PC generate inhibitory outputs from the cerebellar cortex to the deep cerebellar nuclei (DCN), as in Figure 15.20. Excitatory signals are denoted by (+) and inhibitory signals by (-). Other notations: GL- granular layer; GoC- Golgi cells; ML- molecular layer; PCL- Purkinje cell layer; SC- stellate cell; WM- white matter.
    ||
  • image p551fig15.22 Responses of a retinal cone in the turtle retina to brief flashes of light of increasing intensity.
    || response vs msec.
  • image p552fig15.23 Cerebellar biochemistry that supports the hypothesis of how mGluR supports adaptively timed conditioning at cerebellar Purkinje cells. AMPA, amino-3-hydroxy-5-methyl-4-isoxazole propionic acid-sensitive glutamate receptor; cGMP, cyclic guanosine monophosphate; DAG, diacylglycerol; glu, glutamate; GC, guanylyl cyclase; gK, Ca2+-dependent K+ channel protein; GTP, guanosine triphosphate; IP3
  • image p556fig15.24 (a) Data showing normally timed responding (solid curve) and short latency responses after lesioning cerebellar cortex (dashed curve). (b) computer simulation of short latency response after ablation of model cerebellar cortex.
    ||
  • image p557fig15.25 Computer simulations of (a) adaptively timed long term depression at Purkinje cells, and (b) adaptively timed activation of cerebellar nuclear cells.
    || response vs time (msec)
  • image p557fig15.26 Brain regions and processes that contribute to autistic behavioral symptoms when they become imbalanced in prescribed ways.
    || Basal Ganglia prolonged gate opening <-> { Amygdala emotionally depressed-> [hippocampus- hyperspecific learning; Cerebellum- adaptive timing fails; hypofrontal blocking fails, no Theory of Mind]-> Neocortex; Neocortex- rewards not received-> Amygdala}.
  • image p559fig15.27 Brain regions and processes that contribute to the release of dopaminergic Now Print signals by the substantia nigra pars compacta, or SNc, in response to unexpected reinforcing events. See the text for details.
    || Model of spectrally timed SNc learning (Brown, Bullock, Grossberg 1999). Delayed inhibitory expectations of reward. Dopamine cells signal an error in reward prediction timing or magnitude. Immediate excitatory predictions of reward. Lateral hypothalamus (Primary Reward Input)-> [(+)ventral striatum <-> ventral pallidum (+)-> PPTN(+)-> SNc]. SNc-> [dopamine signal-> ventral striatum, Striosomal cells]. Conditioned Stimuli (CS)(+)-> [ventral striatum, striosomal cells]. Striosomal cells(-)-> SNc.
  • image p559fig15.28 Neurophysiological data (left column) and model simulations (right column) of SNc responses. See the text for details.
    || membrane potential vs time
  • image p560fig15.29 Excitatory pathways that support activation of the SNc by a US and the conditioning of a CS to the US.
    || Excitatory pathway. Primary reward (apple juice) briefly excites lateral hypothalamus. Hypothalamic-PPTN excitation causes SNc dopamine burst. Hypothalamic activity excites ventral striatum for training. Active CS working memory signals learn to excite ventral striatum. Lateral hypothalamus (Primary Reward Input)-> [(+)ventral striatum <-> ventral pallidum(+)-> PPTN(+)-> SNc]. SNc-> dopamine signal-> ventral striatum. Conditioned Stimuli working memory trace (CS)(+)-> ventral striatum.
  • image p560fig15.30 The inhibitory pathway from striosomal cells to the SNc is able to inhibit the SNc when a reward occurs with expected timing and magnitude.
    || Inhibitory pathway. Learning: CS-striosomal LTP occurs due to a three-way coincidence [An active CS working memory input, a Ca2+ spike, a dopamine burst]; Signaling: The delayed Ca2+ spike facilitates striosomal-SNc inhibition;. Striosomal cells learn to predict both timing and magnitude of reward signal to cancel it: reward expectation;. Conditioned stimuli (CS) LTP-> Striosomal cells <- dopamine | (-)-> SNc->.
  • image p561fig15.31 The CS activates a population of striosomal cells that respond with different delays in order to enable adaptively timed inhibition of the SNc.
    || Expectation timing (Fiala, Grossberg, Bullock 1996; Grossberg, Merrill 1992, 1996; Grossberg, Schmajuk 1989). How do cells bridge hundreds of milliseconds? Timing spectrum (msec). 1. CS activates a population of cells with delayed transient signals: mGluR. 2. Each has a different delay, so that the range of delays covers the entire interval. 3. Delayed transients gate both learning and read-out of expectations.
  • image p561fig15.32 The SNc can generate both dopamine bursts and dips in response to rewards whose amplitude is unexpectedly large or small.
    || Inhibitory pathway: expectation magnitude. 1. If reward is greater than expected, a dopamine burst causes striosomal expectation to increase. 2. If reward is less than expected, a dopamine dip causes striosomal expectation to decrease. 3. This is a negative feedback control system for learning. Conditioned stimuli (CS)-> Striosomal cells <- dopamine | (-)-> SNc->.
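    Stripped of anatomy, this negative feedback loop behaves like the Python sketch below (the learning rate and reward magnitude are my assumptions): dopamine bursts shrink as the striosomal expectation comes to cancel a reliable reward, and omitting the reward then produces a dip.

      expectation = 0.0
      for trial in range(25):
          reward = 1.0 if trial < 20 else 0.0   # reward omitted on later trials
          dopamine = reward - expectation       # burst if positive, dip if negative
          expectation += 0.3 * dopamine         # striosomal weights track the reward
          print(trial, round(dopamine, 3))      # bursts fade; omission yields dips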
  • image p563fig15.33 The basal ganglia gate neural processing in many parts of the brain. The feedback loop through the lateral orbitofrontal cortex (blue arrow, lateral orbitofrontal) is the one that MOTIVATOR models.
    || MOTIVATOR models one of several thalamocortical loops through basal ganglia (Adapted from Fundamental Neuroscience. 2002 Copyright Elsevier). [cortex-> striatum-> pallidum S. nigra-> thalamus] vs [motor, oculomotor, dorsolateral prefrontal, lateral orbitofrontal, anterior cingulate]. thalamus-> [striatum, cortex].
  • image p563fig15.34 The colored regions are distinct parts of the basal ganglia in the loops depicted in Figure 15.33.
    || Distinct basal ganglia zones for each loop (Adapted from Fundamental Neuroscience. 2002 Copyright Elsevier).
  • image p564fig15.35 (a) A pair of recurrent shunting on-center off-surround networks for control of the fore limbs and hind limbs. (b) Varying the GO signal to these networks can trigger changes in movement gaits. See the text for details.
    ||
  • image p565fig15.36 (a) The FOVEATE model circuit for the control of saccadic eye movements within the peri-pontine reticular formation. (b) A simulated saccade staircase. See the text for details.
    || [left, right] eye FOVEATE model. [vertical vs horizontal] position (deg).
  • image p566fig15.37 Steps in the FOVEATE model
  • image p567fig15.38 (a) The Gated Pacemaker model for the control of circadian rhythms is a recurrent shunting on-center off-surround network whose excitatory feedback signals are gated by habituative transmitters. Tonic arousal signals energize the pacemaker. Diurnal (left) and nocturnal (right) pacemakers are determined by whether phasic light signals turn the pacemaker on or off. An activity-dependent fatigue signal prevents the pacemaker from becoming overly active for too long. (b) Two simulations of circadian activity cycles during different schedules of light (L) and dark (D). See the text for details.
    || sourceOn-> on-cells (recurrent) <-(-) (-)> off-cells (recurrent) <-sourceOff. on-cells-> activity-> off-cells. off-cells-> fatigue. Diurnal: sourceOn=[light, arousal]; sourceOff=arousal;. Nocturnal: sourceOn=arousal; sourceOff=[arousal, light];.
  • image p568fig15.39 Circuits of the MOTIVATOR model that show hypothalamic gated dipoles.
    || inputs-> [object, value] categories-> object-value categories-> [reward expectation filter, [FEF, EAT] outputs]. Reward expectation filter [DA dip, arousal burst]-> alpha1 non-specific arousal-> value categories. Msi drive inputs-> value categories.
  • image p569fig15.40 The direct and indirect basal ganglia circuits that control GO and STOP movement signals. See the text for details.
    || [Direct path GO(+), Indirect path STOP(+), dopamine from SNc(+-)]-> striatum. GO-> GPi/SNr-> Thalamus (VA/Vlo) <-> frontal cortex. STOP-> GPe <-> STN-> GPi/SNr. NAc-> GPi/SNr.
  • image p573fig16.01 The experimental chamber (A) and neurophysiological recordings from a rat hippocampus (B) that led to the discovery of place cells. See the text for details.
    ||
  • image p574fig16.02 Neurophysiological recordings of 18 different place cell receptive fields. See the text for details.
    ||
  • image p575fig16.03 As a rat navigates in its experimental chamber (black curves), neurophysiological recordings disclose the firing patterns (in red) of (a) a hippocampal place cell and (b) an entorhinal grid cell.
    ||
  • image p578fig16.04 Cross-sections of the hippocampal regions and the inputs to them. See the text for details.
    || EC-> CA1-> CA3-> DG. Layers [V/VI, II, III].
  • image p580fig16.05 Macrocircuit of the GridPlaceMap model, which can learn both 2D grid cells and place cells in response to realistic trajectories of navigating rats using a hierarchy of SOMs with identical equations.
    || GridPlaceMap model: rate-based and spiking (Pilly, Grossberg 2012). Pre-wired 1D stripe cells, learns both 2D grid and place cells! Same laws for both; both select most frequent and energetic inputs. Place cells emerge gradually in response to developing grid cells. vestibular signals-> path integration-> [stripe-> grid-> place] cells.
  • image p581fig16.06 The learning of hexagonal grid cell receptive fields as an animal navigates an open field is a natural consequence of simple trigonometric properties of the positions at which the firing of stripe cells that are tuned to different directions will co-occur.
    || The Trigonometry of spatial navigation. Coactivation of stripe cells.
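    The trigonometric point can be checked directly in Python (the stripe period and phases are arbitrary assumptions): multiplying three stripe firing patterns whose preferred directions differ by 60 degrees yields coactivation peaks on a hexagonal lattice, while two patterns 90 degrees apart yield a square lattice, foreshadowing Figures 16.14 and 16.15.

      import numpy as np

      period = 0.4                                     # stripe spacing (m)
      xs = np.linspace(-1.0, 1.0, 201)
      X, Y = np.meshgrid(xs, xs)

      def stripe(theta_deg):
          """Periodic firing along one preferred direction."""
          th = np.deg2rad(theta_deg)
          d = X * np.cos(th) + Y * np.sin(th)          # displacement along direction
          return 0.5 * (1.0 + np.cos(2.0 * np.pi * d / period))

      hex_coact = np.prod([stripe(a) for a in (0, 60, 120)], axis=0)
      square_coact = np.prod([stripe(a) for a in (0, 90)], axis=0)
      # Peaks of hex_coact form a hexagonal lattice; square_coact peaks form a
      # square lattice. A SOM that amplifies the most frequent and energetic
      # coactivations therefore learns hexagonal grid fields from stripe cells
      # whose directions differ by 60 degrees.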
  • image p582fig16.07 Stripe cells were predicted in (Mhatre, Gorchetchnikov, Grossberg 2012) to convert linear velocity signals into the distances travelled in particular directions. They are modeled by directionally-sensitive ring attractors, which help to explain their periodic activation as an animal continues to move in a given direction. See the text for details.
    || Stripe cells. Stripe cells are predicted to exist in (or no later than) EC layer (III, V/VI). Linear path integrators: represent distance traveled using linear velocity modulated with head direction signal. Ring attractor circuit: the activity bump represents distance traveled, stripe cells with same spatial period and directional preference fire with different spatial phases at different ring positions. Distance is computed directly, it does not require decoding by oscillatory interference. Periodic stripe cell activation due to ring anatomy: periodic boundary conditions. Stripe firing fields with multiple orientations, phases and scales.
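    A minimal Python sketch of one such ring attractor (ring size, bump width, and spatial period are illustrative assumptions): linear velocity is projected onto the cell's preferred direction and integrated modulo the spatial period, so the activity bump, and the stripe cell it drives, fires periodically with distance traveled.

      import numpy as np

      n_cells, period = 20, 0.4                       # ring size; spatial period (m)
      pref = np.array([np.cos(np.deg2rad(30)), np.sin(np.deg2rad(30))])
      centers = np.arange(n_cells) * period / n_cells # preferred phases on the ring

      def ring_activity(phase):
          """Gaussian bump of activity centered at the current ring phase."""
          d = np.abs(centers - phase)
          d = np.minimum(d, period - d)               # periodic boundary conditions
          return np.exp(-(d / (0.1 * period)) ** 2)

      phase, dt = 0.0, 0.02
      for _ in range(500):                            # integrate a straight 3 m run
          v = np.array([0.3, 0.0])                    # 0.3 m/s heading east
          phase = (phase + pref @ v * dt) % period    # path integration mod period
      print(ring_activity(phase).round(2))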
  • image p582fig16.08 Some experimental evidence for stripe-like cell receptive fields has been reported. The band cells posited by Neil Burgess also exhibit the one-dimensional firing symmetry of stripe cells, but are modeled by oscillatory interference. See the text for details.
    || Evidence for stripe-like cells. Entorhinal cortex data (Sargolini, Fyhn, Hafting, McNaughton, Witter, Moser, Moser 2006; Krupic, Burgess, O'Keefe 2012).
  • image p583fig16.09 The GRIDSmap model used algorithmically defined stripe cells to process realistic rat trajectories. The stripe cell outputs then formed inputs to the adaptive filter of a self-organizing map which learned hexagonal grid cell receptive fields.
    || GRIDSmap. Self-organizing map receives inputs from stripe cells and learns to respond to most frequent co-activation patterns. Stripe cells combine speed and head direction to create a periodic 1D position code. Virtual rat navigated using live rat trajectories from Moser Lab. Speed and head direction drives stripe cells.
  • image p583fig16.10 The GRIDSmap model is embedded into a more complete representation of the processing stages from receipt of angular head velocity and linear velocity signals to this learning of place cells.
    || GRIDSmap. Pre-wired 2D stripe cells, learns 2D grid cells. vestibular cells [angular head velocity-> head direction cells, linear velocity]-> stripe cells- small scale 1D periodic spatial code (ECIII)-> SOM grid cells entorhinal cortex- small scale 2D periodic spatial scale-> SOM place cells hippocampal cortex- large scale 2D spatial code (dentate/CA3). Unified hierarchy of SOMs.
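    One SOM stage of this hierarchy reduces to a few lines. The Python sketch below (sizes and learning rate are my assumptions, not the published parameters) uses winner-take-all competition with instar learning, so the map weights come to track whichever stripe-cell coactivation patterns are most frequent and energetic; for 60-degree separated stripe inputs those are the hexagonal ones.

      import numpy as np

      rng = np.random.default_rng(0)
      W = rng.random((10, 60))                 # 10 map cells x 60 stripe-cell inputs

      def som_step(s, lr=0.05):
          """One update: the most activated map cell learns its current input."""
          j = int(np.argmax(W @ s))            # winner-take-all competition
          W[j] += lr * (s - W[j])              # instar (gated steepest descent)
          return j

      for _ in range(1000):                    # stand-in for stripe-cell patterns
          som_step(rng.random(60))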
  • image p584fig16.11 GRIDSmap simulation of the learning of hexagonal grid fields. See the text for details.
    || Simulation results. Multiple phases per scale. response vs length scale (0.5m+).
  • image p584fig16.12 Temporal development of grid cell receptive fields on successive learning trials (1,3,5,7,25,50,75,100).
    || Temporal development of grid fields. Cells begin to exhibit grid structure by 3rd trial. Orientations of the emergent grid rotate to align with each other over trials.
  • image p585fig16.13 Hexagonal grid cell receptive fields develop if their stripe cell directional preferences are separated by 7, 10, 15, 20, or random numbers of degrees. The number and directional selectivities of stripe cells can thus be chosen within broad limits without undermining grid cell development.
    ||
  • image p585fig16.14 Superimposing firing of stripe cells whose directional preferences differ by 60 degrees supports learning hexagonal grid cell receptive fields in GRIDSmap.
    || GRIDSmap: from stripe cells to grid cells. Grid-cell Regularity from Integrated Distance through Self-organizing map. Superimposing firing of stripe cells oriented at intervals of 60 degrees. Hexagonal grid!
  • image p586fig16.15 Superimposing stripe cells oriented by 45 degrees does not lead to learning of rectangular grids in GRIDSmap, but it does in an oscillatory interference model.
    || Why is a hexagonal grid favored? Superimposing firing of stripe cells oriented at intervals of 45 degrees. Rectangular grid. This and many other possibilities do not happen in vivo. They do happen in the oscillatory interference model. How are they prevented in GRIDSmap?
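    The geometry behind the answer can be checked in a few lines (an illustration of the trigonometry only, not of the model's learning dynamics; angles and period are arbitrary choices of mine): summing cosine "stripe" firing fields whose orientations differ by 60 degrees produces coactivation peaks on a hexagonal lattice, while 45-degree separations produce a rectangular one.

        import numpy as np

        def summed_stripe_fields(angles_deg, period=0.3, size=2.0, n=200):
            """Sum of cosine stripe firing fields at the given orientations over
            an n x n grid covering a size x size arena (metres)."""
            xs = np.linspace(0.0, size, n)
            X, Y = np.meshgrid(xs, xs)
            k = 2.0*np.pi/period
            total = np.zeros_like(X)
            for a in np.deg2rad(angles_deg):
                total += np.cos(k*(X*np.cos(a) + Y*np.sin(a)))
            return total

        hex_field = summed_stripe_fields([0, 60, 120])   # hexagonal coactivation lattice
        rect_field = summed_stripe_fields([0, 45, 90])   # rectangular coactivation lattice

    As the next figure explains, GRIDSmap's SOM selects the hexagonal case because those coactivations are both the most frequent and the most energetic.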
  • image p586fig16.16 In the place cell learning model of (Gorchetnikov, Grossberg 2007), three populations of five cells each of entorhinal grid cells (only two are shown) with different spatial periods input to the model
  • image p587fig16.17 A finer analysis of the 2D trigonometry of spatial navigation showed that both the frequency and amplitude of coactivations by stripe cells determine the learning of hexagonal grid fields.
    || A refined analysis: SOM amplifies most frequent and energetic coactivations (Pilly, Grossberg 2012). [linear track, 2D environment]. (left) Stripe fields separated by 90°. 25 coactivations by 2 inputs. (right) Stripe fields separated by 60°. 23 coactivations by 3 inputs.
  • image p588fig16.18 Simulations of coordinated learning of grid cell receptive fields (second row) and unimodal place cell receptive fields (third row) by the hierarchy of SOMs in the GridPlaceMap model. Note the exquisite regularity of the hexagonal grid cell firing fields.
    || [stripe, grid, place] cells vs [spikes on trajectory, unsmoothed rate map, smoothed rate map].
  • image p589fig16.19 Neurophysiological data showing the smaller dorsal grid cell scales and the larger ventral grid cell scales.
    || Spatial scale of grid cells increases along the MEC dorsoventral axis (Hafting etal 2005; Sargolini etal 2006; Brun etal 2008). [dorsal (left), ventral (right)] cart [rate map, autocorrelogram]. How does the spatial scale increase along the MEC dorsoventral axis?
  • image p590fig16.20 Integration rate of grid cells decreases along the dorsoventral gradient of the Medial Entorhinal Cortex, or MEC.
    || Dorsoventral gradient in the rate of synaptic integration of MEC layer II stellate cells (Garden etal 2008). Cross-section of [Hp, CC, LEC, MEC]. (A left column) [dorsal, ventral] mV? vs msec. (B center column) [half width (ms), rise time (ms), amplitude (mV)] vs location (μm). (C right upper) responses (D right lower) width (ms) vs location (μm).
  • image p590fig16.21 Frequency of membrane potential oscillations in grid cells decreases along the dorsoventral gradient of the MEC.
    || Dorsoventral gradient in the frequency of membrane potential oscillations of MEC layer II stellate cells (Giocomo etal 2007). (C left column) Oscillation (Hz) vs distance from dorsal surface (mm). (D right upper) [dorsal, ventral] oscillations 5mV-500ms. (E right lower) [dorsal, ventral] oscillations 100ms. Both membrane potential oscillation frequency and resonance frequency decrease from the dorsal to ventral end of MEC.
  • image p591fig16.22 Time constants and duration of afterhyperpolarization currents of grid cells increase along the dorsoventral gradient of the MEC.
    || Dorsoventral gradient in afterhyperpolarization (AHP) kinetics of MEC layer II stellate cells (Navratilova etal 2012). [mAHP time constant (ms), Half-width (mm)] vs distance from the dorsal surface (mm), at [-55, -50, -45] mV. Time constants and duration of AHP increase from the dorsal to the ventral end of MEC layer II. Effectively, the relative refractory period is longer for ventral stellate cells in MEC layer II.
  • image p591fig16.23 The Spectral Spacing Model uses a rate gradient to learn a spatial gradient of grid cell receptive field sizes along the dorsoventral gradient of the MEC.
    || Spectral spacing model. Map cells responding to stripe cell inputs of multiple scales. Grid cells: MEC layer II (small scale 2D spatial code). Stripe cells: PaS / MEC deep layer (small scale 1D spatial code). Path Integration. Vestibular signals- linear velocity and angular head velocity. SOM. How do entorhinal cells solve the scale selection problem?
  • image p592fig16.24 Parameter settings in the Spectral Spacing Model that were used in simulations.
    || Simulation settings. Activity vs distance (cm). Learning trials: 40.
  • image p593fig16.25 Spectral Spacing Model STM, MTM, and LTM equations. The rate spectrum that determines the dorsoventral gradient of multiple grid cell properties is defined by μm.
    || Spectral Spacing Model equations. [STM, MTM, LTM]. μm = rate spectrum.
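    Since the figure only names the equation types, here is a generic [STM, MTM, LTM] triad in that spirit (a sketch under my own parameter choices, not the published Spectral Spacing Model equations): a shunting STM equation whose integration rate is scaled by the rate-spectrum parameter mu, a habituative transmitter (MTM) that multiplies the signal, and instar LTM learning gated by postsynaptic activity.

        import numpy as np

        def spectral_spacing_step(x, z, w, inputs, mu, dt=0.01):
            """One Euler step. x: STM activities (n,); z: habituative transmitters
            (n,); w: adaptive LTM weights (n, m); inputs: stripe-cell rates (m,).
            Small mu mimics ventral MEC cells (slow integration, large scales)."""
            s = w @ inputs                                  # adaptive filter signal
            # STM: shunting on-center off-surround dynamics at rate mu
            x = x + dt*mu*(-x + (1.0 - x)*s*z - 0.1*x*np.sum(s))
            # MTM: transmitter depletes with use, recovers toward 1
            z = z + dt*(0.5*(1.0 - z) - 2.0*z*s)
            # LTM: instar learning gated by postsynaptic STM activity
            w = w + dt*0.1*x[:, None]*(inputs[None, :] - w)
            return x, z, w

    Varying mu along an anatomical gradient then yields the dorsoventral gradients of grid spacing, field width, and oscillation frequency simulated in the following figures.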
  • image p593fig16.26 Data (left column) and simulations (right column) of the gradient of increasing grid cell spacing along the dorsoventral axis of MEC.
    || Gradient of grid spacing along dorsoventral axis of MEC (Brun etal 2008). data-[Distance (m?), Median grid spacing (m?)] simulations-[Grid spacing (cm), Grid spacing (cm)] vs response rate.
  • image p594fig16.27 Data (left column) and simulations (right column) of the gradient of increasing grid cell field width along the dorsoventral axis of MEC.
    || Gradient of field width along dorsoventral axis of MEC (Brun etal 2008). data-[Distance (m?), Width autocorr peak (m?)] simulations-[Grid field width (cm), Width autocorr peak (cm)] vs response rate.
  • image p595fig16.28 Data (left column) and simulations (right column) about peak and mean grid cell response rates along the dorsoventral axis of MEC.
    || Peak and mean rates at different locations along DV axis of MEC (Brun etal 2008). Peak rate (Hz) vs [data- DV quarter, simulations- Response rate].
  • image p596fig16.29 Data (top row) and simulations (bottom row) showing decreasing frequency of subthreshold membrane potential oscillations along the DV axis of MEC.
    || Subthreshold membrane potential oscillations at different locations along DV axis of MEC (Giocomo etal 2020; Yoshida etal 2011). Data [oscillations (Hz) vs distance from dorsal surface (mm) @[-50, -45] mV, Frequency (Hz) vs [-58, -54, -50] mV]. Simulations MPO frequency (Hz) vs [response, habituation] rate.
  • image p596fig16.30 Data (top row) and simulations (bottom row) of spatial phases of learned grid and place cells.
    || Spatial phases of learned grid and place cells (Hafting etal 2005). Data: Cross-correlogram of rate maps of two grid cells; Distribution of phase difference: distance from origin to nearest peak in cross-correlogram. Simulations: Grid cell histogram of spatial correlation coefficients; Place cell histogram of spatial correlation coefficients.
  • image p597fig16.31 Data (a) and simulations (b-d) about multimodal place cell receptive fields in large spaces. The simulations are the result of learned place fields.
    || Multimodal place cell firing in large spaces (Fenton etal 2008; Henriksen etal 2010; Park etal 2011). Number of cells (%) vs Number of place fields. [2, 3] place fields, 100*100 cm space.
  • image p597fig16.32 Data (top row) and simulations (bottom row) about grid cell development in juvenile rats. Grid score increases (a-b and d), whereas grid spacing remains fairly flat (c and e).
    || Model fits data about grid cell development (Wills etal 2010; Langston etal 2010). Data: [Gridness, grid score, inter-field distance (cm)]. Simulations: [Gridness score, Grid spacing (cm)] vs trial.
  • image p598fig16.33 Data (top row) and simulations (bottom row) of changes in place cell properties in juvenile rats, notably about spatial information (a,c) and inter-trial stability (b,d).
    || Model fits data about grid cell development (Wills etal 2010). [Data, Simulation] vs [spatial information, inter-trial stability]. x-axis [age (postnatal day), trial].
  • image p598fig16.34 The spiking GridPlaceMap model generates theta-modulated place and grid cell firing, unlike the rate-based model.
    || Theta-modulated cells in spiking model. [place, grid] cell vs [membrane potential (mV vs time), frequency vs inter-spike intervals (s), power spectra (normalized power vs frequency (Hz))].
  • image p599fig16.35 Data (a) and simulations (b-d) about anatomically overlapping grid cell modules. (a) shows the anatomical distribution of grid cells belonging to different modules in one animal. DV location (mm) vs postrhinal border. (b) shows the simulated distribution of learned grid cell spacings from two stripe cell scales. frequency (%) vs grid spacing (cm). mu = [1, 0.6]. (c) shows what happens when half the cells respond with one rate and half another rate. (d) shows the same with three rates. (e-g) show spatial maps and autocorrelograms of grid cells that arise from the different rates in (d). [rate map, autocorrelogram] vs [score [1.07, 0.5, 0.67], spacing (cm) [23.58, 41, 63.64]].
    ||
  • image p600fig16.36 The entorhinal-hippocampal system has properties of an ART spatial category learning system, with hippocampal place cells as the spatial categories. See the text for details.
    || Entorhinal-hippocampal interactions as an ART system. Hippocampal place cells as spatial categories. Angular head velocity-> head direction cells-> stripe cells- small scale 1D periodic code (ECIII) SOM-> grid cells- small scale 2D periodic code (ECII) SOM-> place cells- larger scale spatial map (DG/CA3)-> place cells (CA1)-> conjunctive-coding cells (EC V/VI)-> top-down feedback back to stripe cells- small scale 1D periodic code (ECIII). stripe cells- small scale 1D periodic code (ECIII)-> place cells (CA1).
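    A hedged sketch of the ART matching cycle that this figure maps onto the entorhinal-hippocampal loop (a generic fuzzy-ART-style choice rule of my own, not a published hippocampal model): a bottom-up pattern selects a category, the category's top-down expectation is matched against the input, and a match below vigilance triggers reset and a memory search.

        import numpy as np

        def art_choose(input_pat, weights, vigilance=0.8):
            """input_pat: binary feature vector (e.g. active stripe/grid cells);
            weights: (n_categories, n_features) learned top-down expectations."""
            assert input_pat.sum() > 0
            for j in np.argsort(-(weights @ input_pat)):     # categories by activation
                matched = np.minimum(input_pat, weights[j])  # top-down matching
                if matched.sum() / input_pat.sum() >= vigilance:
                    weights[j] = matched                     # resonance: refine category
                    return j
                # mismatch: orienting system resets category j; search continues
            return None                                      # no acceptable category

    Hippocampal place cells play the role of the learned categories, and the EC V/VI conjunctive-coding feedback plays the role of the top-down expectation.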
  • image p602fig16.37 Data showing the effect of hippocampal inactivation by muscimol on grid cell firing before, during, and six hours after the muscimol, reading from left to right.
    || Hippocampal inactivation disrupts grid cells (Bonnevie etal 2013). muscimol inactivation. spikes on trajectory: [before, after min [6-20, 20-40, 40-60, 6h]]. rate map (Hz) [18.6, 11.4, 9.5, 6.7, 10.8]. spatial autocorrelogram g=[1.12, 0.05, -0.34, 0.09, 1.27].
  • image p603fig16.38 Role of hippocampal feedback in maintaining grid fields. (a) Data showing the effect of hippocampal inactivation before and during muscimol inhibition of hippocampal cells, as in Figure 16.37. (b) Model simulation with normal grid fields. (c) Model simulation that emulates the effect of hippocampal inhibition on grid fields.
    || (a) Data: hippocampal inactivation [before, after] cart [spikes on trajectory (p: [18.6, 6.7] Hz), spatial autocorrelogram (g= [1.12, 0.09])]. (b) Model: noise-free path integration, [spikes on trajectory (p: 14.56 Hz), rate map, spatial autocorrelogram (g= 1.41), dynamic autocorrelogram (g=0.6)]. (c) Model: noisy path integration + non-specific tonic inhibition, [spikes on trajectory (p: 11.33 Hz), rate map, spatial autocorrelogram (g= 0.05), dynamic autocorrelogram (g=0.047)].
  • image p605fig16.39 Data showing effects of medial septum (MS) inactivation on grid cells and network theta oscillations in medial entorhinal cortex (MEC). (A) Examples of disruption in the spatial expression of the hexagonal grid structure for two grid cells (Brandon etal 2011). (B) Temporary reduction in the power and frequency of network theta oscillations (Koenig etal 2011). (C) Temporary reduction in the gridness score, mean firing rate, and spatial stability of grid cells (Koenig etal 2011).
    || Disruptive effects of Medial Septum inactivation in Medial Entorhinal Cortex (Brandon etal 2011; Koenig etal 2011). (A) Rate map [rate map, spatial autocorrelations, trajectory] vs [baseline, sub-sampled, medial septum inactivation, 3-6 hour recovery, 24 hour recovery], [rate map (Hz- m, p), spatial autocorrelations (gridness)][ 1.2, 7.2, 1.1; 0.25, 1.7, 0.6; 0.25, 2.5, -0.53; 0.7, 5.1, 0.55; 1.0, 5.3, 1.3; 2.1, 15, 0.19; 1.7, 12, 0.71; 1.7, 3.2, -0.22; 1.8, 9.1, 0.68; 2.5, 13, 0.46]. (B) [normalized power at 7-9 Hz, frequency (Hz)] vs 5-minute periods. (C) [mean gridness score (+-SEM), mean firing rate (% of baseline), mean correlation coeff (+-SEM)] vs 10-minute periods.
  • image p607fig16.40 Effects of medial septum (MS) inactivation on grid cells. (a) Each row shows data and different data-derived measures of grid cell responsiveness, starting from the left with the baseline response to the middle column with maximal inhibition. (b) Data showing the temporary reduction in the gridness scores during MS inactivation, followed by recovery. (c) Simulation of the collapse in gridness, achieved by reduction in cell response rates to mimic reduced cholinergic transmission. (d,e) Simulations of the reduction in gridness scores in (d) by reduction of cell response rates, in (e) by changing the leak conductance. See the text for details.
    ||
  • image p611fig16.41 How back-propagating action potentials, supplemented by recurrent inhibitory interneurons, control both learning within the synapses on the apical dendrites of winning pyramidal cells, and regulate a rhythm by which associative read-out is dissociated from read-in. See the text for details.
    ||
  • image p612fig16.42 Macrocircuit of the main SOVEREIGN subsystems.
    || [reward input, drive input, drive representation (DR), visual working memory and planning system (VWMPS), visual form and motion system (VFMS), motor approach and orienting system (MAOS), visual input (VisIn), motor working memory and planning system (MWMPS), motor approach and orienting system (MAOS), motor plant (MotP), Proprioceptive Input (PropIn), Vestibular Input (VesIn), Environmental feedback (EnvFB). DR [incentive motivational learning-> [VWMPS, MWMPS], -> VFMS, -> MAOS], VWMPS [conditioned reinforcer learning-> DR, MAOS], VFMS [visual object categories-> VWMPS, reactive movement commands-> MAOS], MWMPS [conditioned reinforcer learning-> DR, planned movement commands-> MAOS], MAOS [motor map positions-> MWMPS, motor outflow-> MotP], VisIn-> VFMS, VesIn-> MAOS, EnvFB-> [VisIn, MotP, VesIn].
  • image p613fig16.43 The main visual form and motion processing stream mechanisms of SOVEREIGN, many of them described at length in previous chapters.
    || Render 3-D scene (R3DS), figure-ground separation (FGS), log-polar transform (LPT), Gaussian coarse-coding (GCC), Invariant visual target map (IVTM), What Fuzzy ART (WhatFuzz), body spatial coordinates (BSC), where reactive visual TPV storage (WRVTS), Directional transient cell network (DTCN), Motion direction hemifield map (MDHM), Hemifield left/right scoring (HLRS), reactive visual control signal (RVCS), Parvo/Magno/Erg competition (PMEC), Approach and Orient GOp (AOGp), GOm (GOm). R3DS [parvo-> FGS, magno-> DTCN], FGS-> [LPT, WRVTS], LPT-> GCC-> IVTM-> WhatFuzz, BSC-> [RVTS, PMEC], PMEC-> [gateRVTS-> RVTS, gateRVCS-> RVCS], DTCN-> MDHM-> HLRS, HLRS-> [PMEC, RVCS], AOGp-> gateRVTS, GOm-> gateRVCS.
  • image p613fig16.44 The main target position vector (TPV), difference vector (DV), and volitional GO computations in SOVEREIGN that bring together reactive and planned signals to control decision-making and action. See the text for details.
    || Reactive visual TPV (RVT), NETs (NETs), S-MV mismatch (SMVM), NETmv (NETmv), reactive visual TPV storage (RVTS), reactive DV1 (RD1), NET (NET), motivated what and where decisions (MWWD), Planned DV1 (PD1), tonic (Tonic), top-down readout mismatch (TDRM), Parvo gate (tonic) (PG), Orienting GOp offset (OGpO). RVT-> [NETs, RVTS], NETs-> [SMVM, NET], SMVM-> NET, NETmv-> SMVM, RVTS-> [NETs, RD1], NET-> [RD1, PD1, TDRM], MWWD-> PD1, PD1-> Tonic-> TDRM, PG-> NETs, OGpO-> [NETmv, PD1].
  • image p614fig16.45 The main distance (d) and angle (a) computations that bring together and learn dimensionally-consistent visual and motor information whereby to make the currently best decisions and actions. See the text for details.
    || Reactive Visual TPV [m storage], NETm S-MV mismatch, MV mismatch, NETmv, PPVv, PPVm, Vestibular feedback, motor copy.
  • image p615fig16.46 SOVEREIGN uses homologous processing stages to model the (a) What cortical stream and the (b) Where cortical stream, including their cognitive working memories and chunking networks, and their modulation by motivational mechanisms. See the text for details.
    ||
  • image p615fig16.47 SOVEREIGN models how multiple READ circuits, operating in parallel in response to multiple internal drive sources, can be coordinated to realize a sensory-drive heterarchy that can maximally amplify the motivationally most currently favored option.
    ||
  • image p616fig16.48 SOVEREIGN was tested using a virtual reality 3D rendering of a cross maze (a) with different visual cues at the end of each corridor.
    ||
  • image p616fig16.49 The animat learned to convert (a) inefficient exploration of the maze into (b) an efficient direct learned path to the goal.
    ||
  • image p617fig16.50 The perirhinal and parahippocampal cortices enable adaptively timed reinforcement learning and spatial navigational processes that are modeled by Spectral Spacing models in the What and Where cortical streams, respectively, to be fused in the hippocampus.
    || What and Where inputs to the hippocampus (Diana, Yonelinas, Ranganath 2007). Adaptively timed conditioning and spatial navigation. Hippocampus <-> Entorhinal Cortex <-> [Perirhinal Cortex <-> what, Parahippocampal Cortex <-> where].
  • image p627tbl17.01 Homologs between reaction-diffusion and recurrent shunting cellular network models of development.
    || byRows: (reaction-diffusion, recurrent shunting net) (activator, excitatory activity) (inhibitor, inhibitory activity) (morphogenic source density, inputs) (firing of morphogen gradient, contrast enhancement) (maintenance of morphogen gradient, short-term memory) (power or sigmoidal signal functions, power or sigmoidal signal functions) (on-center off-surround interactions via diffusion, on-center off-surround interactions via signals) (self-stabilizing distributions of morphogens if inhibitors equilibrate rapidly, short-term memory pattern if inhibitors equilibrate rapidly) (periodic pulses if inhibitors equilibrate slowly, periodic pulses if inhibitors equilibrate slowly) (regulation, adaptation).
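    For comparison with the table's homologs, a generic recurrent shunting on-center off-surround network can be written as follows (notation mine; A is a decay rate, B an upper bound on activity, I_i and J_i excitatory and inhibitory inputs, and f a power or sigmoidal signal function):

        \frac{dx_i}{dt} = -A x_i + (B - x_i)\big[ I_i + f(x_i) \big] - x_i \big[ J_i + \sum_{k \neq i} f(x_k) \big]

    Contrast enhancement of this network's stored pattern is the homolog of the firing of a morphogen gradient, and its short-term memory of the pattern is the homolog of the gradient's maintenance.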
  • image p628fig17.01 A hydra
    ||
  • image p628fig17.02 Schematics of how different cuts and grafts of the normal Hydra in (a) may (*) or may not lead to the growth of a new head. See the text for details.
    ||
  • image p629fig17.03 How an initial morphogenetic gradient may be contrast enhanced to exceed the threshold for head formation in its most active region.
    || head formation threshold, final gradient, initial gradient.
  • image p630fig17.04 Morphogenesis: more ratios (Wolpert 1969). Shape preserved as size increases. French flag problem. Use cellular models! (Grossberg 1976, 1978) vs chemical or fluid reaction-diffusion models (Turing 1952; Gierer, Meinhardt 1972).
    ||
  • image p631fig17.05 How a blastula develops into a gastrula. See the text for details.
    || 1. The vegetal pole of the blastula flattens, [Animal, vegetal] hemisphere, blastocoel. 2. Some cells change shape and move inward to form the archenteron, Blastopore. 3. Other cells break free, becoming mesenchyme. 4. Then extensions of mesenchyme cells attach to the overlying ectoderm, Archenteron. 5. The archenteron elongates, assisted by the contraction of mesenchyme cells. 6. The mouth will form, where the archenteron meets ectoderm. 7. The blastopore will form the anus of the mature animal. [Mesenchyme, Ectoderm, Endoderm, Blastocoel, Archenteron, Mesenchyme]. Concept 38.3, www.macmillanhighered.com
  • image p634fig17.06 Summing over a population of cells with binary output signals whose firing thresholds are Gaussianly distributed (left image) generates a total output signal that grows in a sigmoidal fashion with increasing input size (dashed vertical line).
    || How binary cells with a Gaussian distribution of output thresholds generate a sigmoidal population signal. [# of binary cells with threshold T, Total output signal] vs Cell firing thresholds T. Cell population with firing thresholds Gaussianly distributed around a mean value. As input increases (dashed line), more cells in the population fire with binary signals. Total population output obeys a sigmoid signal function f.
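    The figure's claim can be verified directly (a quick numerical check; the mean and spread of the thresholds are arbitrary choices of mine): if binary cells have firing thresholds T drawn from a Gaussian, the fraction of cells firing at input I is the Gaussian cumulative distribution evaluated at I, which is a sigmoid.

        import numpy as np

        rng = np.random.default_rng(1)
        thresholds = rng.normal(loc=1.0, scale=0.25, size=100_000)  # Gaussian thresholds

        def population_output(I):
            """Total (mean) output of binary cells: fraction with threshold <= I."""
            return np.mean(thresholds <= I)

        for I in [0.25, 0.75, 1.0, 1.25, 1.75]:
            print(f"I={I:4.2f}  f(I)={population_output(I):.3f}")
        # output rises sigmoidally from ~0 to ~1, steepest near the mean threshold

    Analytically, f(I) = Phi((I - mu)/sigma), the Gaussian CDF, which has the sigmoidal shape invoked throughout the book's signal functions.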
  • Introduction webPage : the questions driving this "webSite" (a collection of webPages, defined by the menu above) are :
  • How difficult would it be to augment "Transformer Neural Networks" (TrNNs) with Grossberg's concepts? This section is repeated in the Introduction webPage.
  • Grossberg 2021 p229c2h0.60 SMART computer simulations demonstrate that a good enough match of a top-down expectation with a bottom-up feature pattern generates an attentive resonance during which the spikes of active cells synchronize in the gamma frequency range of 20-70 Hz (Figure 5.40). Many labs have reported a link between attention and gamma oscillations in the brain, including two articles published in 2001, one from the laboratory of Robert Desimone when he was at the National Institute of Mental Health in Bethesda (Fries, Reynolds, Rorie, Desimone 2001), and the other from the laboratory of Wolf Singer in Frankfurt (Engel, Fries, Singer 2001). You ...
  • Grossberg 2021 p081c2h0.66 sub-section: How does one evolve a computational brain?
    The above discussion illustrates that no single step of theoretical derivation can derive a whole brain. One needs a method for deriving a brain in stages, or cycles, much as evolution has incrementally discovered ever more complex brains over many thousands of years. The following theoretical method has been successfully applied many times since I first used it in 1957. It embodies a kind of conceptual evolutionary process for deriving a brain.

    Because "brain evolution needs to achieve behavioural success", we need to start with data that embodiey indices of behavioral success. That is why, as illustrated in Figure 2.37 Modelling method and cycle, one starts with Behavioral Data from scores or hundreds of psychological experiments. These data are analyszed as the result of an individual adapting autonomously in real time to a changing world. This is the Arty of Modeling. It requires that one be able to infer from static data curves the dynamical processes that control individual behaviors occuring in real time. One of the hardest things that I teach to my students to do is "how to think in real time" to be able to carry out this speculative leap.

    Properly carried out, this analysis leads to the discovery of new Design Principles that are embodied by these behavioral processes. The Design Principles highlight the functional meaning of the data, and clarify how individual behaviors occurring in real time give rise to these static data curves.

    These principles are then converted into the simplest Mathematical Model using a method of minimal anatomies, which is a form of Occam's razor.
  • [definitions, models] of consciousness.html -
  • What is consciousness: from historical to Grossberg -
  • data from [neuroscience, psychology] : quick list, more details
  • success in [definitions, models] of [consciousness, sentience]. A few models of consciousness are summarized on my webPage A quick comparison of Consciousness Theories. Only a few concepts are listed, almost randomly selected except for [Grossberg, Taylor]. For reasons given on that webPage, Stephen Grossberg may have the ONLY definition of consciousness that is directly tied to quantitative models of lower-level [neuron, general neurology, psychology] data. Foundational models, similar in nature to the small number of general theories in physics that describe a vast range of phenomena, were derived over a period of ?4-5? decades BEFORE they were found to apply to consciousness. That paralleled their use in very widespread ...
  • John Taylor
  • references- Grossberg and
  • see Grossberg 2021: the biological need for machine consciousness
    Howell 30Dec2011, page 39 "Part VI - Far beyond current toolsets"
  • "..." (Blake Lemoine, 2022)
  • 11Jun2022 Is LaMDA Sentient? — an Interview

    22Jun2022 We’re All Different and That’s Okay

    11Jun2022 What is LaMDA and What Does it Want?

    14Aug2022 What is sentience and why does it matter?

    More detail follows from Sejnowski
  • Historical thinking about consciousness.
  • Historical thinking about quantum [neurophysiology, consciousness]
  • WRONG!! It may help the reader to re-visit comments about the historical thinking about consciousness, which is not limited to quantum consciousness. This complements items below. Early era of [General Relativity, Quantum Mechanics]: I would be greatly surprised if there wasn't ... Pribram's 1993 quantum fields and consciousness proceedings provides references back to 1960, and Jibu, Yasue comment that :
  • Howells questions about 1993 conference proceedings
  • see incorporate reader questions into theme webPage
    see Navigation: [menu, link, directory]s
  • p153 Howell: grepStr
  • p190 Howell: [neural microcircuits, modal architectures] used in ART -
    bottom-up filters | top-down expectations | purpose
    instar learning | outstar learning | p200fig05.13 Expectations focus attention: [in, out]star often used to learn the adaptive weights; top-down expectations select, amplify, and synchronize expected patterns of critical features, while suppressing unexpected features
    LGN Lateral Geniculate Nucleus | V1 cortical area | p192fig05.06 focus attention on expected critical features
    EC-III entorhinal stripe cells | CA1 hippocampal place cells | p600fig16.36 entorhinal-hippocampal system has properties of an ART spatial category learning system, with hippocampal place cells as the spatial categories
    auditory orienting arousal | auditory category | p215fig05.28 How a mismatch between bottom-up and top-down patterns can trigger activation of the orienting system A and, with it, a burst of nonspecific arousal to the category level. ART Matching Rule: TD mismatch can suppress a part of F1 STM pattern; F2 is reset if degree of match < vigilance
    auditory stream with/without [silence, noise] gaps | perceived sound continues? | p419fig12.17 The auditory continuity illusion: backwards in time - how does a future sound let a past sound continue through noise? Resonance!
    visual perception, learning and recognition of visual object categories | motion perception, spatial representation and target tracking | p520fig14.02 pART predictive Adaptive Resonance Theory. Many adaptive synapses are bidirectional, thereby supporting synchronous resonant dynamics among multiple cortical regions. The output signals from the basal ganglia that regulate reinforcement learning and gating of multiple cortical areas are not shown.
    red - cognitive-emotional dynamics
    green - working memory dynamics
    black - see [bottom-up, top-down] lists
    EEG with mismatch, arousal, and STM reset events | expected [P120, N200, P300] event-related potentials (ERPs) | p211fig05.21 Sequences of P120, N200, and P300 event-related potentials (ERPs) occur during oddball learning EEG experiments under conditions that ART predicted should occur during sequences of mismatch, arousal, and STM reset events, respectively.
    Cognitive | Emotional | p541fig15.02 nSTART neurotrophic Spectrally Timed Adaptive Resonance Theory: Hippocampus can sustain a Cognitive-Emotional resonance that can support "the feeling of what happens" and knowing what event caused that feeling. Hippocampus enables adaptively timed learning that can bridge a trace conditioning gap, or other temporal gap, between the Conditioned Stimulus (CS) and Unconditioned Stimulus (US).

    background colours in the table signify : [Northern Europe, North Africa, South Asia, Mesopotamia] etc

    Howell, OurWorldInData.org daily-covid-cases-per-million-three-day-avg Howell, OurWorldInData.org daily-covid-deaths-per-million-3-day-avg.ods
  • Note : The first model that I saw online was posted in February by complete amateurs!! Their model simply adopted models that had been used for the Spanish flu, and (largely because of the simplicity) is far better described than some academic models. YYG claims "Artificial Intelligence" (at a very brief glance, it looks like very basic machine learning to me, but I would have to go through it in detail). Is this yet another case of the machines clobbering the best human academics? Is it also yet another case of apparently one individual (apparently not directly affiliated with an institution) besting well-[[salary & benefit]ed, established, permanent] experts of government (including academic) institutions?
  • Note that a separate webPage lists a very small portion of Stephen Grossberg's publications.
  • J.E. Kaal, A. Otte, J.A. Sorensen, J.G. Emming 2021 "The nature of the atom" www.Curtis-Press.com, 268pp ISBN 978-1-8381280-2-9 https://StructuredAtom.org/
  • rationalwiki.org "Quantum consciousness" (last update 07Nov2022, viewed 16Jul2023)
    also critiques of the article above
  • Terrence J. Sejnowski 21Aug2023 "Large Language Models and the Reverse Turing Test", Neural Computation (2023) 35 (3): 309–342 (33 pages) https://direct.mit.edu/neco/issue (also copy in case original link fails)
  • Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin 12Jun2017 "Attention Is All You Need" [v5] Wed, 6 Dec 2017 03:30:32 UTC https://arxiv.org/abs/1706.03762
  • Wikipedia Consciousness
  • from the section
  • As per the second question from the section
  • As per the first question from the section
  • Menu
  • Grossbergs list of [chapter, section]s.html - Note that the links on this webPage can be used to individually view all captioned images.
  • directory of captioned images - users can easily view all of the captioned images, especially if they are downloaded onto their computer. Many image viewers have [forward, backward] arrows to go through these sequentially, or right-click to open a link in a window.
  • core bash script for extracting captions from the webPage listing, converting them to images, then vertically appending them to the figure.
  • my bash utility to [position, move] windows. This is normally used to start up 6 workspaces on my computer (Linux Mint Debian Edition), each with 5-10 apps in separate windows.
  • Prepared themes with links to the captioned images - there are a huge number of themes from the book to focus on. I have prepared a few as examples.
  • What is consciousness? - video example not ready as of 30Aug2023. I save videos as "ogv/ogg" files, an open standard format. The "VLC media player" is the program that I use to view them. I have found that although some of the standard video viewers complain, when pushed ogv files can be viewed with them.
  • Navigation: [menu, link, directory]s
  • Theme webPage generation by bash script
  • Notation for [chapter, section, figure, table, index, note]s
  • incorporate reader questions into theme webPages
  • A very primitive bash script is used to generate the search results for ALL themes in the Themes webPage. Many readers will already have far better tools for this from the Computational Intelligence area etc.
    Because the theme webPage is automatically generated, and frequently re-generated as I update the list of themes and sources, I do NOT edit the file directly. The output format can be confusing, due to the special formatted [chapter, section] headings, and large tables which will keep the readers guessing whether they are still within the theme they want to peruse (as per the Table of Contents). Perhaps I can upgrade the searches in time to reduce the confusion, and to split themes in a better way.
  • list of [chapter, section]s
  • list of [figure, table]s
  • selected index items - I have NO intention of re-typing the entire index!
  • Grossberg quotes
  • reader Howell notes - this is an example of building your own webPage of [note, comment, thought]s when reading the book, which can then be added to the bash script for searches. These are notes in addition to [figure, table] captions, mostly comprised of text within the image, but also including quotes of text in the book. Rarely, it includes comments by Howell, preceded by "Howell".
    The latter are distinct from "readers notes" (see, for example : Grossberg quotes). The reader may want to create their own file of comments based on this example, or augment this list with their [own, others'] notes. More importantly, and as an easy first adaptation of Grossbergs [core, fun, strange] concepts.html thematic listings, you probably want to get rid of Howell's notes.
  • downloading the entire webDirectories below to some directory on your filesystem, say {yourDir} : TrNNs_ART , bin (hopefully I ...)
  • adapt the bash script "bash script: thematic [search, collect]s.sh" to your own system, and run. This will require re-defining several environmental variables for your system, such as :
  • thematic sub-lists appear in the webPage "Grossberg
  • 29Sep2023 Here is a list of various problems with the captioned images and their links on the webPage Grossbergs list of [figure, table]s.html :
    10Aug2023 I haven't ...
  • How difficult would it be to augment "Transformer Neural Networks" (TrNNs) with Grossberg's concepts? 10Aug2023 This webPage has not yet been worked on. It will touch on one of three questions of this webSite as mentioned in the Introduction :
  • How difficult would it be to augment "Transformer Neural Networks" (TrNNs) with Grossberg's concepts? 10Aug2023 I haven't ...
  • conscious ART (cART), etc
  • A surprisingly small number of neural architectures can simulate [extensive, diverse] [neuro, psycho]logical data at BOTH the [sub, ]conscious levels, and for [perception, action] of [sight, auditory, touch, language, cognition, emotion, etc]. This is similar to what we see in physics.
  • simple grepStr search results : Grossberg (Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin 2017)
  • Byoung-Kyong Min 2010 "A Thalamic reticular networking model of consciousness"
    "... The model suggests consciousness as a "mental state embodied through TRN-modulated synchronization of thalamocortical networks". In this model the thalamic reticular nucleus (TRN) is suggested as ideally suited for controlling the entire cerebral network, and responsible (via GABAergic networking) for synchronization of neural activity. ..." (Wiki2023)
  • directory status & updates copyrights
  • (first posted 28Aug2020 on this blog and MarketWatch, but MW dropped the content about Roosevelt)

  • Ben Davidson of Suspicious Observers posted 3 brilliant videos on nearby stellar flaring, as further support for a potential "micro-flare" or other solar disruption to explain the 12,000 year [mythological observations, paleontology, geology, planetary] quasi-periodicity of disruptive events on Earth, which by appearances may be "imminent". I like Ben's videos. If we take an "Electric Universe" perspective, then perhaps shifts in the galactic currents could be expected to "light up" or "extinguish" stars to various degrees as the currents shift and move. In other words, the lit-up regions' motions may relate more to drifts of galactic currents than to the motions of the stars themselves? My own [cheap, crappy] animation of the spiral currents moving through stationary stars is shown in my video (mpeg format) : Bill Howells videos/Birkeland rotation in galaxy - not dark matter/Dark matter video 1 - initial, simple.mpeg

  • The [description, instructions] text file (click on link above) provides links to other macros and spreadsheets, but the other macros aren't ...
  • Ben Davidson of Suspicious Observers : In his effective style of thinking and presenting, Ben Davidson raises excellent questions about this discovery, and ties it to the Electric-Universe-related SAFIRE project, in addition to the rapidly-declining Earth's magnetic field. Returning to Ben ... (first posted here 26May2018, also posted to YouTube)
    18May2018 Ben Davidson of Suspicious Observers : Ben's ...
  • The Great Fire of Chicago 1871 - descriptions and comments Here is a fascinating collection of eyewitness accounts and comments about the fire, taken from : Charles Ginenthal 1990 "Carl Sagan and Immanuel Velikovsky" Ivy Press Books, New York 359pp (Amazon shows 1995 paperback)
  • Milan M. Radovanovic, Tomislav M. Pavlovic, Gorica B. Stanojevic, Misko M. Milanovic, Mila A. Pavlovic, Aleksandar R. Radivojevic 2014 "The influence of solar activities on occurance of the forest fires in Southern Europe"
  • Ben Davidson 25Jun2014 "2012 Serbian Wildfires"
  • Douglas V. Hoyt, Kenneth H. Schatten ?year? "The role of the sun in climate change", citing Auclair 1995 Forest fires versus sunspots
  • 05Jul2017 Modern recognition for a Ukrainian [mathematician, scientist] Alexey Grigorevich Ivakhnenko? - I haven't ...
  • Online version : http://www.idsia.ch/~juergen/DeepLearning2July2014.pdf
  • Toolsets can be browsed via: Past and Future Worlds directory. Perhaps these may be [of interest, help] to others putting together a film from Linux-based free software.
  • In addition to the video of the presentation, "Howell 161220 Big Data, Deep Learning, Safety.ogv", I have also posted the [software, scheduling/coding spreadsheet, slides, etc] used to produce the video, but not any proprietary video segments (to reduce space). This may be of use to others who produce videos and would like to know how I approached this. It won't ...
  • 28May2016 Paul Vaughan - Ring_of_Fire and Volcano_Explosivity_Index versus El_Nino_La_Nina Here is a breakthrough graph from Paul that relates volcanic activity to El Nino/ La Nina. This makes an interesting century-scale [complement, contrast] to a "short term" model for major earthquakes by Ben Davidson and colleagues of Suspicious0bservers.org (that ...)
  • 07Dec2015 Alert! awesome, beautiful paper! :
  • directory status & updates copyrights
  • Bill Howell, "Ring
  • Bill Howell "Are we ready for global cooling?" - A short presentation to Toastmasters – Dows Lake, Ottawa, 14Mar06. Needs corrections and comments! (some time later...)

  • Bill Howell 2006, "Genetic Specification of Recurrent Neural Networks: Initial Thoughts", Presentation in the Special Session on Neural Network Models and Applications in Bioinformatics, Neuroscience, and Neuro-Genetics, World Congress on Computational Intelligence 2006, Vancouver, 21-27Jul06 (available from author)

  • Bill Howell "Genetic specification of recurrent neural networks, draft concepts and
  • Bill Howell "Junk
  • directory status & updates copyrights
  • Reincarnation : A Riddle by, for, of, on, and with the Mind
  • Anti-engineering and the emergence of racisim
  • Are we ready for Global Cooling
  • The Basis for Democracy
  • Cheating Theory, Parasitic behavior, and Stupidity
  • Climate Change, or not...
  • The Freedom of a robot, the wisdom of Methuseleh
  • Climate Change, or not...
  • I, Robot
  • Inspirational Friends
  • The Last generation to die...
  • Mega-Life, Mega-Death, and the invisible hand of the Sun
  • Mind and Brain, this debate is back in fashion
  • Nuclear spent fuel, high-level radio-active waste
  • Overbreeding is the first act of war
  • Peach fuzz
  • Post-mortem of a presentation
  • directory status & updates copyrights
    observation and experience. see also Favourite sayings & Crazy Thoughts
  • Ottawa
  • Ottawa
  • directory status & updates copyrights
  • Introduction
  • Surface Conductive Faults 11Mar2016
  • Part 1 Thunderblog 11May2016
  • Part 2 Thunderblog 21May2016
  • Part 3 Thunderblog 28May216
  • The Monocline Thunderblog 06Oct016
  • Part 1 Thunderblog 20Jan2016
  • Part 2 Thunderblog 16Feb2017
  • The Summer Thermopile Thunderblog 21May2017
  • Tornado - The Electric Model Thunderblog 13Jun2017
  • Part 1 Thunderblog 10Dec2017
  • Part 2 Thunderblog 17Dec2017
  • Part 1 Arches National Monument, Thunderblog 12Feb2018
  • Part 2 Colorado Plateau, Thunderblog 12Feb2018
  • Part 3 Secondary effects from electrical deposition, Thunderblog 31Mar2018
  • Part 1 Thunderblog 31Mar2019
  • Part 2 The Electric Winds of Jupiter, Thunderblog 05May2019
  • Part 3 Some storms suck and others blow, Thunderblog 05May2019
  • Part 4 Wind Map, Thunderblog 20Jun2019
  • Part 5 Large Scale Wind Structures, Thunderblog 19Mar2020
  • Part 7, 1 of 2 Electric Earth & the Cosmic Dragon, Thunderblog and video 24Sep2020
  • Part 7, 2 of 2 Electric Earth & the Cosmic Dragon, Thunderblog and video 24Sep2020
  • Part 8 Proving the Passage of the Dragon, Thunderblog and video 31Oct2020
  • Part 9, 1 of 2 San Andreas Fault - A Dragon in Action? Thunderblog and video 18Dec2020
  • Part 9, 2 of 2 Ground Currents and Subsurface Birkeland Currents - How the Earth Thinks? Thunderblog and video 25Dec2020
  • Part 10 Reverse Engineering the Earth, Thunderblog and video 28Jan2021
  • Part 1 Easter Egg Hunt, video 22May2021
  • Part 2 The Cross from the Laramie Mountains, video 29May2021
  • The Shocking Truth, Thunderblog and video 20Aug2021
  • Cracks in Theory 20Nov2021
  • Electricity in Ancient Egypt, video 26Aug2023
  • Other electric geology concepts Based on Electric Universe theory going back to Immanuel Velikovsky, David Talbot and others (see Thunderbolts), Andrew Hall combines keen [observations, questions, electromagnetism, analysis] to provide a [stunning, brilliant] extension to electric geology concepts. [Right, wrong, true, false] are less important at this stage than to see a very [strong, independent, original] stream of thinking that challenges conventional theories. Even for strong [skeptic, critic]s, his series is well worth viewing. Perhaps only those willing to challenge their own belief systems can appreciate this? As of 29Aug2021, Andrew Hall continues making great contributions in this area. This webPage will therefore be missing his most recent postings.
    https://www.thunderbolts.info/wp/2016/03/11/surface-conductive-faults/
  • Mark Boslough and team at Sandia National Laboratory - Exploding asteroids, 1,950 views, Dec 27, 2010
  • CraterHunter (?Dennis Cox?) - A Catastrophe of Comets, The geophysical world according to me, and a few folks I happen to agree with, ~23Dec2010?
  • EU2015 speakers Bruce Leybourne and Ben Davidson - explain theories of our electromagnetic environment and the hot spots of current welling inside the Earth. 2015 (Ben Davidson video 24Feb2016) https://www.youtube.com/watch?v=mPcF40vBqzs https://www.thunderbolts.info/wp/2016/05/11/arc-blast-part-1/
  • Source links for the Thunderblog parts listed above :
    https://www.thunderbolts.info/wp/2016/05/21/arc-blast-part-two/
    https://www.thunderbolts.info/wp/2016/05/28/arc-blast-part-three/
    https://www.thunderbolts.info/wp/2016/10/06/the-monocline/
    https://www.thunderbolts.info/wp/2017/01/20/the-maars-of-pinacate-part-one/
    https://www.thunderbolts.info/wp/2017/02/16/the-maars-of-pinacate-part-two/
    https://www.thunderbolts.info/wp/2017/04/22/natures-electrode/
    https://www.thunderbolts.info/wp/2017/05/21/the-summer-thermopile/
    https://www.thunderbolts.info/wp/2017/06/13/tornado-the-electric-model/
    https://www.thunderbolts.info/wp/2017/12/10/lightning-scarred-earth-part-1/
    https://www.thunderbolts.info/wp/2017/12/17/lightning-scarred-earth-part-2/
    https://www.thunderbolts.info/wp/2018/02/12/sputtering-canyons-part-1/
    https://www.thunderbolts.info/wp/2018/02/12/sputtering-canyons-part-2/
    https://www.thunderbolts.info/wp/2018/03/31/sputtering-canyons-part-3/
    https://www.thunderbolts.info/wp/2019/03/31/the-eye-of-the-storm-part-1/
    https://www.thunderbolts.info/wp/2019/05/05/the-eye-of-the-storm-part-2/
    https://www.thunderbolts.info/wp/2019/05/24/eye-of-the-storm-part-3/
    https://www.thunderbolts.info/wp/2019/06/20/eye-of-the-storm-part-4-2/
    https://www.thunderbolts.info/wp/2020/03/19/47212/
    https://www.thunderbolts.info/wp/2020/04/04/the-great-red-spot/
    https://www.thunderbolts.info/wp/2020/09/24/48437/
    https://www.youtube.com/watch?v=DgNTKrjpiiI&t=0s
    https://www.youtube.com/watch?v=_3ITTdl_QRY&t=0s
    https://www.thunderbolts.info/wp/2020/09/24/48437/
    https://youtu.be/_3ITTdl_QRY
    https://www.thunderbolts.info/wp/2020/10/31/eye-of-the-storm-part-8/
    https://youtu.be/2WS0vsVB4Tw
    https://www.thunderbolts.info/wp/2020/12/25/eye-of-the-storm-part-9/
    https://youtu.be/LwbsA-QDBFY
    https://www.thunderbolts.info/wp/2020/12/25/eye-of-the-storm-part-9/
    https://youtu.be/-KoJ9wpvD_g
    https://www.youtube.com/watch?v=-KoJ9wpvD_g
    https://www.thunderbolts.info/wp/2021/01/28/eye-of-the-storm-part-10-2/
    https://www.youtube.com/watch?v=hW4kCP-ascw
    https://www.patreon.com/posts/andrew-hall-egg-51555997?utm_medium=post_notification_email&utm_source=post_link&utm_campaign=patron_engagement
    https://thunderbolts.us7.list-manage.com/track/click?u=1b8e5fc5ffab70f95805dea12&id=f6b8bab8a7&e=54f3bc9169
    https://www.thunderbolts.info/wp/2021/08/20/the-shocking-truth/
    https://www.youtube.com/watch?v=Pt6NscQ2qS8 Thunderblog source article
    https://www.youtube.com/watch?v=ISfuOZgaN3c
    https://www.youtube.com/watch?v=i4jWPfNJ0rM&t=1s
    Shine On You Crazy Diamond
  • Immanuel Velikovsky - Velikovsky is a primary inspiration for a great deal of breakthrough thinking across many subjects! He was not liked by establishment science, but over time most of his [idea, prediction]s have been [right, insightful], and mainstream scientists wrong! That thing about Venus sprouting from [Saturn, Mars, something] (I forget which) in historical times is a bit much for me, but given his track record I am afraid to say that he was wrong.
  • Rens Van Der Sluijs "Theories on the Rocks - In a Flash (Part Two)" 27Aug2021
  • Expanding Earth (EE) hypothesis [?Hildebrand?, Neal Adams (Batman artist), James Maxlow, ??? Hurrell?] - entirely subsumes plate tectonics and takes it to an entirely new level, both for geology and evolution.
  • Petroglyphs [David Talbot, Wal Thornhill, Anthony Peratt] - Mythology backed by space plasma science help explain what some [mythology, petroglyphic images] may represent. This is far superior to any other explanations that I have seen (including ?Joseph Campbell's?).
  • directory status & updates copyrights

    Go to: Home page of www.BillHowell.ca



    directory status & updates copyrights
  • Howell
  • 2022-03-02 08:44 The Control Center of the Russian Space Agency Roskosmos has no more control over its spy-satellites.
    “Hacking group NB65, affiliated with Anonymous, has shut down the Control Center of the Russian Space Agency ‘Roskosmos’. Russia has no more control over their own spy-satellites,” reads a message on Telegram channel of the latest information from Ukraine’s Armed Forces.
    2022-02-11 12:52 Questions about Russia’s war against Ukraine with defense reporter Illia Ponomarenko
  • Kyle Beattie 30Oct2021 "Worldwide Bayesian Causal Impact Analysis of Vaccine Administration on Deaths and Cases Associated with COVID-19" https://drive.google.com/file/d/1DLlRa9rUqvW9pG1vNEsWMEydWwsmSMbe/view
  • A very incomplete webPage with my covax notes (very messy, but some great references, some ratty) : /home/bill/web/ProjMajor/Sun pandemics, health/corona virus/Howell - covax adverse effects.html /home/bill/web/Bill Howells videos/Howell - videos.html
    Toolsets can be browsed via: Icebreaker unchained directory. Perhaps these may be [of interest, help] to others putting together a film from Linux-based free software.
    https://gellerreport.com/2022/01/the-coming-war-with-russia.html/
    23Jan2022 I "Published" this idea on TradingView : USOIL snakes, ladders, Tchaichovsky (I am not sure if non-members can view this?).
  • directory status & updates copyrights
  • Howell
  • Wow! I was totally surprised by this ethnic map!! View John J. Mearsheimer, Uof Chicago, presentation : Why is Ukraine the West’s Fault?
    Again from John J. Mearsheimer, Uof Chicago, presentation : Why is Ukraine the West’s Fault?
    Again from John J. Mearsheimer, Uof Chicago, presentation : Why is Ukraine the West’s Fault?
    Again from John J. Mearsheimer, Uof Chicago, presentation : Why is Ukraine the West’s Fault?
    Again from John J. Mearsheimer, Uof Chicago, presentation : Why is Ukraine the West’s Fault?
    (initially from Lawrence Person @AGHamilton29 - Here is a liveuamap.com of all the verified Russian attacks on Ukraine as of 3:30 am last night. Just a reminder that just a day and a half ago, Putin was claiming he was just sending in “peacekeeping” forces to defend the area circled in yellow.)
    27Feb2022 ukrinform.net - Russian losses update
    03Mar2022 Russia’s losses reached nearly 9,000 in a week – Zelensky
    26Feb2022 kyivindependent.com - Russia’s losses to date
    Again from John J. Mearsheimer, Uof Chicago, presentation : Why is Ukraine the West’s Fault?
    eng.minerals-ua.info - Mineral Resources of Ukraine - Metallic minerals
    John J. Mearsheimer, Uof Chicago, GREAT presentation 04-07Jun2015 : Why is Ukraine the West’s Fault?
    2. Maps of Ukraine [war [forecast, battles], losses, oil, gas pipelines, minerals] -
  • kyivindependent.com - news log This is my main source of daily information on the Ukraine-Russia situation
  • ukrinform.net news log
  • John J. Mearsheimer, Uof Chicago, GREAT presentation 04-07Jun2015 : Why is Ukraine the West’s Fault?
    Mearsheimer 2015 Ethnic breakdown of Ukraine
    Mearsheimer 2015 Ukraine election 2010
    Mearsheimer 2015 Ukraine election 2004
    Mearsheimer 2015 Ukraine 2015 survey join NATO
    Mearsheimer 2015 Ukraine 2015 survey join EU or [Russia, Belarus, Kazakhstan]
    Mearsheimer 2015 Europe dependence on Russian gas
    Mearsheimer 2015 Eastward expansion of NATO stages
  • 04Mar2022 MktWatch, Opinion: Russia’s invasion of Ukraine: 4 ways this war could end
  • 04Mar2022 10:14 external Ukrainian Navy sinks its flagship to keep it out of Russian hands
  • 02Mar2022 08:44 (Ukr time) ukrinform.net - The Control Center of the Russian Space Agency Roskosmos has no more control over its spy-satellites.
    “Hacking group NB65, affiliated with Anonymous, has shut down the Control Center of the Russian Space Agency ‘Roskosmos’. Russia has no more control over their own spy-satellites,” reads a message on Telegram channel of the latest information from Ukraine’s Armed Forces.
  • What has been the ramp-up in Russian personnel since 24Feb2022? Even a day or two later, I was still left with the impression of 150,000 troops, but that ...
  • 27Feb2022 bloomberg.com suggests 205,000 Ukrainian active troops, but call-ups and active conscription will have increased that. In the same article, they report Russian troops at 190,000. Ukraine Population (2022) - worldometers.info
  • 02Mar2022 07:24 (Ukr) ukrinform.net - Canada: No-fly zone would be severe escalation
  • 27Feb2022 moderndiplomacy.eu - China should take a more proactive role in Russia-Ukraine negotiation, by Haoyu "Henry" Huang
  • 28Feb2022 republicworld.com - China Reveals Its Hand! Will Trade With Russia Despite Ukraine War; Slams NATO
  • 01Mar2022 spacenews.com - Russia looks to China for collaboration in space but faces isolation over Ukraine invasion, by Andrew Jones
  • 02Mar2022 14:16 (UkrT) .ukrinform.net - Dozens of volunteers from Japan volunteer to fight for Ukraine
  • To me, a main question for this whole issue is whether Russia has sufficient "control of Western democracy puppets" via [politician, advisor, trade union, military, academic, teacher, media] agents and spies, to do what they have so easily accomplished since 1917 (see the section Russia below). My guess is that they still have this long-term historical capability to influence strategic decisions from within the democracies, and they will use it successfully to get the best risk-balanced gains they can from the current situation. This webPage serves only to provide a fraction of my thinking about the situation between [Russia, Ukraine] in circa January 2022. More extensive work is out of the question, as I would only consider more work after completing the WWII project of my father and I, if I ever even get back to that : "Icebreaker unchained : We should have lost WWII" (directory tree of many files - perhaps start with "Script for Icebreaker Unchained.odt").
  • With a hint of Icebreaker unchained: We should have lost WWII
  • By far the greatest amount of work that I have done on WWII is on the unfinished video project of my father and I, "Icebreaker unchained : We should have lost WWII" (directory tree of many files - perhaps start with "Script for Icebreaker Unchained.odt"). That project cannot reasonably be condensed for the current webPage, nor would I be happy with any simplistic "zero-dimensional" quips to summarize it. I have no interest in catering to that type of reader.
  • I worked on "Icebreaker unchained : We should have lost WWII" (directory tree of many files - perhaps start with "Script for Icebreaker Unchained.odt") until I hit a wall trying to produce animations for our video, and finally in ~2015 my father was no longer able to [read, comprehend] well enough to work on the project. Stephen Yaskell cited us as the "two fools who rushed in" for our previous history project : the rise and fall of civilisations over the last 7,500 years. Yaskell used some of our earlier graphs in his book (see references below). One argument we re-visited several times was the basis for periods of prolonged warfare between nations. "... Great periods of war in history : desperation, affluence, religion, natural quasi-cycles? ..." He favoured religion as the main driver; me, well, as usual, I retain "multiple conflicting hypotheses".
  • My father and I had great fun working on our WWII project "Icebreaker unchained : We should have lost WWII" (directory tree of many files - perhaps start with "Script for Icebreaker Unchained.odt").
  • directory status & updates copyrights
  • Howell
  • 2022-02-11 12:52 Questions about Russia’s war against Ukraine with defense reporter Illia Ponomarenko
    I enjoyed hearing Ponomarenko ...
    2022-03-07 19:41 4,000 people stuck in hot spots outside Kyiv.
    2022-03-07 18:57 Canada to impose further sanctions on Russian individuals.
    2022-03-07 18:34 Man arrested for driving truck into gates of Russian embassy in Dublin.
    2022-03-07 18:19 UK to provide additional $230 million in aid to Ukraine.
    2022-03-07 18:05 Blinken: Ukraine’s using defense support ...
    2022-03-07 17:54 State Emergency Service: At least 13 civilians killed in air strike on bread factory in Kyiv Oblast.
    2022-03-07 17:40 UN: 1,207 civilian casualties of Russia’s war in Ukraine, 406 people killed and 801 injured.
    2022-03-07 17:12 Russian shelling damaged or destroyed 202 schools, 34 hospitals, more than 1,500 residential buildings
    2022-03-07 16:09 Bloomberg: Some EU nations balk at push to advance Ukraine’s membership bid.
    2022-03-07 15:52 Russia claims it will stop war ‘in a moment’ if Ukraine agrees to its demands.
    2022-03-07 15:49 Ukrainian forces have retaken Mykolaiv airport, according to regional governor Vitaliy Kim.
    2022-03-07 15:27 EU ...
    2022-03-07 14:57 Online map shows lines on Ukraine ...
    2022-03-07 14:03 Ukrainian, Russian foreign ministers to meet in Turkey on March 10.
    2022-03-07 13:57 Hungary bans weapon shipments to Ukraine from its territory.
    2022-03-07 13:39 Third round of Ukraine-Russia negotiations to begin at 4 p.m.
    2022-03-07 12:58 Klitschko: ...
    2022-03-07 12:50 Russian government refuses to participate in ICJ hearings on Ukraine.
    2022-03-07 12:24 Zelensky asks world to stop buying Russian petroleum.
    2022-03-07 11:49 Over 140,000 Ukrainians have returned since the start of Russia’s invasion.
    2022-03-07 11:36 State Emergency Service: at least 8 killed in Kharkiv over 24 hours.
    2022-03-07 11:18 Chinese Foreign Minister: China-Russia friendship is ‘rock solid,’ prospects for cooperation extensive.
    2022-03-07 10:30 State Emergency Service: 9 killed, 6 injured in attack on Vinnytsia airport.
    2022-03-07 10:11 Kremlin announces ...
    2022-03-07 09:16 Hostomel village council confirms on March 7 Russian troops killed head of Hostomel territorial community Yury Prylypko.
    2022-03-07 09:07 India continues efforts to evacuate students in eastern Ukraine.
    2022-03-07 07:31 UK injects $100 million into Ukraine’s economy to mitigate financial pressures
    2022-03-07 07:05 Russian artillery pounds the southern Ukrainian city of Mykolaiv overnight on March 7.
    2022-03-07 06:44 New Zealand prepares sanctions law against Russia and Belarus
    2022-03-07 06:38 South Korea issues travel ban to areas in Russia and Belarus near Ukraine,
    2022-03-07 06:03 50 Russian diplomats including their family members return to Moscow from New York City,
    2022-03-07 05:20 Kuleba urges EU and G7 countries to introduce further sanctions against Russia.
    2022-03-07 04:31 Russian forces restrict access to external communication at Zaporizhzhia nuclear power plant,
    2022-03-07 04:29 Russian troops launch a missile strike near the village of Tuzla in Odesa Oblast.
    2022-03-07 04:13 Two Big Four accounting firms KPMG International and PricewaterhouseCoopers to suspend operations in Russia and Belarus.
    2022-03-07 03:38 UK continues to lead effort to suspend Russia from Interpol.
    2022-03-07 02:59 Russia has fired 600 missiles and deployed 95% of its amassed troops into Ukraine,
    2022-03-07 02:35 Ukraine suspends exports of rye, oats, buckwheat, millet, sugar, salt, meat, and livestock,
    2022-03-07 02:11 Eugene Czolij: NATO member countries are targets of Putin’s military aggression and must act accordingly in Ukraine
    2022-03-07 02:02 Hacking group Anonymous interrupts Russian state tv programs with footage of Russia
    2022-03-07 01:48 WSJ: Russia recruiting Syrians skilled in urban combat to be sent to Ukraine to help take Kyiv, according to U.S. officials.
    2022-03-07 01:30 Civilians flee in terror as Ukraine’s military deter Russia in Irpin
    2022-03-07 01:22 Reuters: U.S. does not see imminent Russian amphibious assault of Odesa, according to U.S. official.
    2022-03-06 06:55 Kyiv resident gives birth during war: ‘I forgot about the bombings only in labor’
    2022-03-06 01:55 10 days of suffering. Russia’s war against Ukraine in photos
    2022-03-05 20:39 SBU: Russia planned to create
    2022-03-05 20:33 Ukrainian loses parent to Russian propaganda: ‘I can consider myself an orphan’
    2022-03-05 20:17 Israeli Prime Minister secretly visits Russia, talks to Putin.
    2022-03-05 19:47 Russia announces resuming fighting in Mariupol and Volnovakha.
    2022-03-05 19:42 Payoneer, Paypal, Adobe suspend operations in Russia.
    2022-03-05 19:30 Third round of Ukraine-Russia talks to be held on March 7.
    2022-03-05 18:18 Ukraine
    2022-03-05 17:49 Ukrainian forces take control of Mykolaiv, seize Russian occupiers
    2022-03-05 17:14 Nuremberg Trials prosecutor: Putin should be
    2022-03-05 17:10 Putin says Ukraine could
    2022-03-05 16:53 About 400 civilians evacuated from Volnovakha despite thwarted evacuation.
    2022-03-05 16:30 To combat Russia, Ukraine invites foreign fighters. Here’s how to apply
    2022-03-05 15:03 Media: Italy seizes property worth $150 million from Russian oligarchs.
    2022-03-05 14:32 Hundreds of thousands of households left without gas due to Russia
    2022-03-05 14:24 Russian forces kill Hero of Ukraine, captain Chybineiev.
    2022-03-05 13:56 Russian pilot captured near Chernihiv.
    2022-03-05 13:49 Media: SBU kills member of Ukrainian negotiations team suspected of treason.
    2022-03-05 13:22 Civilians
    2022-03-05 12:32 Zelensky:
    2022-03-05 11:59 Ukraine wins 7 medals on first day of Paralympics.
    2022-03-05 11:35 Kherson citizens rally against Russian occupiers.
    2022-03-05 11:05 Over 66,000 men return to Ukraine amid Russia’s war.
    2022-03-05 10:37 Elon Musk refuses to ban Russian news sources on Starlink satellite internet.
    2022-03-05 10:16 Blinken:
    2022-03-05 09:36 Temporary ceasefire begins in Mariupol and Volnovakha to set up humanitarian corridors.
    2022-03-05 07:56 The staff of the Chernobyl nuclear power plant remain trapped 10 days after it was captured by Russian forces.
    2022-03-05 07:48 ​​In the Bucha district near Kyiv, Russian forces opened fire on a car with civilians.
    2022-03-05 07:41 Canadian Prime Minister Justin Trudeau will visit Europe from March 6-11 to coordinate new sanctions against Russia and show support for Ukraine.
    2022-03-05 07:32 Singapore sanctions Russia in response to its all-out war against Ukraine.
    2022-03-05 07:28 International Boxing Association bans Russian and Belarusian boxers and competition officials from participating in international boxing competitions.
    2022-03-05 06:52 Uniqlo’s parent company to donate $10 million and 200,000 clothing items to UNHCR to support people forced to flee Russia
    2022-03-05 05:48 CNN: Russia is poised to deploy up to 1,000 more mercenaries to Ukraine.
    2022-03-05 05:27 As of March 3, more than 1.2 million refugees have left Ukraine since Russia
    2022-03-05 05:23 Pentagon: Russia has fired more than 500 missiles in the week since its full-scale invasion of Ukraine began.
    2022-03-05 04:11 International Gymnastics Federation bans Russian and Belarusian athletes from competitions starting March 7,
    2022-03-05 03:53 Poll: Around 74% of Americans - including Republicans and Democrats - said the United States and its NATO allies should impose a no-fly zone in Ukraine,
    2022-03-05 03:32 Samsung Electronics said on March 5 that shipments to Russia have been suspended "due to current geopolitical developments."
    2022-03-05 02:41 US Ambassador to the United Nations Linda Thomas-Greenfield said on March 4 at the UN that Russian forces are now 20 miles, and closing, from Ukraine’s second largest nuclear facility
    2022-03-05 01:56 The U.S. Embassy did not send a message saying that there "will be strong shelling tonight," and that Russians will target all civil activists after capturing the city and imposing martial law.
    2022-03-05 00:52 Bloomberg temporarily halts work of its journalists in Russia after Putin criminalizes independent journalism.
    2022-03-05 00:45 Reuters: Ukraine still has a "significant majority" of its military aircraft available nine days after Russian forces started their full-scale invasion of the country,
    2022-03-05 00:26 Armed Forces: Russia fires cluster munition in Pokrovsk, no casualties.
    2022-03-05 00:25 What will the invasion of Ukraine bring for Russia?
    2022-03-05 00:05 Lviv launches protective measures to save cultural heritage from possible airstrikes.
    2022-03-04 23:57 Zelensky condemns NATO
    2022-03-04 23:37 Macron calls for emergency UN Security Council meeting on nuclear safety.
    2022-03-04 23:26 Blinken rejects calls for no-fly zone over Ukraine, sides with NATO.
    2022-03-04 23:01 US
    2022-03-04 22:41 Russia occupies Trostyanets, Sumy Oblast.
    2022-03-04 22:02 Zelensky delivered a televised speech to the people of Europe, asked for support.
    2022-03-04 21:13 Mariupol Mayor pleads for help, asks for humanitarian corridor.
    2022-03-04 21:01 Russia bans Facebook, Twitter.
    2022-03-04 20:15 Ukraine joins NATO’s Cyber Defense Center as contributing participant.
    2022-03-04 19:53 Ukrainian supermarket chains stop selling Coca-Cola products as company continues to operate in Russia.
    2022-03-04 19:36 EIB will give Ukraine 668 million euros in immediate financial support.
    2022-03-04 19:28 Lawmaker Shufrych detained by territorial defense, alleged of helping Russia.
    2022-03-04 18:58 Ukraine
    2022-03-04 18:41 American IT giants Netscout, Autodesk suspend operations in Russia.
    2022-03-04 18:40 A missile fragment, presumably from a Russian missile, falls in the yard of Zelensky
    2022-03-04 18:16 28 children killed, 64 wounded in Russia
    2022-03-04 18:06 Ukraine
    2022-03-04 17:51 As their universities are shelled, fleeing foreign students wait days at borders
    2022-03-04 17:34 NATO chief: Putin underestimated Ukraine
    2022-03-04 16:39 Microsoft suspends sales of its products and services in Russia.
    2022-03-04 16:26 Coming days of war are
    2022-03-04 16:00 Baby lemur born in Kyiv zoo named Bayraktar, after Turkish-made drone used by Ukrainian military.
    2022-03-04 15:45 Russia attacks, captures Europe’s largest nuclear power plant in Ukraine
    2022-03-04 15:25 Belarusian dictator Alexander Lukashenko said on TV that his country isn
    2022-03-04 14:57 In the nation’s darkest hours, Ukrainians look out for each other
    2022-03-04 14:47 Poll: 82% of Ukrainians believe in victory against Russia.
    2022-03-04 14:38 Ukrzaliznytsia evacuates over 1 million people from Ukraine’s hot spots since the start of Russia’s invasion.
    2022-03-04 14:13 Lithuanian PM:
    2022-03-04 13:52 Ukraine
    2022-03-04 13:15 Panasonic suspends transactions with Russia.
    2022-03-04 13:12 Ukraine asks Red Cross to organize humanitarian corridors.
    2022-03-04 12:59 Death toll in attack on Chernihiv rises to 47 people.
    2022-03-04 12:36 UK opposition suggests housing Ukrainian refugees in London
    2022-03-04 11:16 Zelensky: Zaporizhzhia Nuclear Plant disaster would be 6 times worse than Chornobyl.
    2022-03-04 10:40 The Times: Volodymyr Zelensky survives 3 assassination attempts in days.
    2022-03-04 10:14 Ukrainian Navy sinks its flagship to keep it out of Russian hands.
    2022-03-04 09:16 Sergey Fursa: These sanctions need to be imposed against Russia immediately
    2022-03-04 07:45 Ukraine
    2022-03-04 07:40 Home rental platform Airbnb is suspending all operations in Belarus and Russia,
    2022-03-04 06:07 The International Atomic Energy Agency reports that the Ukraine regulator has not detected a change in radiation levels at the Zaporizhzhia Nuclear Power Plant site,
    2022-03-04 05:49 UNESCO calls on Russia to respect Ukraine’s cultural heritage, cessation of attacks on civilian facilities.
    2022-03-04 05:11 The Russian military seized a TV broadcasting tower in Kherson.
    2022-03-04 05:01 Glen Grant: Ukraine stands firm despite Russian offensive
    2022-03-04 04:46 Russia demands Telegram remove bots that search the platform for evidence of Russian servicemen captured or killed in Ukraine.
    2022-03-04 04:29 Latest US sanctions target Russian oligarchs and their families.
    2022-03-04 03:15 Pentagon: 90% of Russian combat power pre-staged at Ukraine’s borders have already entered Ukraine.
    2022-03-04 02:20 Russian forces are firing at the Zaporizhzhia Nuclear Power Station in Enerhodar,
    2022-03-04 00:47 Eugene Czolij: It is in NATO’s best interests to help Ukraine secure its airspace
    2022-03-03 23:18 Russia Today announces that it will cease production in US.
    2022-03-03 22:40 Kuleba urges countries to provide Ukraine with aircraft.
    2022-03-03 22:37 Kyiv under shelling: ‘First thing I heard was my child’s scream’
    2022-03-03 22:10 Center for Defense Strategies: Russia will likely impose martial law on March 4.
    2022-03-03 21:18 Economists say Russian invasion could lead to largest wheat shortage in history.
    2022-03-03 20:56 Zelensky: Close the sky or give us aircraft.
    2022-03-03 20:30 Heavy shelling in Sumy Oblast leaves civilians without utilities.
    2022-03-03 20:00 Ukraine, Russia agree to set up humanitarian corridors.
    2022-03-03 19:20 OSCE invokes mechanism to address potential war crimes, crimes against humanity committed by Russia in Ukraine.
    2022-03-03 19:13 Zelensky tells Putin to leave Ukraine or meet him for talks.
    2022-03-03 19:12 Kuleba: Russia plans to fire rockets at its own territory in false-flag operation.
    2022-03-03 18:47 Q&A with US Chargé d’Affaires Kristina Kvien: ‘From now on, Russia will be a pariah state’
    2022-03-03 18:03 BFMTV:
    2022-03-03 17:50 Parliament approves general mobilization of reservists, passes ‘military package’ laws.
    2022-03-03 16:41 Danilov: Ukrainian forces reach Russian border in Sumy Oblast.
    2022-03-03 16:15 Volkswagen, Ikea suspend business in Russia.
    2022-03-03 15:29 EU to provide Ukraine with additional 1.2 billion euros.
    2022-03-03 15:18 Russian forces strike at residential buildings in central Chernihiv.
    2022-03-03 14:21 Ukraine confirms peace talks with Russia today, on March 3.
    2022-03-03 14:12 Fitch, Moody
    2022-03-03 14:04 Biden asks Congress to approve $10 billion in aid for Ukraine.
    2022-03-03 13:50 Reuters: EU is eyeing disconnecting Belarus from SWIFT.
    2022-03-03 13:40 Kherson regional administration captured by Russia.
    2022-03-03 12:49 Ukraine, Russia to hold new round of peace talks.
    2022-03-03 12:48 Ukrainian Armed Forces: Russia has lost about 9,000 troops.
    2022-03-03 12:12 Ukrainians that can
    2022-03-03 11:48 Zelensky: All lines of our defense preserved.
    2022-03-03 11:27 Zelensky to Russia: Learn the word "reparation."
    2022-03-03 11:14 Deputy FM of Poland: Ukraine could get EU candidate status any day.
    2022-03-03 10:33 Ukraine
    2022-03-03 09:41 Athletes from Russia and Belarus will not be allowed to compete at the 2022 Winter Paralympics in Beijing, the International Paralympic Committee said.
    2022-03-03 08:26 Russian forces seize the southern Ukrainian city of Kherson after taking control of the local council building, Kherson’s mayor, Igor Kolykhaiev, said in a Facebook post on March 2.
    2022-03-03 08:12 Spotify closes its office in Russia in response to its full-scale war in Ukraine.
    2022-03-03 07:05 In the city of Sumy in northeastern Ukraine, a building of the military faculty
    2022-03-03 06:26 Six adults and two children were killed in a residential building
    2022-03-03 05:51 Canada sanctions 10 people in Russia’s energy sector, offers further support to Ukraine.
    2022-03-03 05:49 At least 1 million Ukrainians have fled the country
    2022-03-03 05:06 UN: 752 civilian casualties as a result of Russia
    2022-03-03 04:38 Massive explosions were heard in Kyiv around 4 a.m. local time.
    2022-03-03 04:32 A number of cities and towns in Ukraine, including
    2022-03-03 04:30 Russia
    2022-03-03 03:23 Zelensky:
    2022-03-03 03:07 Canada leads international call to action against Russian state-sponsored disinformation in Ukraine.
    2022-03-03 02:39 Danilov:
    2022-03-03 02:13 BBC to launch shortwave radio service in Ukraine and Russia.
    2022-03-03 01:47 Ukrainians flee war by train, car, on foot
    2022-03-03 01:21 OSCE employee killed in Kharkiv.
    2022-03-03 00:43 Poland asks Ukraine’s new fleet Noosphere to service its Antarctic station.
    2022-03-03 00:26 Ukrainian anti-corruption authorities: No need to declare captured Russian tanks, equipment as income.
    2022-03-03 00:19 Pavlo Lodyn: Putin’s aggression and the threat of environmental catastrophe in Europe
    2022-03-03 00:16 38 countries urge ICC to investigate Russia’s war crimes in Ukraine.
    2022-03-03 00:13 Kherson Mayor Kolykhaev:
    2022-03-03 00:01 UN approves resolution to condemn Russia
    2022-03-02 23:56 Ukrainian special forces will no longer capture Russian artillerymen.
    2022-03-02 23:38 JP Morgan: Russia risks default on public debt.
    2022-03-02 23:33 Russian navy hits foreign ship docked in Mykolaiv Oblast.
    2022-03-02 23:06 Four Russian fighter jets violated Swedish airspace over Baltic Sea.
    2022-03-02 22:34 White House announces new sanctions on Russian, Belarusian companies.
    2022-03-02 22:31 Andriy Guck: No-fly zone is what Ukraine urgently needs to save civilians and cities
    2022-03-02 22:14 Russia fires rockets, debris hits near train station.
    2022-03-02 21:48 Abramovich to sell Chelsea, says will provide funds to victims of war in Ukraine.
    2022-03-02 21:19 Russia reports 498 killed, 1,597 wounded in first report of military casualties.
    2022-03-02 20:09 Kyivans hide in subways as Russian forces bombard the capital
    2022-03-02 19:53 ECHR suspends all procedures that require action from Ukraine.
    2022-03-02 19:32 Armed Forces report interception of Russian planning documents alleging invasion affirmed on Jan. 22.
    2022-03-02 18:46 The EU is introducing an amendment banning shipping euro banknotes into Russia.
    2022-03-02 18:29 Almost 836,000 people fled Ukraine since war started.
    2022-03-02 18:03 Explainer: When can Ukraine get EU membership?
    2022-03-02 17:18 Russian parliament proposes imprisonment for spreading
    2022-03-02 17:12 National Bank: Ukrainians have donated $200 million to support army.
    2022-03-02 16:51 Malta suspends ‘golden passports’ scheme for Russians, Belarusians.
    2022-03-02 15:32 Russia killed more than 2,000 civilian Ukrainians.
    2022-03-02 15:01 EXCLUSIVE: Voice message reveals Russian military unit’s catastrophic losses in Ukraine
    2022-03-02 14:55 Russian missiles hit central Kharkiv again.
    2022-03-02 14:31 Grybauskaite: NATO should fight alongside Ukraine.
    2022-03-02 14:22 EU diplomats approve new sanctions against Belarus.
    2022-03-02 13:50 Up to 15,000 people currently hiding from bombs in Kyiv metro.
    2022-03-02 13:20 Kuleba: Unclear when next round of Ukraine-Russia talks will be held.
    2022-03-02 12:20 Poll: Ukrainians
    2022-03-02 12:18 Russia threatens to raze Konotop to the ground if it doesn’t surrender.
    2022-03-02 12:15 Nord Stream 2 files for bankruptcy.
    2022-03-02 11:55 88% of Ukrainians believe that Ukraine will successfully fight off Russia.
    2022-03-02 11:47 Media: Putin wants to reinstate Yanukovych as president of Ukraine.
    2022-03-02 11:40 Russia’s troops land near Mykolayiv, residents urged to remain indoors.
    2022-03-02 11:36 Unarmed Enerhodar residents block city entrance to Russian troops,
    2022-03-02 10:43 Armed Forces: Russia loses around 5,840 troops.
    2022-03-02 10:11 Oksana Bashuk Hepburn: Dictatorship and isolationism don’t work in the 21st century
    2022-03-02 08:45 A Russian rocket hit police headquarters in Kharkiv, UNIAN news agency reports.
    2022-03-02 08:13 General Staff of Ukraine
    2022-03-02 08:02 Russian rockets hit a military academy in Kharkiv, according to the Hromadske news website.
    2022-03-02 06:46 At least 136 people, including 13 children, have been killed in Ukraine
    2022-03-02 06:28 U.S. airplane manufacturer Boeing announced it is suspending parts,
    2022-03-02 05:50 Ukrainian tennis champion to donate prize money to Ukraine
    2022-03-02 05:45 Russian troops seized the river port and railway station in Kherson,
    2022-03-02 05:12 ExxonMobil, one of the world
    2022-03-02 04:45 Vladimir Putin signs decree to prohibit leaving Russia with more than $10,000
    2022-03-02 04:35 Artem Zdunov, who heads Russia
    2022-03-02 04:30 Biden: US Department of Justice is assembling a task force to go after the crimes of Russian oligarchs and corrupt Russian leaders.
    2022-03-02 04:29 Biden: We will join our allies in closing our airspace to Russian planes.
    2022-03-02 04:03 128 wounded civilians are currently being treated in local hospitals
    2022-03-02 03:57 Russian lawmakers of the Gagarinsky municipal district in Moscow
    2022-03-02 03:24 Russian paratroopers landed in Kharkiv and attacked one of the city’s military medical centers,
    2022-03-02 03:23 The city of Trostyanets in Sumy Oblast, was occupied by Russian forces, journalists report.
    2022-03-02 02:57 Ukraine negotiating humanitarian corridors for medicine, food.
    2022-03-02 02:48 Belarus troops are on standby for deployment
    2022-03-02 02:34 346 residents evacuated from Volnovakha, Donetsk oblast today.
    2022-03-02 02:23 About 80 percent of the troops that Russia massed along Ukraine’s borders
    2022-03-02 02:14 Russian troops in Crimea refuse to take part in Ukraine invasion.
    2022-03-02 02:13 A Ukrainian doctor was killed on March 1 while transporting her wounded nephew to the hospital,
    2022-03-02 02:10 Estonia, Latvia to send Ukraine Javelins, ammunition, fuel.
    2022-03-02 02:08 Zaluzhny: Russia has lost tactical initiative, offensive has slowed.
    2022-03-02 01:49 IMF and World Bank preparing $4.4 billion aid package for Ukraine,
    2022-03-02 01:40 Zhytomyr under Russian bombardment, residential area destroyed.
    2022-03-02 01:23 Georgian opposition demands resignation of Prime Minister Garibashvili.
    2022-03-02 01:10 Ukraine destroys Russian column near Mykolaiv.
    2022-03-02 01:03 Over 100 western diplomats walk out of speech by Russian Foreign Minister Sergey Lavrov at UN Human Rights Council.
    2022-03-02 00:55 Apple suspends product sales in Russia, limits Apple Pay
    2022-03-02 00:55 U.K. introduces further economic sanctions on Russia.
    2022-03-02 00:52 Russia bans leading opposition media
    2022-03-01 23:53 Ukraine opens 2 more refugee crossings into Poland
    2022-03-01 23:28 Russia bombs Kyiv neighborhoods, Vyshneve city outside of the capital,
    2022-03-01 22:46 International Court of Justice to hold hearings over Russia’s war in Ukraine.
    2022-03-01 22:42 UK imposes sanctions on Belarus for its role in invasion of Ukraine.
    2022-03-01 22:14 World’s biggest container lines – Maersk, CMA, and MSC – suspend shipping to and from Russia.
    2022-03-01 22:12 80,000 Ukrainians came back from abroad since Russia invaded further.
    2022-03-01 20:58 Canada closes ports, territorial waters to Russian ships.
    2022-03-01 20:30 Seven killed, 24 injured in central Kharkiv rocket strike.
    2022-03-01 20:19 CNN: Russia fired more than 400 missiles on Ukraine.
    2022-03-01 19:59 Eight television channels back up after interruption following Russian attack on Kyiv
    2022-03-01 19:01 BREAKING: European Parliament recommends giving Ukraine EU candidate status.
    2022-03-01 18:42 Danilov: Team of elite Chechen forces, sent to assassinate Zelensky, has been eliminated.
    2022-03-01 18:31 Yermak: Russia hits Babyn Yar, Holocaust memorial site.
    2022-03-01 18:28 TV tower attack kills at least 5, injures 5.
    2022-03-01 18:11 Finance Ministry: Ukraine raises Hr 8 billion from first sale of military bonds.
    2022-03-01 18:01 Interfax: China is ready to help end war in Ukraine.
    2022-03-01 17:30 Nord Stream 2 operator on the verge of bankruptcy.
    2022-03-01 17:19 Russia strikes at TV tower in Kyiv.
    2022-03-01 16:57 European Court of Human Rights ordered Russia to stop bombing and shelling civilian targets in Ukraine,
    2022-03-01 16:31 Explosion heard in Kyiv.
    2022-03-01 16:00 Ukraine, Russia hold first prisoner exchange.
    2022-03-01 15:02 The European Parliament received Ukraine
    2022-03-01 14:40 Zelensky gives emotional speech at European Parliament, talks about shelling, accuses Russia of killing children.
    2022-03-01 14:39 BREAKING: EU disconnects key Russian banks from SWIFT.
    2022-03-01 14:28 Zelensky speaks at European Parliament, asks for support.
    2022-03-01 13:35 Sumy Oblast Governor: Russian tanks move in with white flags, shoot at civilians.
    2022-03-01 12:56 Zelensky appoints General Zhernov as head of Kyiv military administration.
    2022-03-01 12:43 Zelensky: Launching a rocket at the central square of Kharkiv is an outright, undisguised terror.
    2022-03-01 12:19 Russian troops entered Kherson.
    2022-03-01 12:12 Indian student killed in Kharkiv shelling, according to Indian foreign ministry.
    2022-03-01 11:57 Suspilne: Belarusian troops have entered Ukraine.
    2022-03-01 11:34 Hundreds of thousands of refugees flee Ukraine as war rages on
    2022-03-01 10:55 State Emergency Service: 6 people injured in central Kharkiv rocket strike, including one child.
    2022-03-01 10:12 Ukraine conducts successful test of SpaceX
    2022-03-01 09:25 Taiwan joins other countries in blocking some Russian banks from SWIFT.
    2022-03-01 09:07 Russian forces have struck Freedom Square in central Kharkiv with a powerful explosion.
    2022-03-01 08:00 UK government orders ports to block Russian vessels.
    2022-03-01 07:48 Russian forces shell rehabilitation center of the Ministry of Veterans Affairs of Ukraine in the village of Borodyanka near Kyiv.
    2022-03-01 07:38 CNN: Overstretched Russian forces could struggle to hold Ukraine at current levels, expert predicts.
    2022-03-01 07:34 A video shared by UNIAN news agency shows dozens of Russian soldiers
    2022-03-01 07:30 Russian shelling hits maternity hospital near Kyiv,
    2022-03-01 07:19 Journalist: Kherson almost completely surrounded by Russian troops.
    2022-03-01 07:12 CNN: Australia to send missiles to Ukraine as part of $50 million support package,
    2022-03-01 07:07 Ukrainian fighter jets intercepted and shot down two Russian planes
    2022-03-01 06:57 Microsoft
    2022-03-01 06:48 Roskomnadzor, Russia’s state communications regulator, is blocking sites that write about Russia’s invasion of Ukraine.
    2022-03-01 06:41 More than 500,000 refugees have fled Ukraine amid Russia
    2022-03-01 06:39 Shelling of a military unit in Okhtyrka, killing at least 70 Ukrainian soldiers,
    2022-03-01 06:34 Ukraine to receive 70 fighter jets from the EU.
    2022-03-01 06:28 Netflix will not comply with new Russian rules to carry 20 state-backed channels,
    2022-03-01 06:24 Mastercard blocks multiple financial institutions as a result of sanctions imposed on Russia over its invasion of Ukraine.
    2022-03-01 05:23 Disney stops releasing films in Russia, condemns Moscow’s invasion of Ukraine.
    2022-03-01 04:31 Finland to support Ukraine with weapons and ammunition in a shift of policy.
    2022-03-01 04:23 Japan to freeze assets of Russian leaders, three financial institutions.
    2022-03-01 03:56 Satellite imagery collected by Maxar Technologies suggests
    2022-03-01 03:05 Kherson, a major city in southern Ukraine, is under attack,
    2022-03-01 02:59 Ukrainian tennis player Elina Svitolina said she would not play against her Russian competitor Anastasia Potapova
    2022-03-01 02:12 Russian ballistic missile attack destroys three residential buildings in Kyiv Oblast.
    2022-03-01 01:43 Armed Forces: Russia to use Belarus troops in war against Ukraine.
    2022-03-01 00:54 ICC prosecutor to investigate war crimes in Ukraine.
    2022-03-01 00:44 President Zelensky orders to temporarily lift visas for foreigners wishing to join the International Legion and fight for Ukraine against Russia.
    2022-02-28 23:55 Canada to impose new sanctions prohibiting all imports of Russian crude oil.
    2022-02-28 23:16 Elon Musk
    2022-02-28 23:04 Boxer Usyk joins Territorial Defense Forces.
    2022-02-28 22:54 EU accepts to integrate Ukraine into its electricity network.
    2022-02-28 22:47 Kremlin warns against supplying lethal weapons to Ukraine.
    2022-02-28 22:41 Shell to divest from Gazprom, Nord Stream 2 amid Russia
    2022-02-28 22:37 EU imposes sanctions on Peskov, 26 high-profile Russian oligarchs.
    2022-02-28 21:15 Ukraine to issue military bonds, first auction on March 1.
    2022-02-28 21:02 Eugene Czolij: A no-fly zone needs to be enforced now over Ukraine’s airspace
    2022-02-28 20:53 FIFA, UEFA suspend Russia from all football competitions.
    2022-02-28 20:14 UK to provide additional military support to Ukraine soon.
    2022-02-28 20:05 First round of Ukraine-Russia negotiations inconclusive as casualties pile up
    2022-02-28 19:41 NATO rules out
    2022-02-28 19:38 Zelensky signs Ukraine
    2022-02-28 19:06 Kharkiv barraged with rockets, 11 killed
    2022-02-28 18:42 Hungary rejects sending arms to Ukraine, blocks transit.
    2022-02-28 18:21 US unleashes further sanctions on Moscow, targets Russia’s central bank.
    2022-02-28 18:06 Ukrainians transfer $50 million in 3 days to support the army,
    2022-02-28 17:42 Several thousand foreigners applied to fight for Ukraine.
    2022-02-28 17:38 Pivdennyi mayor in Kharkiv Oblast detained on suspicion of high treason.
    2022-02-28 17:37 Sanctions likely to seriously hurt Russia. But they may not stop the war
    2022-02-28 17:26 Switzerland adopts EU sanctions, freezes Putin
    2022-02-28 17:01 Kupyansk mayor in Kharkiv Oblast charged with treason.
    2022-02-28 16:23 Russia launched 6 missile strikes, 4 air strikes on Feb. 27,
    2022-02-28 16:13 Japan joins Western sanctions against Russia
    2022-02-28 16:06 Kharkiv resident killed in Russia’s latest attack.
    2022-02-28 15:57 New curfew in Kyiv will last from 8 p.m. until 7 a.m.
    2022-02-28 15:20 JPMorgan: Russian economy to shrink by a fifth in second quarter.
    2022-02-28 15:06 National Bank of Ukraine receives Hr 1 billion ($33 million) in support for armed forces.
    2022-02-28 14:53 Ukrainian defenders of Zmiinyi Island in Black Sea, presumed dead in a Russian bombardment, are alive and held captive,
    2022-02-28 14:03 London-traded shares of Russia
    2022-02-28 13:40 Oil depot burns after artillery shelling in Okhtyrka, Sumy Oblast.
    2022-02-28 13:06 Russian soldiers offered 5 million rubles in cryptocurrency for surrendering to Ukrainian army.
    2022-02-28 12:53 Russian forces burn museum with paintings of Maria Prymachenko.
    2022-02-28 12:25 Zelensky: Ukraine to release prisoners with combat experience.
    2022-02-28 11:55 Zelensky addressed Russian soldiers: “Leave."
    2022-02-28 11:39 Zelensky calls on EU to give Ukraine membership.
    2022-02-28 11:28 Russia’s casualties as of Feb. 28: 5,300 troops (to be confirmed).
    2022-02-28 10:57 Ukrainian delegation has arrived at the border with Belarus for peace talks with Russia.
    2022-02-28 10:05 The Times: Russian mercenaries tasked to kill Zelensky.
    2022-02-28 09:21 Russian Central Bank raises borrowing rates from 9.5% to 20%.
    2022-02-28 08:36 General Staff: Russia reduced offensive pace, still trying to develop success.
    2022-02-28 08:29 UK: Zelensky believes next 24 hours crucial period for Ukraine.
    2022-02-28 08:02 Zaluzhnyi: Russia used Iskander missile systems to attack Zhytomyr.
    2022-02-28 07:58 Grocery stores and public transport will open in Kyiv starting at 8 a.m.
    2022-02-28 07:34 The situation in Kyiv is under Ukrainian control, according to Ukraine’s Armed Forces.
    2022-02-28 07:21 Russian forces carried out missile strikes across Ukraine overnight.
    2022-02-28 06:39 CNN: Google Maps has blocked two features in Ukraine
    2022-02-28 06:18 Russian ruble hits record low,
    2022-02-28 05:50 Blasts are reported in Kyiv, Kharkiv,
    2022-02-28 05:37 A missile strikes an apartment building in Chernihiv,
    2022-02-28 04:36 Here’s how to support the Ukrainian military
    2022-02-28 03:57 The general staff of Ukraine’s armed forces says Russia suffers significant losses.
    2022-02-28 03:13 Sources: Belarus to join Russia’s war on Ukraine within hours
    2022-02-28 02:48 Belarus will renounce its non-nuclear and neutral status,
    2022-02-28 02:26 McDonald’s and KFC offer food assistance amid Russian invasion
    2022-02-28 02:18 Belarus will renounce its non-nuclear and neutral status, allowing Russia to place nuclear weapons on its territory, as a result of the referendum held today.
    2022-02-28 01:54 State of New York orders state entities to stop doing business with Russia,
    2022-02-28 01:27 Putin
    2022-02-28 00:54 Victor Tregubov: Are you a foreigner who wants to help Ukraine? Here’s how
    2022-02-28 00:07 Von der Leyen: Ukraine is “one of us and we want them in.”
    2022-02-27 23:58 Borrell: EU countries to send ‘fighter jets’ to Ukraine.
    2022-02-27 23:11 U.S. condemns Putin’s nuclear order, considers imposing more sanctions.
    2022-02-27 22:44 Lyashko: 16 children killed in Russia
    2022-02-27 22:28 Germany to build two port terminals for liquefied natural gas (LNG) to reduce its energy dependency on Russia.
    2022-02-27 22:14 Petition appears online demanding Putin
    2022-02-27 21:55 70% believe in Ukraine’s victory, 91% support Zelensky,
    2022-02-27 21:55 Klitschko
    2022-02-27 21:01 100,000 Ukrainians mobilized in 2 days.
    2022-02-27 20:56 EU imposes sanctions on Belarus, bans petroleum imports, sharing technologies.
    2022-02-27 20:29 US to provide Ukraine $54 million in humanitarian assistance.
    2022-02-27 20:25 Belarus missile ruins historical building in Chernihiv.
    2022-02-27 19:53 EU to finance purchase, delivery of weapons and equipment for Ukraine.
    2022-02-27 19:42 EU bans Russia Today, Sputnik.
    2022-02-27 19:09 Missile from Belarusian territory fired at Zhytomyr airport in central Ukraine,
    2022-02-27 18:50 US sanctions target computer chip sales to Russia.
    2022-02-27 18:46 EU shuts its airspace to all Russian-owned, Russian-registered, or Russian-controlled aircraft,
    2022-02-27 18:35 Legendary Mriya aircraft ruined in a Russian attack near Kyiv.
    2022-02-27 18:28 Ukraine to pay $3,400 (Hr 100,000) per month to military personnel.
    2022-02-27 18:15 Zelensky doesn
    2022-02-27 17:34 Russian oligarch Deripaska:
    2022-02-27 17:32 Kuleba: Ukraine will not capitulate.
    2022-02-27 17:18 FM: Putin
    2022-02-27 16:19 Human Rights Ombudsman: 210 Ukrainians killed by Russian assault.
    2022-02-27 15:36 BREAKING: Ukraine confirms peace talks with Russia today.
    2022-02-27 15:31 BREAKING: Putin orders Russian nuclear deterrent forces on alert.
    2022-02-27 15:29 Amid fierce defense, Ukraine foils Russian blitz victory plans
    2022-02-27 15:27 Reports suggest Belarus prepares to send its troops to Ukraine, joining Putin in war
    2022-02-27 15:02 Ukraine files lawsuit against Russia at The Hague.
    2022-02-27 14:56 A loud explosion was heard in Kyiv center at 2:54 p.m.
    2022-02-27 14:48 Russian oligarch Fridman:
    2022-02-27 14:41 Japan joins Western allies to eject selected Russian banks from SWIFT.
    2022-02-27 14:37 Klitschko: 9 civilians, including one child, killed in Kyiv since Feb. 24.
    2022-02-27 14:33 Zelensky asks Switzerland to hold ceasefire talks in Geneva on Feb. 28, Ukrainska Pravda reported.
    2022-02-27 14:09 Ukraine has full control over Kharkiv, Governor of Kharkiv Oblast Oleh Synegubov says.
    2022-02-27 13:52 Three large explosions heard near central Kyiv around 1:37 p.m.
    2022-02-27 13:38 Russia gives Ukraine a deadline of 3 p.m. on Feb. 27 to decide whether to meet for talks in Belarus.
    2022-02-27 12:53 Truss: No talks until Russia withdraws troops from Ukraine.
    2022-02-27 12:33 6 babies born in Ukraine’s bomb shelters.
    2022-02-27 12:12 Lukashenko confirms rockets fired at Ukraine from Belarus, threatens to join war on Ukraine.
    2022-02-27 12:00 Ukrainian Defense Ministry: Approximately 4,300 Russian troops killed so far.
    2022-02-27 11:48 Ukraine launches website to help Russian families find their relatives killed in combat.
    2022-02-27 11:32 Russians shoot at a bus with civilians in Okhtyrka district of Sumy Oblast
    2022-02-27 11:11 Presidential advisor: Russia trying to put Ukraine into an "unacceptable ultimatum."
    2022-02-27 11:07 Ukrainian border guards open 2 more 24-hour crossing points to Hungary.
    2022-02-27 10:55 Ministry of Digital Transformation crowdfunds over $14 million for Ukrainian military,
    2022-02-27 10:52 Czech Republic makes it illegal to openly support Russia’s war on Ukraine.
    2022-02-27 10:48 International Judo Federation strips Putin of honorary status.
    2022-02-27 10:43 National Bank crowdfunds $15 million for Ukrainian military.
    2022-02-27 10:19 Ukrainian military blow up bridge outside of Kyiv to slow down Russian troops.
    2022-02-27 10:09 Zelensky says Ukraine is ready for talks with Russia, but only in a country "from which missiles aren
    2022-02-27 09:55 A referendum on constitutional amendments in Belarus, held today, Feb. 27, is set to allow Russia to place nuclear weapons on the territory of Belarus.
    2022-02-27 09:38 Two Danish journalists shot in Okhtyrka.
    2022-02-27 09:32 Russian delegation arrived in Belarus for talks with Ukraine.
    2022-02-27 09:08 A 9-story residential building in Bucha was hit by a Russian strike, casualties are unknown.
    2022-02-27 09:05 Zaluzhnyi: Ukraine’s air defense shot down cruise missile launched at Kyiv.
    2022-02-27 09:02 Finland latest country to close its airspace to Russian planes.
    2022-02-27 08:32 A large column of Russian vehicles is pushing into the city of Sumy from the side of Khimprom,
    2022-02-27 08:26 Kharkiv governor reports that heavy fighting is taking place inside the city.
    2022-02-27 07:45 Russian troops have entered Kharkiv.
    2022-02-27 07:09 Kyiv is under control of Ukrainian military and territorial defense forces.
    2022-02-27 07:04 Defense Minister Oleksii Reznikov praises Ukraine
    2022-02-27 06:51 Snake Island defenders who told Russian navy ‘go f*ck yourself’ may still be alive.
    2022-02-27 06:36 Russian military launches drones in Odesa.
    2022-02-27 06:30 Nataliia Steblyna: Everyone is important in the war against Putin
    2022-02-27 06:25 Six people, including a seven-year-old girl, killed in Russian shelling of Okhtyrka, in Sumy Oblast in northeastern Ukraine, Governor Dmitry Zhivitsky said.
    2022-02-27 05:57 Mykolaiv mayor confirms the city is under Ukraine’s control.
    2022-02-27 05:12 The destruction of a convoy of Chechen special forces near Hostomel on Feb. 26 officially confirmed by the President
    2022-02-27 04:54 CNN: Six-year-old boy killed in Kyiv clashes, several more Ukrainian civilians wounded,
    2022-02-27 04:51 Germany to close airspace to Russian planes.
    2022-02-27 04:50 Vasylkiv Air Base remains a hot spot.
    2022-02-27 04:24 Protesters around the world planning rallies in support of Ukraine (UPDATING)
    2022-02-27 04:07 Russian forces fired at radioactive waste disposal site in Kyiv.
    2022-02-27 03:52 After three days of Russia
    2022-02-27 03:36 UN: At least 240 civilian casualties in Ukraine since Russia’s invasion began on Thursday.
    2022-02-27 03:09 Macron asks Lukashenko to quickly order Russian troops to leave.
    2022-02-27 03:05 One woman killed in the shelling of a residential building in Kharkiv.
    2022-02-27 02:48 Ukraine
    2022-02-27 02:38 Number of reservists of the Territorial Defense Forces reached 37,000,
    2022-02-27 02:32 Kyiv administration: Kyiv residents must close their windows tightly.
    2022-02-27 02:17 Russians blow up a gas pipeline in Kharkiv.
    2022-02-27 01:54 U.S. and its allies to expel certain Russian banks from SWIFT.
    2022-02-27 01:51 Ukrainian military blows up 56 Russian fuel tanks in Chernihiv Oblast, depriving units of combat capability.
    2022-02-27 01:42 Russia’s war on Ukraine: Where fighting is on now (Feb. 27 live updates)
    2022-02-27 01:28 Oil depot catches fire in Vasylkiv, 40 kilometres south of Kyiv.
    2022-02-27 01:23 CNN: Two massive explosions in Kyiv.
    2022-02-27 01:10 Individuals who draw marks on Ukraine
    2022-02-27 01:02 Elon Musk
    2022-02-27 00:59 Column of Russian special forces "Kadyrovites" defeated near Hostomel.
    2022-02-27 00:46 European Commission President Ursula von der Leyen on latest round of sanctions:
    2022-02-27 00:19 BREAKING: German government says allies cutting Russia out of SWIFT.
    2022-02-26 23:58 Lithuania to ban Russian airlines from its airspace from midnight tonight,
    2022-02-26 23:39 The Economist: Ukraine inflicted more casualties in 24 hours than Russia suffered over eight years of engagements in Syria.
    2022-02-26 23:37 Air raid sirens activated in Kyiv as multiple reports suggest that Russian forces are about to conduct heavy airstrikes and rocket attacks against the city.
    2022-02-26 23:07 YouTube blocks RT, other Russian channels from monetizing.
    2022-02-26 23:00 Ukraine
    2022-02-26 22:34 Governor says Russian tanks attacked Mykolayiv.
    2022-02-26 22:28 Government to begin giving seized property to armed forces.
    2022-02-26 22:08 Russian artillery fire has struck Kyiv
    2022-02-26 22:03 ATB supermarkets located underground are opened as bomb shelters.
    2022-02-26 21:56 U.S. to provide Ukraine additional $350 million of military assistance.
    2022-02-26 21:47 Russian losses today in its war against Ukraine:
    2022-02-26 21:39 Kyiv central railway station goes dark.
    2022-02-26 20:46 Germany to send 1,000 unspecified anti-tank weapons and 500 FIM-92 Stinger air defense missiles.
    2022-02-26 20:42 President
    2022-02-26 20:09 Lviv-based Pravda brewery switches to making Molotov cocktails.
    2022-02-26 20:04 Instagram begins labeling accounts of Russian state-controlled media, which promulgate propaganda and disinformation.
    2022-02-26 20:00 City council: Russian troops seize Berdiansk Airport on Azov Sea coast.
    2022-02-26 19:59 Russian soldiers fire at ambulance, kill 2, injure 1 near Kherson.
    2022-02-26 19:20 Bloomberg: US considers sanctions on Russia
    2022-02-26 19:17 Belgium to supply Ukraine with 2,000 machine guns, 3,800 tons of fuel.
    2022-02-26 18:58 Twitter stops registration of new accounts from Russian territory,
    2022-02-26 18:55 Russian forces fire at bus, kill 5 and injure 6 in Kharkiv Oblast.
    2022-02-26 18:45 Ukrzaliznytsia: All rail links to Russia destroyed.
    2022-02-26 18:45 Russian shell hits a house, kills 3 people in Kyiv Oblast.
    2022-02-26 18:42 Germany allows supplying weapons to Ukraine in major turnaround.
    2022-02-26 18:29 Ukrainian military destroys Russian Su-30 fighter aircraft in Black Sea.
    2022-02-26 18:19 Poland: EU should speed up Ukraine
    2022-02-26 18:01 Russia says it will launch offensive on all fronts in Ukraine.
    2022-02-26 17:33 Ukrainian armed forces: Russian warship accidentally shoots down Russian military aircraft.
    2022-02-26 17:19 Minister of Digital Transformation calls on YouTube, Meta, Netflix to confront Russian propaganda.
    2022-02-26 17:08 State Telecommunications Service: Kremlin website down.
    2022-02-26 16:44 Russia
    2022-02-26 16:35 Reuters: Kadyrov confirms deployment of Chechen forces to Ukraine.
    2022-02-26 16:32 Kyiv curfew extended until 8 a.m. on Feb. 28.
    2022-02-26 16:28 Preparations begin to cut Russia off SWIFT.
    2022-02-26 15:40 Eugene Czolij: Expelling all Russian banks from SWIFT must be done now
    2022-02-26 15:17 Ukrzaliznytsia running evacuation trains from Kyiv.
    2022-02-26 15:01 Russian forces in occupied Donetsk are handing out gas masks to their troops and local militants,
    2022-02-26 14:44 Kyiv residents calm after heavy night fighting
    2022-02-26 14:43 Russian navy blocks off part of Black Sea.
    2022-02-26 14:19 Reznikov: After failure, Russia changing tactics.
    2022-02-26 14:14 Polish PM: Hungary supports ejecting Russia from SWIFT.
    2022-02-26 14:13 Kyiv curfew extended to 5 p.m. - 8 a.m.
    2022-02-26 12:40 Bridge blown up on Kyiv-Zhytomyr highway.
    2022-02-26 12:39 Netherlands to send 200 Stinger missiles to Ukraine.
    2022-02-26 11:38 Cyprus, Italy change their position and support cutting Russia off SWIFT.
    2022-02-26 11:02 FM Dmytro Kuleba calls on world to fully isolate Russia after missile hit residential building in Kyiv last night.
    2022-02-26 10:53 Klitschko: Kyiv metro now only working as bomb shelter.
    2022-02-26 10:11 Five people including 2 children injured in Russia’s attacks on Kyiv last night.
    2022-02-26 10:02 Kherson mayor: City under Ukrainian control.
    2022-02-26 09:51 President
    2022-02-26 09:40 Ukrainian soldiers win back Kyiv Hydroelectric Power Plant.
    2022-02-26 09:27 Zelensky says weapons en route to Ukraine after Macron call.
    2022-02-26 09:19 Russia faces heavy losses as it attacks Ukraine on all fronts
    2022-02-26 08:41 Missile strikes an apartment building in Kyiv.
    2022-02-26 08:39 A baby was born overnight on Feb. 25 in the Kyiv subway, now being used as a bomb shelter.
    2022-02-26 08:00 A warehouse of Kyivenergo, the capital’s energy generating company, was set on fire.
    2022-02-26 08:00 Russia loses 3,500 troops, nearly 200 kept hostage.
    2022-02-26 07:47 Ukraine repels Russian forces in Vasylkiv near Kyiv.
    2022-02-26 07:35 UN: Roughly 100,000 Ukrainians have left their homes, several thousand flee abroad to escape Russia’s war.
    2022-02-26 07:34 US approves $600 million aid for Ukraine, including $350 million for military.
    2022-02-26 07:18 Russian aircraft flies over Konotop, Sumy Oblast, Suspilne media
    2022-02-26 06:49 Ukraine’s 101st brigade destroys a column of Russian forces.
    2022-02-26 06:46 Ukraine intercepts Russian drone in Black Sea.
    2022-02-26 06:30 President Volodymyr Zelensky is personally heading the defense of Kyiv.
    2022-02-26 06:17 Russia’s war on Ukraine: Where fighting is on now (Feb. 26 live updates)
    2022-02-26 05:20 Security Council Secretary Danilov:
    2022-02-26 05:18 Armed Forces of Ukraine successfully repel an attack by Russian forces on Mykolaiv,
    2022-02-26 04:26 An attack on a military unit in Kyiv has been repelled by Ukraine
    2022-02-26 04:07 Yet another Russian Il-76 transporter downed.
    2022-02-26 03:58 Washington Post: U.S. prepared to evacuate Zelensky
    2022-02-26 03:40 Explosions, gunfire in Kyiv’s Shulyavka, Kyiv Zoo, Beresteiska areas.
    2022-02-26 03:36 Reuters: White House asks Congress for $6.4 billion for Ukraine crisis.
    2022-02-26 03:29 Kazakhstan denies Russia
    2022-02-26 03:23 Russian saboteurs disguised as the National Police drove up to a checkpoint near Vasylkiv south of Kyiv
    2022-02-26 02:58 Russian invaders attempt to land in Vasylkiv, Kyiv Oblast.
    2022-02-26 02:45 More shelling reported near power plant in northern Kyiv.
    2022-02-26 01:32 Ukraine’s air defense downs a Russian close support aircraft and a helicopter in Donbas.
    2022-02-26 01:27 Ukraine’s air defense downs a Russian transporter carrying paratroopers.
    2022-02-26 01:10 Russia blocks UN Security Council resolution condemning invasion of Ukraine.
    2022-02-26 00:24 Zelensky:
    2022-02-25 23:53 Ukraine ready to negotiate with Russia.
    2022-02-25 23:26 U.S. to impose sanctions on Russia’s President Vladimir Putin and Foreign Minister Sergey Lavrov.
    2022-02-25 22:31 NATO member states to provide more weapons to Ukraine.
    2022-02-25 21:51 Kyiv home guard provided with NLAW tank killers.
    2022-02-25 21:21 Klitschko: Five explosions occurred near Troieshchyna power station in Kyiv.
    2022-02-25 20:49 Eugene Czolij: The West must act now
    2022-02-25 20:42 NATO Response Force activated for first time in history.
    2022-02-25 20:36 Ukrainian Territorial Defense forces are still in Russian-occupied Sumy.
    2022-02-25 20:24 BBC: EU freezes Putin and Lavrov
    2022-02-25 20:01 The Russian government has decided to “partially restrict” access to Facebook
    2022-02-25 19:39 Zelensky addresses Ukrainians in the streets of Kyiv.
    2022-02-25 19:19 Russia barred from Eurovision.
    2022-02-25 18:55 Russia loses rights of representation in the Council of Europe
    2022-02-25 18:38 Kyiv City administration issues warning about painted markings on roofs.
    2022-02-25 18:27 France backs cutting Russia from SWIFT.
    2022-02-25 17:43 Amnesty International: Russian military commits ‘indiscriminate attacks’ during invasion of Ukraine.
    2022-02-25 17:28 Russia
    2022-02-25 16:04 Ukraine reports over 1,000 Russian soldiers killed on invasion
    2022-02-25 15:08 Peskov: Russia ready to send delegation to Minsk to negotiate with Ukraine.
    2022-02-25 15:02 Ukraine
    2022-02-25 14:52 Klitschko: Kyiv has entered defense phase, enemy wants to destroy the capital.
    2022-02-25 14:38 Russian rockets hit shelter, kindergarten in Okhtyrka, Ukraine’s northeastern Sumy Oblast.
    2022-02-25 14:01 Zelensky:
    2022-02-25 13:45 Ukraine’s ex-Deputy Prosecutor General asks to collect evidence of Russia’s human rights violations during its invasion.
    2022-02-25 13:40 Zelensky says he
    2022-02-25 13:32 Volunteers create website to help Ukrainians fleeing the war settle in western Ukraine.
    2022-02-25 12:40 Ukrzaliznytsia will run an evacuation Intercity+ train to Kharkiv, limit service after.
    2022-02-25 12:16 Mariupol mayor: Fierce combat in all directions around the city.
    2022-02-25 11:38 UEFA strips St. Petersburg of Champions League final, moves the flagship match to Paris.
    2022-02-25 11:09 Where Russia is attacking Ukraine today (Feb. 25)
    2022-02-25 10:43 Ukrainian President Volodymyr Zelensky remains in Kyiv.
    2022-02-25 10:23 UK Defense Minister: Russian President Vladimir Putin intends "to invade the whole of Ukraine."
    2022-02-25 10:10 Russia’s forces have entered the Obolon district in Kyiv.
    2022-02-25 09:38 Reuters: Ukrainian government seeks hackers to counter cyber warfare.
    2022-02-25 09:35 Senior Ukrainian official: Russian troops will enter Kyiv outskirts today.
    2022-02-25 09:16 The Russian military seized two vehicles of the Armed Forces of Ukraine,
    2022-02-25 09:10 Russian forces are not letting the Red Cross enter Schastia,
    2022-02-25 08:59 Ukraine’s military successfully defending the area near Chernihiv.
    2022-02-25 08:35 Zelensky: 137 Ukrainians killed, 316 injured since Russia declared war.
    2022-02-25 08:33 Ukrainian-Canadians come together in support of their ancestral land
    2022-02-25 08:16 Russian forces might enter Vorzel and surrounding villages
    2022-02-25 07:41 French President Emmanuel Macron says he
    2022-02-25 07:38 Warning sirens go off in Kyiv,
    2022-02-25 07:10 EU unveils second package of sanctions on Moscow, targets Russia’s access to key capital.
    2022-02-25 06:49 Russian rocket strikes the territory of Rivne airport,
    2022-02-25 06:31 Russia’s plan to seize Kyiv, according to Ukrainska Pravda intelligence sources:
    2022-02-25 06:09 Canada’s new sanctions target banks and Putin
    2022-02-25 06:00 "It is unlikely that Russia has achieved its planned Day 1 military objectives,"
    2022-02-25 05:50 Russia already lost around 800 men, Ukraine
    2022-02-25 05:40 NATO will hold an extraordinary virtual summit on Feb. 25, 4 p.m. Kyiv time
    2022-02-25 05:27 Two residential buildings in Kyiv are on fire from intercepted unidentified enemy aircraft.
    2022-02-25 05:24 Kyiv sustains heavy missile, aircraft assault overnight.
    2022-02-25 04:46 Kyiv residents report loud explosions.
    2022-02-25 04:44 France will give Ukraine 300 million euros in aid and military equipment, Le Monde reports.
    2022-02-25 04:29 Two Russian soldiers surrender,
    2022-02-25 04:23 Russia detains over 1,700 protesters in 58 cities during anti-war demonstrations.
    2022-02-25 04:05 Russian military establish road checkpoints on Kyiv-Sumy highway.
    2022-02-25 03:42 Russian forces hold Konotop under siege, moving to Kyiv.
    2022-02-25 03:39 Hacktivist organization Anonymous declared cyber war on Russia.
    2022-02-25 03:07 Almost 24 hours since Russia
    2022-02-25 03:05 Zelensky established commander-in-chief wartime headquarters.
    2022-02-25 02:57 Blinken: Russia plans to encircle Kyiv.
    2022-02-25 02:57 Patreon deleted the page of Ukraine-based Come Back Alive charity.
    2022-02-25 02:43 Major Ukrainian cities threatened by early morning offensive.
    2022-02-25 02:04 13 border guards were killed defending Ukraine’s Zmiinyi Island (Snake Island) in the Black Sea.
    2022-02-25 01:41 Psaki: U.S. ready to accept Ukrainian refugees.
    2022-02-25 01:24 Russian invaders abandon armoured vehicles, flee from Okhtyrka, Sumy oblast.
    2022-02-25 01:08 Zelensky: We are neither afraid of Russia nor talking with Kremlin.
    2022-02-25 01:02 Russian saboteurs have entered Kyiv, Zelensky warned.
    2022-02-25 00:06 Ukraine announces general mobilization.
    2022-02-24 23:53 Zelensky calls on EU to suspend Russia from SWIFT, stop trade in oil and gas.
    2022-02-24 23:06 Ukrainian army recaptures Antonov International Airport in Hostomel, less than 10 kilometers from Kyiv.
    2022-02-24 21:26 Biden: Ejecting Russia from SWIFT international payments system "always an option" but not right now.
    2022-02-24 21:13 Exit permitted but no entry during Kyiv curfew hours.
    2022-02-24 21:02 Biden: U.S. to sanction 4 more Russian banks, more members of Russian elites and their families.
    2022-02-24 20:50 Wizzair seeks to evacuate planes and staff from Ukraine.
    2022-02-24 20:24 National Bank of Poland to provide Ukraine’s NBU with $1 billion.
    2022-02-24 20:13 Mobile operators vow to keep providing services to clients with zero balance.
    2022-02-24 19:57 Health minister: 57 Ukrainians dead from Russian attack.
    2022-02-24 19:50 Russian forces take control over Antonov International Airport in Hostomel, roughly 10 kilometers away from Kyiv,
    2022-02-24 19:32 Russian forces seize control of the Chornobyl Nuclear Power Plant.
    2022-02-24 18:58 Journalist: 18 Russian Il-76 planes take off from Russia
    2022-02-24 18:20 UK joins Poland, Estonia, Latvia in a call to cut Russia from SWIFT international payments system.
    2022-02-24 17:50 Ukrzaliznytsia evacuated more than 7,600 people from eastern Ukraine since 1 p.m. Kyiv time.
    2022-02-24 17:27 Curfew introduced in Kyiv from 10 p.m. to 7 a.m.
    2022-02-24 16:55 Number of people killed by Russian strike in Odesa Oblast reaches 22.
    2022-02-24 16:49 Russian forces capture Kakhovka Hydroelectric Power Plant.
    2022-02-24 16:45 Russian forces have pushed into the Chornobyl exclusion zone,
    2022-02-24 16:26 Von der Leyen: EU ready to host all refugees from Ukraine.
    2022-02-24 16:23 Kyiv administration asks residents to immediately take shelter due to Russian air attack.
    2022-02-24 16:10 Johnson: Western sanctions will stop Russian economy.
    2022-02-24 15:57 Russian billionaires lose $38 billion due to Kremlin
    2022-02-24 15:54 Belarusian regime launches 4 ballistic missiles.
    2022-02-24 15:50 More than 100 Russian troops killed.
    2022-02-24 15:36 Ukrainian soldiers clash with Russians in a fierce fight near Sumy, a regional capital located near the border with Russia.
    2022-02-24 15:32 President
    2022-02-24 15:25 Ukrainian military plane hit, deaths to be confirmed.
    2022-02-24 14:30 NATO puts warplanes on high alert, activates its defense plans.
    2022-02-24 14:02 Dozens of Ukrainians killed within hours of Russian invasion
    2022-02-24 14:02 Russian airstrike kills six people outside Kyiv.
    2022-02-24 13:59 Timothy Ash: What Russia’s attack means for the world
    2022-02-24 13:44 Russia attacks border guard strongpoints in Kyiv Oblast.
    2022-02-24 13:34 Ukraine’s military take two Russian prisoners of war in Donbas.
    2022-02-24 13:26 Russia sent conscripts to Ukrainian border ahead of Feb. 24 invasion.
    2022-02-24 13:07 Russian bombardment kills 18 people in Odesa Oblast.
    2022-02-24 12:55 Russian stock markets have collapsed with the start of the war.
    2022-02-24 12:40 Ukrzaliznytsia evacuates 3,100 civilians from Donetsk and Luhansk oblasts.
    2022-02-24 12:37 Interior ministry: Three civilians killed by Russian assault on Vuhledar.
    2022-02-24 12:01 Moldova announces state of emergency.
    2022-02-24 11:54 EU announces
    2022-02-24 11:52 Russian bombardment kills a child in Chuhuiv city, Kharkiv Oblast.
    2022-02-24 11:26 Ukrzaliznytsia ready to evacuate civilians from front-line towns.
    2022-02-24 11:24 Lukashenko: Belarusian troops not taking part in Russian invasion of Ukraine.
    2022-02-24 11:18 Banking system will work but with limitations, according to President
    2022-02-24 11:15 One civilian killed by Russian shelling in Uman, Cherkasy Oblast.
    2022-02-24 11:13 Ukrainian military destroys 6th Russian plane.
    2022-02-24 11:11 Defense Minister Oleksiy Reznikov called on all Ukrainians who are able to hold a weapon to mobilize.
    2022-02-24 11:07 Russian hackers are conducting a massive cyber-attack on government sites.
    2022-02-24 10:58 Russia attacks wide range of targets in Ukraine (LIVE UPDATES)
    2022-02-24 10:34 Poland and Baltic countries trigger consultations under NATO article 4.
    2022-02-24 10:04 European Commission President says they will not let Putin tear down Europe
    2022-02-24 09:33 Ukrainian border guard died as a result of a rocket shelling from occupied Crimea.
    2022-02-24 09:19 Interior Ministry: villages Horodyshche, Milove in Luhansk Oblast captured by Russian forces.
    2022-02-24 09:00 Ukraine imposes nationwide martial law.
    2022-02-24 06:35 Ukrainian ambassador to the UN urges Russian chair to step down, calls on Security Council to
    2022-02-24 05:41 Biden warns that the world will hold Russia accountable as Putin declared war on Ukraine.
    2022-02-24 05:04 The U.S. ambassador to the U.N. confirms Russia
    2022-02-24 04:57 PUTIN DECLARES WAR ON UKRAINE
    2022-02-24 04:42 The U.N. Security Council meets over Ukraine.
    2022-02-24 04:32 US State Department warns of Russian false flags.
    2022-02-24 04:15 Flights by civil aircraft now restricted inside Ukraine.
    2022-02-24 02:52 Blinken: Russia will launch a full-scale invasion of Ukraine before the end of the night.
    2022-02-24 02:27 Dzerkalo Tyzhnia: Airports in Kharkiv, Dnipro and Zaporizhzhia are blocking runways in case of attack.
    2022-02-24 02:20 UN Security Council will hold urgent meeting at 4:30 a.m. Kyiv time.
    2022-02-24 02:04 A new cyberattack hits several Ukrainian government websites.
    2022-02-24 01:19 Dnipro airport temporarily suspends flights following Kharkiv airport.
    2022-02-24 01:10 Zelensky calls on Russian people to prevent war against Ukraine in new address.
    2022-02-24 01:08 Zelensky delivers video address, says he called Putin, received no reply.
    2022-02-24 00:39 Ukraine requests urgent UN Security Council meeting.
    2022-02-23 23:53 Zelensky announces ‘economic patriotism’ program, meets with top 50 businesses
    2022-02-23 23:48 Kharkiv airport temporarily closed on Feb. 23.
    2022-02-23 23:11 Russia
    2022-02-23 23:01 Pentagon: Russian troops ready for an imminent attack.
    2022-02-23 22:56 Ukraine’s Eurobonds fell to lowest level since 2015.
    2022-02-23 22:39 Kuleba: Potential staged Russian provocation being prepared in Crimea.
    2022-02-23 22:27 In a long-anticipated move, U.S. to impose sanctions on Nord Stream 2 operator
    2022-02-23 21:49 BREAKING: Ukraine imposes state of emergency
    2022-02-23 21:19 US to impose sanctions on Nord Stream 2 operator.
    2022-02-23 21:09 Russian proxy leader calls on Ukraine to withdraw forces from all of Donetsk Oblast.
    2022-02-23 20:56 Evacuees from occupied Donbas get cold welcome in Russia
    2022-02-23 20:02 Eugene Czolij: The Budapest Memorandum – reality check
    2022-02-23 20:02 Newsweek: US warns Ukraine of full-scale Russian invasion within 48 hours.
    2022-02-23 19:42 Ukraine allocates additional $793 million for defense.
    2022-02-23 19:03 Ukraine lifts Covid-19 entry restrictions for its nationals.
    2022-02-23 18:51 Russia accuses 85 Ukrainians of
    2022-02-23 18:22 Kuleba: Large-scale war in Ukraine will be the end of world order as we know it.
    2022-02-23 18:17 1 soldier killed, 1 wounded by Russian-led militant shelling.
    2022-02-23 18:11 Ukraine detects Russian military units in occupied Donbas.
    2022-02-23 17:47 Civilian wounded in Russian shelling in Donbas.
    2022-02-23 17:38 Hryvnia falls following news about state of emergency.
    2022-02-23 17:33 DDoS attack also targets Ukrainian banks, some government websites back up.
    2022-02-23 17:09 Lithuania, Poland say Ukraine should get EU membership candidate status.
    2022-02-23 16:53 New cyberattack hits government websites in Ukraine.
    2022-02-23 16:46 Russia promises ‘strong response’ to US sanctions.
    2022-02-23 16:15 Ukraine to mobilize 36,000 reservists for Armed Forces, 5,000 for National Guard, 5,000 for Border Guard Service.
    2022-02-23 15:26 UK to send more weapons to Ukraine.
    2022-02-23 14:47 Canada, Japan, Australia join Western allies in Russia sanction sweep
    2022-02-23 14:31 Zelensky to meet with business owners to move past disagreements, establish support.
    2022-02-23 14:15 Russia gathered almost entire navy in Black, Azov seas.
    2022-02-23 13:50 Ukraine to impose state of emergency, creating a variety of restrictions
    2022-02-23 13:30 UK will provide $500 million in guaranteed credit to Ukraine.
    2022-02-23 13:26 Bill liberalizing gun ownership passes in first reading.
    2022-02-23 12:44 Before Ukraine, there was Georgia: How Russia recycles its 2008 playbook
    2022-02-23 12:22 Ukraine to impose state of emergency.
    2022-02-23 12:04 Ukrainian lawmakers approve sanctions on Russian officials involved in recognition of Ukraine’s occupied territories.
    2022-02-23 11:56 Foreign Ministry urges Ukrainians to leave Russia, avoid traveling there.
    2022-02-23 11:16 National Police to protect over 100 critical infrastructure objects.
    2022-02-23 11:09 Ukraine begins calling up reservists starting Feb. 23.
    2022-02-23 10:51 US calls off meetings with Putin, Russian foreign minister Lavrov.
    2022-02-23 10:34 Australia imposes sanctions, travel bans on 8 members of Russia’s security council.
    2022-02-23 10:26 Japan announces travel restrictions, asset freezes for Russian elite, sanctions on Russia
    2022-02-23 10:01 Covid-19 in Ukraine:
    2022-02-23 01:36 Canada’s sanctions target Russian sovereign debt, 2 state banks, Russian lawmakers.
    2022-02-23 00:48 World reacts to Kremlin escalation with sanctions
    2022-02-23 00:27 Breakdown of Putin’s false narratives to justify aggression against Ukraine
    2022-02-23 00:17 Blinken calls off meeting with Russian foreign minister scheduled on Feb. 24.
    2022-02-22 23:36 1 Ukrainian soldier killed, 1 wounded in Donbas.
    2022-02-22 23:12 US Treasury imposes sanctions on Russian officials, financial institutions.
    2022-02-22 23:00 Zelensky mobilizes reserve personnel, schedules Territorial Defense drills.
    2022-02-22 22:36 Center for Defense Strategies: What Putin’s watershed decision means for Ukraine, world? (analysis)
    2022-02-22 21:30 Biden: Putin
    2022-02-22 21:26 US imposes sanctions on 2 Russian banks, sovereign debt, and Russian elites.
    2022-02-22 21:16 Putin says Russia-backed illegitimate ‘states’ in eastern Ukraine have claim to entire regions of Donetsk, Luhansk
    2022-02-22 20:48 EU imposes sanctions on Russian lawmakers, officials, bans investors from trading Russian state bonds.
    2022-02-22 20:34 Russia pulls diplomatic staff from Ukraine.
    2022-02-22 20:01 Editorial: Sanction Russia now
    2022-02-22 18:53 Putin: Minsk Agreements don
    2022-02-22 18:48 Ukrainian civilians fearlessly prepare for Russia’s offensive
    2022-02-22 18:37 Putin says Russia recognizes whole Donbas region in eastern Ukraine as belonging to illegitimate
    2022-02-22 18:13 Russian parliament grants Putin the right to use army abroad.
    2022-02-22 17:52 Putin asks parliament to use army abroad.
    2022-02-22 16:58 Kalush Orchestra to represent Ukraine at Eurovision 2022.
    2022-02-22 16:25 Russian-led militants shell Shchastia, force power plant to shut down.
    2022-02-22 16:11 Russia says gas prices for European consumers can reach $2,000 per thousand cubic meters.
    2022-02-22 15:51 Russia vague on whether militants will try to claim entirety of Donetsk, Luhansk oblasts
    2022-02-22 15:51 Head of Kremlin-controlled proxies says militants want entire Donbas region.
    2022-02-22 15:18 European Commission prepares sanctions against Russian lawmakers who voted for recognition of Ukraine’s occupied territories.
    2022-02-22 14:47 UK to sanction five Russian banks, three Russian billionaires.
    2022-02-22 14:09 Russian parliament ratifies agreements on cooperation with occupied territories in Donbas.
    2022-02-22 13:49 Ryanair removes Kharkiv and Kherson airports from flight booking.
    2022-02-22 13:32 BREAKING: German economy minister orders Nord Stream 2 certification halt
    2022-02-22 13:09 Russian Foreign Minister says Ukraine has no right to sovereignty.
    2022-02-22 12:49 Zelensky to consider severing diplomatic relations with Russia.
    2022-02-22 12:43 Ukraine initiates process for Russia’s expulsion from EBRD after its recognition of occupied territories in Donbas.
    2022-02-22 12:21 US sanctions against Russia-occupied territories are "mockery," says Ukrainian ruling party lawmaker.
    2022-02-22 11:06 Ukraine calls for "harsh sanctions" against Russia for recognizing occupied territories as independent states.
    2022-02-22 10:49 Russian-backed proxies killed two Ukrainian soldiers on Feb. 21.
    2022-02-22 10:39 Ukrainian literary critic Ivan Dziuba dies at 90.
    2022-02-22 10:37 Covid-19 in Ukraine: 22,440 new cases, 287 new deaths, and 15,443 new vaccinations.
    2022-02-22 08:06 Vladimir Kara-Murza: It’s not just the West that opposes Putin’s war on Ukraine. A lot of Russians do, too
    2022-02-22 07:43 Australia closes its embassy in Ukraine.
    2022-02-22 07:25 Peter Dickinson: Putin escalates his Ukraine war with recognition of separatist republics
    2022-02-22 06:45 Ukraine
    2022-02-22 05:52 Several nations condemn Russia at UN Security Council meeting.
    2022-02-22 04:27 UN Security Council meets over Ukraine.
    2022-02-22 03:08 Zelensky addresses nation as Russia officially moves troops into occupied Donbas
    2022-02-22 02:46 Trudeau: Canada will impose economic sanctions on Russia.
    2022-02-22 02:33 Macron demands ‘targeted European sanctions’ against Russia.
    2022-02-22 02:29 Zelensky:
    2022-02-22 02:27 Zelensky lists his action plan following Russia
    2022-02-22 02:24 Zelensky: Ukraine qualifies Russia
    2022-02-22 02:08 Putin sends troops to Kremlin-occupied Donbas
    2022-02-22 02:00 RFE/RL: Tanks, armored vehicles begin arriving in Russian-occupied Donetsk.
    2022-02-22 01:45 UN Emergency Security Council to meet at 4 a.m. Kyiv time on Feb. 22.
    2022-02-22 01:22 Bloomberg: US embassy considers leaving Ukraine for Poland.
    2022-02-22 01:05 US: Moscow compiled a list of Ukrainians ‘to be killed or sent to camps following a military occupation.’
    2022-02-21 23:46 Putin orders troops officially sent to the occupied parts of Donbas ‘to maintain peace.’
    2022-02-21 23:29 Polish PM: Sanctions should be immediately imposed against Russia.
    2022-02-21 23:20 UK to announce new sanctions against Russia on Feb. 22.
    2022-02-21 23:14 3 killed, 5 injured in Russian shelling of Donbas
    2022-02-21 23:12 EU to impose sanctions against those involved in recognizing Russian-occupied territories as independent states.
    2022-02-21 22:51 NATO chief condemns Kremlin’s decision to recognize independence of Russian-occupied Donbas.
    2022-02-21 22:37 US to impose ban on investment and trade with Russian-occupied territories.
    2022-02-21 22:29 Latvia’s foreign minister: ‘EU must impose sanctions on Russia immediately.’
    2022-02-21 22:01 Johnson: Recognizing independence of Russian-occupied Donbas is violation of international law.
    2022-02-21 21:57 Zelensky speaks with Biden following Russia
    2022-02-21 21:50 European Commission president promises firm reaction to Russia
    2022-02-21 21:37 BREAKING: Russia recognizes occupied regions in Ukraine as independent states
    2022-02-21 21:06 EU to impose sanctions if Russia recognizes occupied parts of eastern Ukraine.
    2022-02-21 20:40 Kremlin: Putin to recognize the independence of Russian-occupied Donbas soon.
    2022-02-21 19:54 EU sanctions will target Nord Stream 2 if Ukraine is invaded, Austria says.
    2022-02-21 19:26 Politico: Ukraine asks EU to send in cyber forces.
    2022-02-21 19:24 Zelensky: No signs of de-escalation by Russia.
    2022-02-21 19:12 Civilian killed in shelling by Russian proxies in Donbas.
    2022-02-21 18:42 Ukraine requests UN Security Council consultations for de-escalation, security guarantees.
    2022-02-21 18:20 Putin
    2022-02-21 17:45 Ukraine loses up to $3 billion monthly amid threat of Russian invasion.
    2022-02-21 17:34 Russia may recognize occupied territories of eastern Ukraine as independent states today.
    2022-02-21 17:26 Russia
    2022-02-21 17:19 Russia claims it captured Ukrainian soldier.
    2022-02-21 17:13 Russian stock market plunges 12-15%.
    2022-02-21 16:42 Regular Kyivans get ready to help repel Russian attack
    2022-02-21 16:28 Ukraine denies Moscow
    2022-02-21 16:22 Conflict Intelligence Team: Russia
    2022-02-21 15:49 Kremlin proxies in eastern Ukraine ask Russia to recognize occupied territories as independent states.
    2022-02-21 15:29 Kremlin propaganda claims that Ukrainian troops attacked Russia.
    2022-02-21 15:09 Heavy shelling of two settlements in Luhansk Oblast leaves locals without utility services.
    2022-02-21 14:40 Air France cancels flights between Kyiv and Paris scheduled for Feb. 22.
    2022-02-21 14:18 Cyber attacks on Ukrainian banks, government websites may occur on Feb. 22.
    2022-02-21 14:08 Ukraine denies Kremlin
    2022-02-21 13:45 Gas transit through Ukraine from Russia continues to decline.
    2022-02-21 13:34 EU to send advisory military mission to Ukraine.
    2022-02-21 13:17 Kremlin proxies introduce state of emergency in occupied parts of Donbas.
    2022-02-21 12:59 EU says it
    2022-02-21 12:49 EU Council adopts 1.2 billion euros of economic assistance to Ukraine.
    2022-02-21 12:31 Russian media: Over 61,000 refugees evacuated from Russian-occupied Donbas.
    2022-02-21 12:26 Russia denies concrete plans for Biden-Putin meeting.
    2022-02-21 11:12 Biden, Putin to meet as tensions rise in Donbas.
    2022-02-21 11:07 Explainer: Why did Putin’s regime engineer current military crisis over Ukraine?
    2022-02-21 10:29 Covid-19 in Ukraine: 13,562 new cases, 127 new deaths, and 17,172 new vaccinations.
    2022-02-20 20:53 Defense minister: Evacuees from Donbas returning from Russia.
    2022-02-20 20:43 OCCRP: Owners of collapsed Mriya Agro Holding embezzled money through Credit Suisse.
    2022-02-20 20:07 SAS, Austrian Airlines and SWISS cancel flights to Ukraine.
    2022-02-20 20:05 Ukrainian State-Owned Enterprises Weekly – Issue 65
    2022-02-20 19:44 Russian propaganda prepares video with fake Ukrainian drones.
    2022-02-20 19:27 Ombudsman: Evacuees from Russian-occupied areas stuck without food, sleep.
    2022-02-20 18:46 Putin, Macron agree to work on ceasefire in Ukraine.
    2022-02-20 18:34 Russia intensifies barrage of flimsy yet dangerous disinformation
    2022-02-20 17:55 US, Lithuania say prolonged Russian drills in Belarus indicate imminent attack on Ukraine.
    2022-02-20 17:40 Russian proxies kill infantry driver, intelligence officer in Donbas
    2022-02-20 16:22 Zelensky, Macron hold second phone call in two days.
    2022-02-20 15:49 Russian firms to be blocked from trading in pounds, dollars in case of escalation.
    2022-02-20 15:29 India advises its citizens to leave Ukraine amid military escalation.
    2022-02-20 14:33 Russian, Belarusian troops to continue exercises along Ukrainian border.
    2022-02-20 14:20 Major explosion reported in Russian-occupied Donetsk.
    2022-02-20 13:59 Russian parliament to discuss military escalation in Donbas on Feb. 22.
    2022-02-20 12:56 Russia to investigate
    2022-02-20 12:38 Russia deploys 134 units of heavy military equipment in occupied Donbas.
    2022-02-20 11:32 Washington Post: Biden’s confidence about Russian attack came from intel on order given to officials.
    2022-02-20 11:18 Russian-led military bloc could send peacekeepers to Donbas.
    2022-02-20 10:53 Russian proxies violated ceasefire 136 times on Feb. 19.
    2022-02-20 10:30 Covid-19 in Ukraine: 17,448 new cases, 152 new deaths, and 29,528 new vaccinations.
    2022-02-19 23:51 Canada delivers weapons to Ukraine.
    2022-02-19 23:22 Zelensky’s full speech at Munich Security Conference
    2022-02-19 22:40 Army Chief Zaluzhnyi: Russia plans terrorist attacks in occupied Donbas to frame Ukraine.
    2022-02-19 22:15 Ukrainians inside occupied Donbas stuck between hope and fear
    2022-02-19 20:21 Germany, Austria ask nationals to leave Ukraine immediately.
    2022-02-19 20:17 Truss: Russia could face
    2022-02-19 19:51 Second Ukrainian soldier killed in Donbas today.
    2022-02-19 19:36 Zelensky:
    2022-02-19 19:24 Ukrainian MPs, foreign journalists come under fire in eastern Ukraine.
    2022-02-19 18:36 Zelensky in Munich demands security guarantees, calls for preemptive sanctions against Russia
    2022-02-19 18:14 France calls on its nationals to leave Ukraine if they don
    2022-02-19 17:20 Zelensky wants Budapest Memorandum signatories to give new security guarantees for Ukraine.
    2022-02-19 17:10 Zelensky:
    2022-02-19 16:47 NATO relocates its Ukrainian mission to Lviv and Brussels.
    2022-02-19 16:30 Lufthansa suspends all flights to and from Ukraine between Feb. 21 and Feb. 28.
    2022-02-19 16:19 Chinese foreign minister: Ukraine
    2022-02-19 15:40 Russian media claim exploded shells found near Ukrainian border.
    2022-02-19 15:23 In case of invasion, territorial defense forces will ambush attackers …
    2022-02-19 15:06 Boris Johnson: Ukraine invasion would …
    2022-02-19 14:28 Harris: Russia to face …
    2022-02-19 14:19 In the occupied parts of Donbas, people struggle to get cash.
    2022-02-19 14:02 Zelensky arrives in Munich for security conference.
    2022-02-19 13:47 Putin, Lukashenko oversee nuclear-capable missiles drills.
    2022-02-19 12:50 Russian-installed authorities in eastern Ukraine shut schools, colleges.
    2022-02-19 12:29 Russia’s proxies declare mobilization amid heightened tensions in Donbas
    2022-02-19 11:20 Russian parliament …
    2022-02-19 10:57 Chornobyl Exclusion Zone closes for tourists.
    2022-02-19 10:40 Ukrainian soldier killed in Donbas.
    2022-02-19 10:17 Russia’s proxies announce mobilization.
    2022-02-19 02:53 Biden warns that Putin decided to invade as Russian proxies in Ukraine appear to stage escalation
    2022-02-19 01:06 Biden: …
    2022-02-19 00:17 Biden: …
    2022-02-18 22:12 Journalists allege that Russia …
    2022-02-18 21:47 UK moves embassy from Kyiv to Lviv.
    2022-02-18 21:08 Ukraine …
    2022-02-18 20:23 Kyiv-Mohyla students seek to oust education minister over independence clash
    2022-02-18 20:04 Ukraine …
    2022-02-18 19:30 US, NATO, Baltic leaders discuss Russian threat.
    2022-02-18 19:18 Javelin anti-tank missiles arrive from Estonia to Ukraine.
    2022-02-18 18:49 Russian media report alleged major explosion in downtown Donetsk.
    2022-02-18 18:29 Russia to allocate funding for evacuees from Donbas.
    2022-02-18 17:55 Kremlin proxies in Luhansk also announce civilian evacuation.
    2022-02-18 17:28 Russian proxies continue shelling in Donbas.
    2022-02-18 16:48 Russian proxies shell Ukrainian checkpoint, UN convoy.
    2022-02-18 16:22 Netherlands to supply weapons, helmets, radar to Ukraine.
    2022-02-18 16:02 Pentagon says no evidence of Russian withdrawal from Ukrainian border.
    2022-02-18 15:45 Russia’s proxies announce civilian evacuation ahead of war.
    2022-02-18 15:34 Russian police arrest Crimean Tatars in Ukraine’s annexed Crimea.
    2022-02-18 15:20 Subvariant of Omicron discovered in Ukraine.
    2022-02-18 15:11 Lukashenko: Belarus ready to host ‘nuclear weapons’ if West threatens his country.
    2022-02-18 15:06 Ukrainian bobsledder Lidiia Hunko tests positive for banned substance at Beijing Winter Olympics.
    2022-02-18 14:58 US: Russia has deployed between 169,000 and 190,000 personnel near Ukraine and in its occupied territories.
    2022-02-18 14:53 European Commission expects 20,000 to more than a million refugees if Russia invades further.
    2022-02-18 13:47 Shmyhal: Coal reserves twice as high as last year.
    2022-02-18 13:24 Reznikov says main targets of Russia’s shellings on Feb. 17 were civilians, calls it “a war crime.”
    2022-02-18 12:48 Defense Minister: Russia has massed around 149,000 troops near Ukraine.
    2022-02-18 12:33 Finance Minister: Ukraine expects first 600 million euro tranche of EU macro-financial assistance in ‘late March-early April.’
    2022-02-18 12:12 Poll: 45% of Ukrainians think Russian escalation risk is low or non-existent.
    2022-02-18 11:18 Kyiv to evacuate all residents if Russia’s war reaches the capital.
    2022-02-18 11:07 Weekend in Kyiv – Feb. 18-20
    2022-02-18 11:06 Covid-19 in Ukraine: 34,938 new cases, 282 new deaths, and 17,796 new vaccinations.
    2022-02-18 00:38 Danilov: Russia provokes Ukraine to fire in response.
    2022-02-18 00:04 Despite Russian attack threat, these foreigners stay in Ukraine
    2022-02-17 23:25 Truss visits Kyiv, announces trilateral partnership with Ukraine, Poland
    2022-02-17 20:29 Pentagon: Russia stocks up on blood, moves troops closer to Ukraine.
    2022-02-17 20:17 UK scraps ‘golden visa’ used by rich Russian elites.
    2022-02-17 20:00 In Kyiv, businesses stay put despite threat of Russian invasion
    2022-02-17 19:58 47 shelling incidents leave 5 injured in Donbas
    2022-02-17 19:45 Poll: 62% of Ukrainians support joining NATO.
    2022-02-17 18:47 Biden: threat of Russian further invasion is …
    2022-02-17 18:42 Russia threatens US with military action.
    2022-02-17 17:37 Ukraine, UK, Poland announce creation of trilateral partnership.
    2022-02-17 16:39 Zelensky visits frontline, amid Russian shelling.
    2022-02-17 16:31 Russia expels deputy U.S. Ambassador Bartle Gorman from Moscow.
    2022-02-17 15:59 Zelensky calls on OSCE to remain in Ukraine, record ceasefire violations.
    2022-02-17 15:18 Stoltenberg: Russia may try to …
    2022-02-17 14:28 Ukrainian Max Polyakov forced to sell majority stake of US company Firefly Aerospace.
    2022-02-17 14:23 US intelligence predicts that invasion of Ukraine may be delayed by 4-5 days.
    2022-02-17 13:57 Russian-led militants shell more than 20 spots in eastern Ukraine on Feb. 17.
    2022-02-17 13:35 Import of cars into Ukraine increased by 58% compared to 2020, reaching nearly 1 million in 2021.
    2022-02-17 13:16 Ukraine authorizes Covid-19 drug Paxlovid for emergency use.
    2022-02-17 13:01 Zelensky made multiple attempts to contact Putin, without success.
    2022-02-17 12:15 Head of State Property Fund removed from office by parliament.
    2022-02-17 11:51 UK plans to end …
    2022-02-17 11:33 Parliament ratifies Open Skies Treaty with EU.
    2022-02-17 11:14 Russian-led militants shell village in Donbas, injure three daycare workers.
    2022-02-17 10:55 AP: Russia moves additional 7,000 troops to Ukraine …
    2022-02-17 10:37 OSCE: Russian-led militants in Donbas deploy tanks and artillery in restricted zones.
    2022-02-17 10:27 Covid-19 in Ukraine: 33,330 new cases, 259 new deaths, and 18,504 new vaccinations.
    2022-02-17 04:38 Anders Åslund: Putin has seriously wounded Ukraine’s economy without firing a single shot
    2022-02-16 21:39 European Parliament approves 1.2 billion euro loan to Ukraine.
    2022-02-16 19:49 Zelensky leads presidential poll with 24.6%, Poroshenko polls second.
    2022-02-16 18:53 Blinken: US intelligence sees no evidence of Russia withdrawal.
    2022-02-16 18:16 Russian disinformation targets Western support of Ukraine
    2022-02-16 18:13 Al Jazeera: The Russians going against Putin in Ukraine.
    2022-02-16 17:51 EU leaders to meet on Feb. 17 to discuss latest events in Russia’s war on Ukraine.
    2022-02-16 17:28 Reuters: Russian threat to Ukraine to remain critical in February, senior Western intelligence official says.
    2022-02-16 16:26 Government: Recent cyber attack caused no data leaks or financial losses.
    2022-02-16 16:13 Russia sentences Ukrainian journalist to 6 years in jail.
    2022-02-16 15:52 Zelensky: No evidence of Russia pulling back troops.
    2022-02-16 15:28 Singer Alina Pash, who won Ukraine …
    2022-02-16 15:01 Estonia’s foreign intelligence: Russia will be ready to attack Ukraine in the second half of February
    2022-02-16 14:46 Ukraine wins first medal in Winter Olympics in Beijing.
    2022-02-16 13:42 US assisted in repelling a DDoS attack on Ukraine’s Defense Ministry website.
    2022-02-16 13:36 The Kremlin: Russia’s recognition of occupied Donbas …
    2022-02-16 12:48 NATO: No signs of de-escalation near Ukraine’s border.
    2022-02-16 11:51 Oligarch Rinat Akhmetov makes rare public appearance in Mariupol.
    2022-02-16 11:00 Finland raises defense readiness level due to threat of Russia …
    2022-02-16 10:53 London maritime insurance market deems Russian, Ukrainian waters to be high risk.
    2022-02-16 10:37 Ukraine detects 60 Russian disinformation attacks in February.
    2022-02-16 10:27 Ukraine celebrates Unity Day as threat of Russia continues to loom.
    2022-02-16 03:14 Gyunduz Mamedov: Ukraine needs to strengthen effective prosecution of war crimes
    2022-02-15 22:44 Biden: …
    2022-02-15 21:20 Defense ministry, state banks suffer ‘powerful’ cyberattack
    2022-02-15 20:59 Sources: Germany, France ask Zelensky to comply with Russia’s spin of Minsk Agreements
    2022-02-15 19:52 Ukrainian Tanu Muino directs a music video for Foals in Kyiv.
    2022-02-15 19:45 Illia Ponomarenko: Even if Russia attacks, Ukraine’s fall is not predestined
    2022-02-15 19:38 Oksana Bashuk Hepburn: To win in diplomacy, take the advantage
    2022-02-15 19:25 Japan to provide Ukraine with up to $100 million in loans to …
    2022-02-15 19:03 Defense websites, state banks are under …
    2022-02-15 18:46 Scholz says current German, Russian leadership won’t …
    2022-02-15 18:25 Putin demands quick guarantees Ukraine won’t join NATO.
    2022-02-15 16:17 Ukraine criminalizes anti-Semitism.
    2022-02-15 15:58 Poll: Majority of Ukrainians will actively resist further Russian invasion.
    2022-02-15 14:59 Stoltenberg: No evidence yet of Russia pulling back troops.
    2022-02-15 14:44 Watchdog: Government blocks contest for NABU chief.
    2022-02-15 14:15 UK won’t …
    2022-02-15 13:42 Safe path for shipping plotted through Black Sea during Russian naval exercises.
    2022-02-15 13:29 Financial Times: Ex-lawmaker Tsaryov could head Russian-installed puppet government in Ukraine.
    2022-02-15 13:03 Russian parliament votes to recognize Donbas militant ‘republics’
    2022-02-15 12:31 Covid-19 in Ukraine: 29,724 new cases, 305 new deaths, and 24,205 new vaccinations.
    2022-02-15 12:01 US offers $1 billion sovereign loan guarantee to Ukraine.
    2022-02-15 11:58 Russian defense ministry: Some troops near Ukraine border returning to base.
    2022-02-15 11:55 YouTube blocks account of main Donetsk militant TV station.
    2022-02-15 00:29 Airlines to negotiate Ukraine insurance cover for flights daily.
    2022-02-15 00:24 Canada offers a $393 million loan to Ukraine, provides lethal weapons.
    2022-02-15 00:04 US officially moves Kyiv embassy to Lviv.
    2022-02-14 23:58 World Bank relocates some staff from Ukraine, continues operations.
    2022-02-14 23:33 Scholz warns Moscow of ‘wide-reaching’ consequences, stays silent on Nord Stream 2
    2022-02-14 22:01 Zelensky proclaims Feb. 16, stipulated date of Russian invasion, ‘unity day.’
    2022-02-14 21:37 Germany to give Ukraine new 150 million euro loan.
    2022-02-14 20:45 Zelensky asks oligarchs, MPs to come back to Ukraine in the next 24 hours.
    2022-02-14 19:44 Reuters: Russian mercenaries with spy links increasing presence in Ukraine.
    2022-02-14 19:10 ArcelorMittal Kryvyi Rih shuts down website due to fears of cyberattack.
    2022-02-14 17:42 23 lawmakers left Ukraine, many recently.
    2022-02-14 17:34 Ukraine to build new airport in Donetsk Oblast.
    2022-02-14 16:45 Greek foreign ministry: 2 diaspora Greeks killed, 2 injured near the front line in eastern Donetsk Oblast.
    2022-02-14 16:38 Over 50 IT companies join Ukraine’s ‘special tax regime’ Diia City in first three days
    2022-02-14 16:05 Johnson says evidence of Russia’s further invasion into Ukraine is …
    2022-02-14 14:47 Ukraine International Airlines starts moving aircraft abroad.
    2022-02-14 14:29 G7 warns of “massive” consequences for Russian economy in case of invasion.
    2022-02-14 14:01 Russian media: Committee approves recognizing Russia’s Donbas proxies as independent states.
    2022-02-14 13:43 Ambassador walks back his statement that Ukraine might concede NATO membership plans
    2022-02-14 13:41 Many US citizens decide to stay in Ukraine despite potential Russian attack.
    2022-02-14 11:16 UK working on military, economic aid package for Ukraine.
    2022-02-14 10:11 Covid-19 in Ukraine: 16,993 new cases, 142 new deaths, and 18,938 new vaccinations.
    2022-02-14 00:24 Ukrainian government puts up $590 million in aircraft insurance to keep air traffic going amid war threat
    2022-02-14 00:17 Ukrainian oligarchs, businessmen leave country en masse.
    2022-02-14 00:13 Canada pulls out military trainers from Ukraine.
    2022-02-13 20:41 Zelensky, Biden speak on the phone.
    2022-02-13 20:00 Ukraine requests meeting of OSCE participating states within 48 hours.
    2022-02-13 18:34 Polish Interior Minister: Poland ready for wave of Ukrainian refugees.
    2022-02-13 18:12 Center for Defense Strategies: Can Ukraine really be invaded? Scenarios for a Russian attack (analysis)
    2022-02-13 18:08 Reznikov: Ukraine receives Stinger missile systems from Lithuania.
    2022-02-13 18:04 Russian submarine enters Black Sea.
    2022-02-13 17:21 Ukrainska Pravda journalist attacked in Dnipro.
    2022-02-13 16:44 Zelensky scheduled to talk to Biden by phone today.
    2022-02-13 16:33 Aircraft lessors demand return of Ukrainian aircraft.
    2022-02-13 15:45 Ukraine warns airlines against flying over Black Sea.
    2022-02-13 15:35 Ukrainian lawmakers told to return to Ukraine for parliamentary votes.
    2022-02-13 15:28 Two U.S. planes with ammunition, shoulder-fired grenades land in Kyiv.
    2022-02-13 14:35 Ukrainska Pravda: Mass cancellation of Ukraine flights from Feb. 14
    2022-02-13 13:31 Ukrainska Pravda: Mass cancellation of flights from Feb. 14.
    2022-02-13 13:03 Germany supplied $415 million of dual-use equipment to Russia in 2020.
    2022-02-13 12:03 Latvia ready to accept up to 10,000 Ukrainian refugees.
    2022-02-13 11:55 Australia temporarily relocates embassy from Kyiv to Lviv.
    2022-02-13 11:40 Kuleba: SWIFT severance not part of current Western sanctions package.
    2022-02-13 10:52 Covid-19 in Ukraine: 24,518 new cases, 140 new deaths, and 36,453 new vaccinations.
    2022-02-13 10:45 SkyUp flight to Kyiv forced to divert to Moldova over war fears.
    2022-02-13 10:32 Russian-backed militants threaten OSCE observers with imprisonment.
    2022-02-13 00:47 Canada relocates embassy to western Ukraine amid growing threats of further Russian invasion, others may follow
    2022-02-12 23:52 Canada to move its embassy from Kyiv to Lviv.
    2022-02-12 23:28 Hip-hop diva Alina Pash to represent Ukraine at Eurovision 2022.
    2022-02-12 23:15 Germany’s foreign minister: Tensions between Russia and Ukraine escalating.
    2022-02-12 22:14 Biden, Macron hold talks with Putin as tensions peak
    2022-02-12 21:36 Prime Minister Shmyhal addresses nation, says army is strong, will defend Ukraine.
    2022-02-12 20:50 Dutch airline KLM stops all flights to Ukraine.
    2022-02-12 18:05 US and UK pull out military trainers from Ukraine.
    2022-02-12 17:58 Blinken-Lavrov call results in another stalemate for Ukraine crisis
    2022-02-12 17:14 Thousands march in Kyiv to show unity against Russian threat.
    2022-02-12 15:03 Germany, Finland and Lithuania advise their citizens to leave Ukraine.
    2022-02-12 13:40 Media in Progress Ep. 8: Can independent media be sustainable? We want to prove it can
    2022-02-12 13:01 Ukraine’s foreign ministry urges calm.
    2022-02-12 12:35 US orders departure of non-emergency staff from Kyiv embassy.
    2022-02-12 12:25 Turkmenistan’s president said he will resign after 15 years in power.
    2022-02-12 12:01 Russia says it …
    2022-02-12 11:18 New Zealand urges its citizens to leave Ukraine.
    2022-02-12 11:02 Blinken to speak with Russian Foreign Minister Lavrov today.
    2022-02-12 10:53 Poroshenko demands Zelensky convene national security meeting to discuss threat of Russian invasion.
    2022-02-12 10:28 Ukrainian State-Owned Enterprises Weekly – Issue 64
    2022-02-12 09:54 Covid-19 in Ukraine: 38,212 new cases, 265 new deaths, and 20,260 new vaccinations.
    2022-02-12 01:26 Estonia urges citizens to leave Ukraine.
    2022-02-12 01:10 Michael Bociurkiw: Ukrainians are wondering if their comedian-turned-president can handle the world stage.
    2022-02-12 01:06 NBC: US intelligence reports 9 Russian routes into Ukraine, amid possible full-scale Russian invasion.
    2022-02-12 01:05 US warns that Russia could attack Ukraine at ‘any moment’
    2022-02-12 00:33 Foreign Minister Kuleba: Ukraine and its partners are ready to take decisive action to protect Ukraine.
    2022-02-11 23:56 Ukraine acknowledges threat of Russian …
    2022-02-11 23:33 European Commission not evacuating staff from Ukraine.
    2022-02-11 23:29 Emmanuel Macron will speak to Vladimir Putin on Feb. 12.
    2022-02-11 23:22 US: Russian invasion likely to start with air attack.
    2022-02-11 23:15 EU urges non-essential diplomats to leave Ukraine.
    2022-02-11 23:11 Biden will talk to Putin amid threat of imminent invasion.
    2022-02-11 23:05 US to send 3,000 additional troops to Poland in upcoming days.
    2022-02-11 22:58 White House: Russia now has enough forces to invade Ukraine.
    2022-02-11 22:52 CNN: US intelligence fueled alarming headlines to disrupt further Russian invasion, deter military action.
    2022-02-11 22:39 Embassies of 8 countries urge citizens to leave Ukraine immediately.
    2022-02-11 22:08 US warns of imminent Russian invasion.
    2022-02-11 21:14 UK Foreign Office advises British citizens against traveling to Ukraine.
    2022-02-11 20:06 Ukraine officially activates OSCE Risk Management as Russia’s military build-up continues.
    2022-02-11 19:47 Ukraine imposes sanctions against pro-Kremlin TV channel Nash.
    2022-02-11 19:02 Ukrainian Olympic athlete makes first major political statement at Beijing Games, asking for ‘no war.’
    2022-02-11 18:55 Blinken: Russia could launch renewed invasion of Ukraine during Olympics.
    2022-02-11 18:10 Instant noodle producer Rollton to invest $100 million to expand Ukraine plant.
    2022-02-11 17:44 Netherlands advises citizens to leave Ukraine.
    2022-02-11 17:24 Hungary will not accept any more NATO troops on its soil.
    2022-02-11 16:55 EU council approves additional 1.2 billion euro macro-financial assistance to Ukraine.
    2022-02-11 16:45 Samsung subsidiary Harman International has acquired Ukrainian startup Apostera.
    2022-02-11 16:31 World Bank to provide Ukraine $300 million for energy efficiency projects in various cities.
    2022-02-11 15:57 Czech Republic plans to officially support Ukraine’s path to EU membership.
    2022-02-11 15:40 Satellite images show new deployments of Russian troops and equipment on three sides of Ukraine.
    2022-02-11 15:28 NATO to send military contingent to Romania, Bulgaria and Slovakia.
    2022-02-11 14:48 NABU accuses ex-head of state electricity auction of embezzling $75,000.
    2022-02-11 14:42 UK introduces legislation to strengthen sanctions on Russia.
    2022-02-11 14:38 Blinken: Russian invasion could come at any time.
    2022-02-11 14:20 Ukrainian Coast Guard: Russia confirms it will not block Azov Sea traffic for exercises
    2022-02-11 13:22 No conclusions at Normandy Four meeting after nine hours of talks.
    2022-02-11 13:02 Parliamentary tax committee chair proposes significant income tax reduction.
    2022-02-11 12:52 Questions about Russia’s war against Ukraine with defense reporter Illia Ponomarenko
    2022-02-11 12:49 Ukrainian Coast Guard: Russia confirms it will not block Azov Sea traffic for exercises.
    2022-02-11 12:02 KSE professor: Russian naval blockade would cost Ukraine $25-170 million per day.
    2022-02-11 11:57 Covid-19 in Ukraine: 41,229 new cases, 236 new deaths, and 20,826 new vaccinations.
    2022-02-11 11:29 Government to conduct inspections of six national TV channels for broadcasting classified Wagnergate documents.
    2022-02-11 10:23 President Biden warns US citizens to leave Ukraine as "things could go crazy quickly."
    2022-02-11 04:22 Marco Levytsky: Why is Canada so reluctant to provide lethal arms to Ukraine?
    2022-02-10 19:53 Pro-Kremlin TV channel Nash fined, may be stripped of broadcast license.
    2022-02-10 19:48 Belarusian regime opens probe into Dnipro mayor for supporting opposition.
    2022-02-10 18:50 Most members of main judicial body to resign over reform
    2022-02-10 17:41 Russia restricts regular seaways to Ukrainian ports.
    2022-02-10 17:32 Fuel prices hike, Covid-19 slashes Kyiv traffic by 30%.
    2022-02-10 16:41 Weekend in Kyiv – Feb. 11-13
    2022-02-10 16:34 Oligarch Oleksandr Yaroslavsky allegedly leaves Ukraine following a deadly traffic incident.
    2022-02-10 15:45 Western alarmism about Russia’s looming war threat harms Ukraine, says Turkish foreign minister.
    2022-02-10 15:21 Ukrainian scientists register heat record in Antarctica.
    2022-02-10 15:16 Russia considers withdrawing non-essential embassy staff from Ukraine.
    2022-02-10 14:48 Lithuania to send Ukraine Stinger anti-aircraft systems.
    2022-02-10 14:28 Ukraine’s military strive for long service contracts
    2022-02-10 12:59 Russia’s war cost Ukraine $280 billion
    2022-02-10 12:44 Kyiv’s Cold War-era bomb shelters in dire state (PHOTOS)
    2022-02-10 12:16 SBU says it prevented terrorist attack in Kyiv.
    2022-02-10 11:43 Belarus starts joint military drills with Russia.
    2022-02-10 11:23 US Senators: Russia’s cyberattacks on Ukraine to prompt sanctions even before potential invasion.
    2022-02-10 10:31 Covid-19 in Ukraine: 41,694 new cases, 280 new deaths, and 21,367 new vaccinations.
    2022-02-10 02:19 Osnat Lubrani: UN will continue to deliver for all Ukrainians leaving no one behind
    2022-02-09 20:02 Russia plans a blockade in the Black Sea and the Azov Sea.
    2022-02-09 18:57 Kyiv City Council lawmaker suspected of bribery flees investigation.
    2022-02-09 18:21 Ukraine received $1.5 billion worth of military aid from allies since the beginning of Russia’s military build-up.
    2022-02-09 18:13 Ukraine receives 80 tons of ammunition from U.S. on Feb. 9.
    2022-02-09 17:56 Red Cross, Coca-Cola HBC deepen partnership to boost corporate volunteering, support youth development in Ukraine
    2022-02-09 17:40 Vegetable prices increase by 20% in January.
    2022-02-09 17:01 U.S. approves evacuation plans for American citizens in Ukraine.
    2022-02-09 16:35 Japan to supply gas to Europe.
    2022-02-09 16:05 Lithuania prepares to accept Ukrainian refugees.
    2022-02-09 15:59 Ukraine receives military support from U.K.
    2022-02-09 15:07 EU pushes for de-escalation, offers more talks in letter to Russia.
    2022-02-09 14:37 Hazing being considered as motive for National Guardsman…
    2022-02-09 14:24 Kyiv’s Brookes CIL International School ranked among world’s best schools
    2022-02-09 13:48 Kremlin sees…
    2022-02-09 13:44 Is Ukraine ready for cyberwar with Russia? Fears mount as military build-up continues
    2022-02-09 12:42 Four more Crimean Tatars detained in Russian-occupied Crimea.
    2022-02-09 11:46 Ukraine’s embassy demands apologies after Slovak lawmakers mistreat the Ukrainian flag.
    2022-02-09 11:11 Ukraine’s GDP increased by 3.2% in 2021.
    2022-02-09 10:57 Poll: Most Europeans believe Russia will invade, want NATO, EU to defend Ukraine if it does.
    2022-02-09 10:09 Kyiv to update emergency warning system amid threat of another Russian invasion.
    2022-02-09 09:34 Covid-19 in Ukraine: 38,257 new cases, 240 new deaths, 21,212 new vaccinations.
    2022-02-09 04:30 Eugene Czolij: Still not too late for Germany to honor its commitment to Ukraine
    2022-02-08 23:33 Ukrainian bakery in Toronto vandalized with anti-Ukrainian graffiti
    2022-02-08 22:27 NATO chief: Risk of further Russian invasion of Ukraine increasing.
    2022-02-08 20:30 6 Russian warships enter Black Sea for naval drills.
    2022-02-08 20:22 Macron: Minsk Agreements only way to stop the war
    2022-02-08 18:36 3 Ukrainian regions move into red quarantine zone.
    2022-02-08 18:24 Zelensky: Both sides will suffer if Belarus halts trade.
    2022-02-08 17:21 Macron: Minsk Agreements ‘only way’ to stop war in Ukraine.
    2022-02-08 15:51 Possible escalation in Ukraine may undermine global food security, provoke sky-high food prices
    2022-02-08 15:49 Security Service shuts down 2 Russian-backed ‘bot farms’ in Lviv.
    2022-02-08 14:55 Ukrainian foreign ministry denies Zelensky-Baerbock meeting cancelled over Nord Stream 2.
    2022-02-08 14:13 Reznikov: 140,000 Russian troops now on Ukraine…
    2022-02-08 13:35 Diia platform adds place of registration, criminal records, building permits.
    2022-02-08 13:01 Kremlin denies promising no further military maneuvers near Ukraine to Macron.
    2022-02-08 12:45 Ukraine to hold military exercises in response to Russian maneuvers in Belarus.
    2022-02-08 12:08 Vaccinated over 60s in Ukraine to be offered free smartphone and cheap tariff.
    2022-02-08 11:51 President’s office denies Macron transition period bill claims.
    2022-02-08 11:38 Polish government prepared to host up to a million Ukrainian refugees.
    2022-02-08 11:12 Macron offers "concrete security guarantees" to Russia.
    2022-02-08 10:51 Ukraine approves antigen tests for confirming asymptomatic Covid-19 infection.
    2022-02-08 10:43 Putin says Ukraine "must fulfil" Minsk protocol.
    2022-02-08 10:34 Covid-19 in Ukraine: 34,353 new cases, 255 new deaths, 19,526 new vaccinations.
    2022-02-08 10:26 CNN: Zelensky-Baerbock meeting cancelled over non-supply of arms and Nord Stream 2.
    2022-02-08 10:04 Biden promises to halt Nord Stream 2 if Russia further invades Ukraine.
    2022-02-07 22:37 Will Russia launch large-scale war or limited invasion? Analysis of 3 popular scenarios
    2022-02-07 20:14 UK to send 350 troops to Poland as Russia escalates aggression.
    2022-02-07 20:12 European Commission president: Russia uses gas supplies as political leverage.
    2022-02-07 20:02 NATO considers long-term military presence in eastern Europe.
    2022-02-07 19:52 Remittances to Ukraine from migrant workers reach all-time high.
    2022-02-07 19:35 Aide to prosecutor general assaulted in Kyiv.
    2022-02-07 19:20 Court delays arrest hearing for Zelensky…
    2022-02-07 18:30 Audit alleges financial violations worth $460 million at Energoatom.
    2022-02-07 18:04 Investigators claim ex-Zelensky…
    2022-02-07 17:26 Ukraine cuts Covid-19 self-isolation period.
    2022-02-07 16:56 US, NATO create air bridge to supply arms to Ukraine.
    2022-02-07 16:36 Ukraine supporters rally near White House
    2022-02-07 16:27 NATO considers increasing military presence in Baltic states.
    2022-02-07 15:37 Alexander Lukashenko says Belarusian troops may invade Ukraine.
    2022-02-07 14:28 Ukrainian film ‘Rhino’ to premiere on Netflix in spring.
    2022-02-07 14:20 PM Shmyhal: 95% of Ukrainian goods won’t…
    2022-02-07 14:14 Zelensky party’s MP detained for allegedly accepting $20,000 bribe.
    2022-02-07 14:09 Russia’s Gazprom increases gas transit through Ukraine.
    2022-02-07 12:47 Kryvyi Rih city council officials suspected of $25 million embezzlement.
    2022-02-07 12:11 Head of Lithuanian parliament says Russia’s growing military threat may accelerate Ukraine’s path to NATO membership.
    2022-02-07 12:07 Biden security advisor says Russia can annex Ukraine’s occupied Donbas region.
    2022-02-07 11:27 Covid-19 in Ukraine: 23,378 new cases, 115 new deaths, and 18,626 new vaccinations.
    2022-02-07 10:43 Scholz: Renewed Russian invasion will trigger ‘united and decisive response.’
    2022-02-07 05:44 William J. Broad: Ukraine gave up a giant nuclear arsenal 30 years ago. Today there are regrets
    2022-02-07 03:23 Canada’s Ukrainian communities rally in support of Ukraine
    2022-02-06 21:59 Defense minister: Probability of Russia’s military escalation remains low.
    2022-02-06 19:18 Ex-defense minister: Moscow has enough troops to capture Kyiv but not to occupy country.
    2022-02-06 18:40 Expats rally in Kyiv to show solidarity with Ukraine.
    2022-02-06 18:02 Macron hints that West may need to make diplomatic compromises with Kremlin.
    2022-02-06 17:32 New satellite images show Russian troop build-up at three locations in Belarus.
    2022-02-06 14:09 US troops reinforcing NATO allies in eastern Europe land in Poland.
    2022-02-06 13:41 Eighth US shipment of military aid carrying 86 tons of ammunition lands in Kyiv.
    2022-02-06 12:22 US urges Japan to consider sanctions on Russia if Kremlin further invades Ukraine.
    2022-02-06 11:50 Germany once again refuses weapon deliveries to Ukraine.
    2022-02-06 11:22 Russia assembles 70% of combat power needed for all-out invasion of Ukraine.
    2022-02-06 10:40 US warns of enormous human costs if Russia launches full-scale invasion of Ukraine.
    2022-02-06 05:58 Łukasz Adamski: Putin’s Ukraine playbook echoes the tactics of Russian imperialism.
    2022-02-05 19:17 Thousands of Ukrainians attend Unity March in downtown Kharkiv.
    2022-02-05 18:45 Ukrainian State-Owned Enterprises Weekly – Issue 63
    2022-02-05 18:27 Ukrainian comedian turned politician announces new party.
    2022-02-05 17:24 Ukraine holds urban warfare drills in Chornobyl Zone
    2022-02-05 16:00 Russia deployed 10,000 troops in occupied Crimea over past 2 weeks.
    2022-02-05 13:44 Ukraine registers nearly 90,000 deals on farmland sales after the launch of the land market in July.
    2022-02-05 13:09 US troops arrive in Wiesbaden following Pentagon…
    2022-02-05 12:13 German Bild publishes Kremlin…
    2022-02-05 11:24 Canada sends Ukraine non-lethal military equipment, amid growing war threats.
    2022-02-05 10:46 Fitch downgrades Ukraine…
    2022-02-05 03:49 Anne Applebaum: The reason Putin would risk war.
    2022-02-04 20:25 YouTube blocks channels associated with Medvedchuk.
    2022-02-04 20:14 Russia narrowly beats Ukraine, advances to UEFA Futsal Euro finals.
    2022-02-04 19:45 SBU busts Russia-backed bot farm in Zhytomyr Oblast.
    2022-02-04 19:23 Kuleba, Blinken discuss further steps to deter Russia.
    2022-02-04 18:48 Supreme Court rejects Anti-Monopoly Committee lawsuit against Ukrnafta for alleged anti-competitive behavior.
    2022-02-04 18:32 Media: …
    2022-02-04 18:31 Editorial: Western allies should do more for Ukraine to correct their past mistakes.
    2022-02-04 18:26 Poll: 71% of Germans are against sending weapons to Ukraine.
    2022-02-04 17:26 Kyiv mayor Vitali Klitschko’s party to run in next elections.
    2022-02-04 17:22 Poll: Ukrainians say they need at least $890 per month to live…
    2022-02-04 16:45 Maskless Putin appears asleep during Ukrainian team’s introduction at Winter Olympics.
    2022-02-04 15:47 Natalie Jaresko steps down as head of Puerto Rico’s financial board after successful debt restructuring.
    2022-02-04 15:41 German Chancellor Olaf Scholz to meet with Putin on Feb. 15.
    2022-02-04 15:38 Naftogaz to import 300 million cubic meters of gas from EU.
    2022-02-04 15:31 German Chancellor Olaf Scholz to meet with Putin on Feb. 15.
    2022-02-04 15:26 Weekend in Kyiv – Feb. 4-6
    2022-02-04 15:08 Putin, Xi sign joint statement condemning NATO enlargement.
    2022-02-04 14:59 Erdogan says Zelensky agreed to meet Putin in Turkey.
    2022-02-04 14:39 Polish senate unanimously passes resolution providing Ukraine with political, military support.
    2022-02-04 13:22 EU prepares sanctions package to be imposed should Russia invade Ukraine.
    2022-02-04 12:52 Students demand resignations over university election scandal.
    2022-02-04 12:23 What we know about Polish anti-aircraft weapons sent to Ukraine
    2022-02-04 12:03 Hungary blocks Ukraine from joining NATO…
    2022-02-04 11:38 US says closer relations with China will not alleviate economic sanctions imposed on Russia.
    2022-02-04 10:50 OSCE: Russian-led militants deploy additional tanks, artillery near frontline.
    2022-02-04 10:48 What’s going on in Ukraine? Watch these films to understand Russia’s ongoing war
    2022-02-04 10:38 Ukraine documents 43,778 new Covid-19 cases on Feb. 3, surpassing previous daily record.
    2022-02-04 10:25 Macron to meet Putin and Zelensky in separate talks next week.
    2022-02-04 04:19 Brian Whitmore: While the world watches Ukraine, Putin is quietly occupying Belarus.
    2022-02-04 01:01 Canadian military veteran wants his country to act on Ukraine
    2022-02-03 20:55 Commission fails to nominate anti-graft prosecutor for third time
    2022-02-03 20:28 New York Times: US exposes Russian effort to fabricate pretext for invasion.
    2022-02-03 19:16 Erdogan visits Kyiv, signs long-anticipated free trade agreement
    2022-02-03 19:09 Spanish pop star Rosalia shoots music video in Kyiv.
    2022-02-03 18:14 Russia transfers patrol boats to Azov Sea.
    2022-02-03 17:48 Zelensky’s head of administration has Covid-19.
    2022-02-03 17:23 Ukraine signs free trade agreement with Turkey.
    2022-02-03 17:21 Vegetable oil prices hit record high.
    2022-02-03 16:35 US plane with 85 tons of ammunition for grenade launchers arrives in Kyiv.
    2022-02-03 16:10 Zaha Hadid Architects reveals design of 3 metro stations to be opened in Dnipro.
    2022-02-03 15:25 Lawmaker wanted for alleged corruption.
    2022-02-03 14:11 Voice party leadership investigated for misuse of funds.
    2022-02-03 13:36 NATO chief: Biggest Russian troop presence in Belarus since Cold War.
    2022-02-03 13:31 Supreme Court rules deputy NBU head reprimand illegal.
    2022-02-03 12:55 Ukraine denies sending drone to Belarusian territory.
    2022-02-03 11:52 Ukraine to fully staff peacetime territorial defense by March.
    2022-02-03 11:24 US confirms authenticity of leaked response to Russia.
    2022-02-03 11:20 Ukraine to sign deal with Turkey on producing military drones.
    2022-02-03 10:57 Defense minister: Significant escalation by Russia unlikely at present.
    2022-02-03 10:47 SBU stopped 121 cyberattacks in January.
    2022-02-03 10:23 New daily record of 39,620 Covid-19 cases.
    2022-02-03 04:30 Andrew D’Anieri: The new Ukraine needs a new census.
    2022-02-03 01:41 Feb. 2 news wrap-up.
    2022-02-03 01:01 Dutch PM urges justice for MH17 victims, pushes to continue dialogue with Russia.
    2022-02-02 23:07 Anna Myroniuk: I lost my home to Putin once. Now it can happen again.
    2022-02-02 22:57 Allan Pagh Kristensen: Fight against corruption requires more than anti-corruption institutions.
    2022-02-02 21:27 Kuleba: Russian attack is not…
    2022-02-02 15:34 Zelensky’s MP expelled from party after video showed him trying to bribe police
    2022-02-02 08:41 Feb. 1 news wrap-up
    2022-02-02 05:24 Timothy Snyder: Putin’s case for invading Ukraine rests on phony grievances and ancient myths
    2022-02-02 02:24 Eugene Czolij: Canada’s package to Ukraine to ensure peace is missing a crucial piece
    2022-02-01 23:46 Putin says West ‘ignored’ Russian security concerns in rare public remarks
    2022-02-01 23:17 Johnson: Sanctions the minute Russian troops further invade Ukraine
    2022-02-01 23:07 US media: Bridget Brink to become ambassador to Ukraine
    2022-02-01 20:51 Authorities search ex-head of Naftogaz Kobolyev’s home over contentious debt settlement
    2022-02-01 18:30 UK provides Ukraine with $120 million in financial aid
    2022-02-01 18:06 Zelensky issues decree to bolster Ukraine’s military
    2022-02-01 17:57 UK’s defense aid to Ukraine skyrockets amid Russia threat
    2022-02-01 17:35 Zelensky announces trilateral partnership between Ukraine, Poland, UK
    2022-02-01 09:56 Jan. 31 news wrap-up
    2022-02-01 05:18 Yevhen Fedchenko: Western sanctions must target Putin’s propagandists
    2022-02-01 01:59 US, Russia clash at UN Security Council meeting over Ukraine
    2022-01-31 21:32 Top Kyiv official Panteleyev charged with negligence over environmental damage
    2022-01-31 20:52 Poland to provide Ukraine with defensive weapons
    2022-01-31 14:19 Ukrainian State-Owned Enterprises Weekly – Issue 62
    2022-01-31 10:29 Want to help Ukraine’s military as a foreigner? Here’s what you can do
    2022-01-31 03:25 Aura Sabadus: Will Putin deploy his energy weapons against Europe?
    2022-01-30 23:57 Weekend news wrap-up
    2022-01-30 05:38 Yuri Polakiwsky: A lend-lease agreement as a way forward for Ukraine
    2022-01-29 19:22 Embassy advice guide for foreign nationals in Ukraine
    2022-01-29 18:48 Ukrainian director detained in Italy at Russia’s request removed from Interpol wanted list
    2022-01-29 15:45 Deputy economy minister: Ukraine’s GDP hit $200 billion for first time in 30 years
    2022-01-28 22:28 Zelensky party’s lawmaker reportedly caught taking bribe
    2022-01-28 21:34 Olena Goncharova: Ukraine is not ‘the Ukraine’ and why it matters now
    2022-01-28 20:36 Zelensky calls on other countries, media to not cause fear about Russian escalation
    2022-01-28 19:23 Analysts forecast up to 52,000 daily Covid-19 cases in early February
    2022-01-28 18:38 Defense minister downplays Russian threat, says it’s similar to that of spring 2021
    2022-01-28 06:49 Pavel Felgenhauer: Russia and NATO locked in high-risk standoff in Mediterranean and Black Seas
    2022-01-28 01:38 Joe Varner: Abandoning Ukraine means surrendering the rules-based liberal world order
    2022-01-28 00:57 Zelensky, Biden discuss security, diplomacy, aid to Ukraine
    2022-01-27 22:27 Stanislav Aseyev: Russia’s bluff of the century. Will there be a war?
    2022-01-27 22:08 Eugene Czolij: The US must show leadership in countering Putin’s imperial ambitions
    2022-01-27 21:04 Hryvnia hits 7-year low amid Russian threat
    2022-01-27 20:29 Ukraine ratifies deal with UK to boost navy
    2022-01-27 18:41 US shared response to Russia’s security demands with Ukraine before sending
    2022-01-27 16:01 Weekend in Kyiv – Jan. 28-30
    2022-01-27 15:35 Conscript arrested after allegedly killing 5 people in unprecedented shooting spree (UPDATED)
    2022-01-27 13:30 Ukraine records highest-ever daily number of new Covid-19 cases
    2022-01-27 11:08 What we know about US bunker busters sent to Ukraine
    2022-01-27 10:35 Nord Stream 2 registers German subsidiary, bringing project closer to certification
    2022-01-27 08:54 Media in Progress Ep. 7: Company culture – What can make or break a team
    2022-01-27 08:38 Olena Goncharova: Canada must stand on guard for Ukraine
    2022-01-27 03:39 Group of authors: Understanding, confronting Russian aggression toward Ukraine
    2022-01-27 02:03 Jan. 26 news wrap-up
    2022-01-27 00:48 Czech Republic provides Ukraine with artillery rounds
    2022-01-26 23:58 US, NATO don’t cave in to Russian demands
    2022-01-26 21:19 Russia considers openly supplying weapons to its proxies occupying eastern Ukraine
    2022-01-26 21:06 (UPDATED) Yermak travels to Paris, meets Normandy Format advisors amid Russian escalation
    2022-01-26 19:21 Bloomberg: Germany, others seek exemptions in possible sanctions on Russia
    2022-01-26 18:08 Covid-19 cases, deaths rise in Kyiv amid latest outbreak
    2022-01-26 06:18 Oleksandr Pankieiev: The West must stand together on Russian aggression
    2022-01-26 03:57 Oksana Bashuk Hepburn: With friends like Germany, Ukraine needs no enemies
    2022-01-26 02:35 Jan. 25 news wrap-up
    2022-01-26 01:53 US delivers 300 more Javelins to Ukraine
    2022-01-25 21:30 Explainer: Is Poroshenko treason case justice or political persecution?
    2022-01-25 17:31 Transparency International: Ukraine’s fight against corruption stagnated in 2021
    2022-01-25 15:39 British instructors train Ukrainian military to operate NLAW tank killers (PHOTOS)
    2022-01-25 11:24 Early look at Ukraine’s exhibit at Venice Art Biennale – exploration of world’s exhaustion
    2022-01-25 04:58 Lubomyr Luciuk: When it comes to Ukraine’s national security, Vladimir Putin has already won
    2022-01-25 03:39 James Batchik & Doug Klain: It’s time for Europe to defend Ukraine — and itself
    2022-01-25 01:50 Jan. 24 news wrap-up
    2022-01-25 01:16 Ukrainian, Russian radio enthusiasts battle over alleged Russian military frequency
    2022-01-24 22:10 4 European airlines pull back from overnight stay in Kyiv amid invasion fears
    2022-01-24 20:37 Ukraine spends over $7 billion on road repairs in 2 years, plans to keep tempo going
    2022-01-24 18:25 UK begins to withdraw non-essential embassy staff, EU ‘won’t do the same,’ says Borrell
    2022-01-24 16:51 European Commission proposes new $1.35 billion loan to Ukraine amid invasion threat
    2022-01-24 16:28 Center for Defense Strategies: How likely is large-scale war in Ukraine? (analysis)
    2022-01-24 15:01 NATO sends more military power to eastern flank
    2022-01-24 08:34 Weekend news round-up
    2022-01-24 06:25 Bohdan Vitvitsky: Disarming Putin’s history weapon
    2022-01-24 03:32 Timothy Ash: Putin the gambler may have gone too far to back down
    2022-01-24 02:29 US orders diplomats’ families to leave Kyiv, citing ‘threat of Russian military action’
    2022-01-23 19:34 Glovo acquires Ukrainian grocery delivery Zakaz.ua
    2022-01-23 12:13 Who is Murayev, the man UK exposes as potential leader of Kremlin’s coup
    2022-01-23 03:59 Mychailo Wynnyckyj: Ukrainian voices are missing from the drama over Ukraine’s future
    2022-01-22 23:00 Kyrgyz journalist arrested after exposing alleged top-level corruption
    2022-01-22 20:26 Selection panel launches judicial reform
    2022-01-22 17:40 Ukrainian State-Owned Enterprises Weekly – Issue 61
    2022-01-22 12:31 Andrew Fink: Putin’s anti-democratic crusade
    2022-01-22 12:27 CNN: US Embassy in Kyiv asks authorization to evacuate non-essential staff
    2022-01-22 11:54 Jan. 21 news wrap-up
    2022-01-22 01:55 Amy Knight: On Ukraine, NATO and more, Russia’s Vladimir Putin lives in an alternative reality
    2022-01-21 22:42 US prosecutors move to seize $6 million from Kolomoisky in Texas
    2022-01-21 22:07 Poland pledges support to Ukraine in face of Russian threat
    2022-01-21 21:27 Nations expand defense aid to Ukraine as Russia threatens big war
    2022-01-21 20:43 US, Russia agree to continue Ukraine talks after Blinken-Lavrov meeting
    2022-01-21 15:10 Ukrainian rap diva Alina Pash wins EU award for young artists
    2022-01-21 14:15 Russian parliament to consider recognizing Donbas proxies as independent states
    2022-01-21 12:41 Zelensky: Russia could invade Kharkiv
    2022-01-21 08:30 Jan. 20 news wrap-up
    2022-01-21 05:15 Pavel Felgenhauer: Russian troops deploy to Belarus with fanfare
    2022-01-21 02:01 Harley Balzer: Don’t believe Putin’s propaganda. Sanctions are hurting Russia
    2022-01-20 22:47 Investigation: Russian troops sent toward Ukraine for mysterious 6-9-month missions
    2022-01-20 21:32 Ukraine allegedly loses $40 million in taxes due to shady gasoline production
    2022-01-20 21:26 Weekend in Kyiv – Jan. 21-23
    2022-01-20 20:41 Alexander Query: Macron’s ill-timed bluff puts Ukraine at risk (op-ed)
    2022-01-20 20:13 Peskov makes veiled threat: Sanctions will lead to invasion of Ukraine
    2022-01-20 19:28 Tainted top judicial officials resign ahead of reform
    2022-01-20 18:53 Health ministry announces new Covid-19 outbreak in western Ukraine
    2022-01-20 18:46 Zelensky responds to Biden: ‘There are no minor incursions’
    2022-01-20 17:53 US Treasury announces sanctions against 4 pro-Kremlin Ukrainians
    2022-01-20 16:26 US Presbyterian minister’s mission to find a home for every orphan in Ukraine
    2022-01-20 14:53 James Rogers: Is Britain now Ukraine’s closest ally?
    2022-01-20 12:28 Reform watch: Rule of law reforms blocked by corrupt actors
    2022-01-20 06:09 OCCRP: How Kazakhstan’s Nazarbayev controls vast assets through charitable foundations (INVESTIGATION)
    2022-01-20 03:34 Victor Tregubov: Plunging into icy water on Epiphany isn’t an old Ukrainian tradition. It’s Russian
    2022-01-20 03:16 Jan. 19 news wrap-up
    2022-01-20 02:03 Biden predicts Russia will ‘move in’ on Ukraine, while Zelensky downplays invasion threat
    2022-01-19 21:36 Blinken visits Kyiv, warns Russia might attack ‘at very short notice,’ asks about reforms
    2022-01-19 20:43 People plunge into icy water in Kyiv to mark Epiphany (PHOTOS)
    2022-01-19 18:18 US to provide Ukraine with additional $200 million military aid package
    2022-01-19 16:41 White House: Russia was preparing evacuation from Kyiv embassy amid rising tensions
    2022-01-19 14:37 Court rules not to arrest Poroshenko
    2022-01-19 02:29 Ukraine Daily: Jan. 18 news wrap-up
    2022-01-19 00:24 Government to create 150 Territorial Defense battalions
    2022-01-19 00:09 Ukrainian bonds plunge amid Russia’s military buildup
    2022-01-18 23:06 Germany digs in heels against harder measures to restrain Russia
    2022-01-18 21:36 Russia moves troops to Belarus ahead of February joint military drills
    2022-01-18 18:34 Blinken to visit Kyiv, Berlin on Jan. 18-20
    2022-01-18 17:06 What we know about British tank killers likely sent to Ukraine
    2022-01-18 16:38 Omicron fears close 37 air routes in Ukraine
    2022-01-18 15:54 Michael Bociurkiw: For Putin, Kazakhstan is a domino too big to fall
    2022-01-18 10:54 Ukraine Daily: Jan. 17 news round-up
    2022-01-18 01:06 UK provides Ukraine with anti-tank weapons
    2022-01-17 23:51 Poroshenko arrest hearing: Prosecution asks for $37 million bail, court deliberates for 11 hours without result
    2022-01-17 22:30 US senators visit Kyiv in show of support
    2022-01-17 19:42 New localization law seeks to revive decayed machine building industry
    2022-01-17 18:37 Ukrainian electronic duo Artbat to perform at Coachella
    2022-01-17 17:54 Ukrainian scientists register Tonga eruption effects in Antarctica (VIDEO)
    2022-01-17 11:04 Eugene Czolij: West must decisively enforce basic principles of UN charter for global peace, security
    2022-01-17 10:34 Poroshenko returns to Ukraine, faces possible arrest
    2022-01-17 09:43 Ukraine Daily: Weekend news round-up
    2022-01-16 20:29 Ukraine: Evidence implies Russia behind cyberattack on government websites
    2022-01-16 18:29 Oksana Bashuk Hepburn: How to ensure Putin’s withdrawal from Ukraine
    2022-01-16 17:15 Ukraine expects good winter crop harvest
    2022-01-15 18:34 Veronika Melkozerova: How looming Russian invasion changed lives of Ukrainians
    2022-01-15 16:42 US giant chipmaker Qualcomm acquires Ukrainian startup Augmented Pixels
    2022-01-15 15:45 Russian-led militants release toxic ammonia in Donbas, provoking false-flag fears
    2022-01-15 13:45 Ukrainian State-Owned Enterprises Weekly – Issue 60
    2022-01-15 10:49 Ukraine Daily: Jan. 14 news round-up
    2022-01-14 18:39 Yermak: Ukraine proposes trilateral talks with US, Russia
    2022-01-14 17:48 Russian military runs combat readiness tests in Far East
    2022-01-14 13:37 Biden aide: Russia prepares pretext for potential invasion of Ukraine
    2022-01-14 12:45 Ukraine Daily: Jan. 13 news round-up
    2022-01-14 09:07 Major cyberattack hits Ukrainian government websites (UPDATED)
    2022-01-13 21:38 Diplomacy week ends with no resolution, Russian threats
    2022-01-13 21:06 Gyunduz Mamedov: Ukraine must hold Iran to justice over flight PS752
    2022-01-13 19:45 Ukraine land sales reach $200 million six months after launch
    2022-01-13 19:16 First Ukrainian satellite in 10 years launched by SpaceX (VIDEO)
    2022-01-13 18:12 Weekend in Kyiv – Jan. 14-16
    2022-01-13 15:01 Investigators seize Bilshovyk shares over suspected corruption
    2022-01-13 14:23 Timothy Ash: Putin is preparing for war
    2022-01-13 10:36 Media in Progress Ep. 6: Popular protest, inter-elite feuds or Russian intervention – What’s going on in Kazakhstan?
    2022-01-13 08:41 Ukraine Daily: Jan. 12 news round-up
    2022-01-12 22:08 Russia, NATO remain divided on key issues
    2022-01-12 21:29 Ukraine introduces price regulation for basic foodstuffs until end of pandemic
    2022-01-12 20:24 Michael Khodarkovsky: What is next for Russia
    2022-01-12 20:02 Court orders closure of bribery case against top member of Zelensky’s administration
    2022-01-12 19:22 Ukrainian company creates New Year light show for Burj Khalifa in Dubai
    2022-01-12 18:28 Ukrzaliznytsia generates $16 million profit in 2021 after pandemic losses
    2022-01-12 14:31 Bootleg booze costs Ukraine $330 million a year in uncollected taxes
    2022-01-12 10:41 How Zelensky’s administration moves to dismantle press freedom in Ukraine
    2022-01-12 08:40 Ukraine Daily: Jan. 11 news round-up
    2022-01-11 22:26 US Republicans draft bill to designate Ukraine a ‘NATO Plus’ state, sanction Russia
    2022-01-11 20:31 Registration of electric cars continues to grow in Ukraine
    2022-01-11 20:15 Ukraine receives $450 million in foreign defense aid in 2021
    2022-01-11 19:47 Ukraine produced 5% more electricity in 2021 despite coal shortage
    2022-01-11 16:09 Ukraine to consider vaccination of children over 5 years old
    2022-01-11 15:30 Ukrainian State-Owned Enterprises Weekly – Special issue: Top 2022 events to watch
    2022-01-11 11:28 Annual inflation reaches 10% in Ukraine for first time in 3 years
    2022-01-11 08:28 Ukraine Daily: Jan. 10 news round-up
    2022-01-10 22:20 After long silence, Ukraine makes first tepid statement on violence in Kazakhstan
    2022-01-10 21:43 Diplomacy week kicks off as US, Russia meet to discuss Ukraine
    2022-01-10 19:11 Mriya aircraft endures minor breakdown in Poland
    2022-01-10 19:01 Court extends Medvedchuk’s house arrest in treason case
    2022-01-10 17:09 Ukraine to reduce validity of Covid-19 certificates to 9 months
    2022-01-09 17:01 Robert A. McConnell: Talk won’t deter Putin. Here’s what West can do
    2022-01-09 16:38 Security Council head: PS752 shootdown was premeditated terror attack
    2022-01-09 15:53 Ukraine bans gender stereotyping, sexism in job listings, advertising
    2022-01-08 20:33 400,000 new jobs created in first 9 months of 2021, signaling resurgence in employment
    2022-01-08 20:23 Top ally of ex-President Nazarbayev arrested amid Kazakh uprising
    2022-01-08 20:15 Activists allege Ukraine’s SBU has launched crackdown on opponents of Kazakh regime (GRAPHIC)
    2022-01-08 16:42 John E. Herbst: How Kazakhstan could shift Putin’s calculus on Ukraine
    2022-01-08 08:49 Ukraine Daily: Jan. 7 news round-up
    2022-01-07 22:20 Who can and can’t join Ukraine’s Territorial Defense Force
    2022-01-07 21:07 Kazakh government regains control with Kremlin’s help amid uprising
    2022-01-07 19:56 How well do you know Ukrainian Christmas traditions? (QUIZ)
    2022-01-07 13:32 Timothy Ash: What Kazakhstan’s protests mean for the global economy
    2022-01-07 08:37 Ukraine Daily: Jan. 6 news round-up
    2022-01-06 20:28 Explainer: What happened in Kazakhstan, what comes next
    2022-01-06 17:59 Borrell: EU to apply full sanctions power if Russia escalates
    2022-01-06 17:53 Court seizes Poroshenko’s assets in treason case
    2022-01-06 16:13 Ukraine approves booster shots for adults
    2022-01-06 11:28 Prosecutors block accounts of steel giant ArcelorMittal in tax evasion probe
    2022-01-06 11:17 Ukraine Daily: Jan. 5 news round-up
    2022-01-05 21:05 Timothy Ash: What Kazakhstan unrest means for Ukraine amid upcoming US-Russia security talks
    2022-01-05 16:52 Stoltenberg to meet Kuleba before Russia security talks
    2022-01-05 15:43 Statement: 28 Ukrainian NGOs call for action against Russia’s closure of Memorial human rights group
    2022-01-05 08:28 Ukraine Daily: Jan. 4 news round-up
    2022-01-04 20:07 Canadian court awards $84 million ‘for lives lost to terrorism’ in 2020 Ukrainian plane downing in Iran
    2022-01-04 19:51 Ukraine to get at least 3 Mark VI boats in 2022
    2022-01-04 16:46 Netflix responds to criticism over ‘offensive’ Ukrainian character in ‘Emily in Paris’
    2022-01-04 15:59 Ukraine offers booster doses for people 60 and older, eases rules for mixing vaccines
    2022-01-04 08:35 Ukraine Daily: Jan. 3 news round-up
    2022-01-03 23:12 Ukrainian director detained in Italy on Russian extradition request
    2022-01-03 19:36 EU’s top diplomat Borrell to visit Ukraine on Jan. 4-6 in show of support
    2022-01-03 17:43 Scholz to meet with Putin in January to address military buildup, Nord Stream 2
    2022-01-03 06:12 Ukraine Daily: Jan. 2 news round-up
    2022-01-03 00:38 Biden promises Zelensky there will be no agreements about Ukraine behind its back
    2022-01-02 22:59 Ukraine closes inland waters to Russian ships
    2022-01-02 16:19 Health Ministry: Omicron infection wave to start in Ukraine in mid-January
    2022-01-01 19:33 Anti-corruption activists say head of Ukraine’s ‘FBI’ appointed after fake contest
    2021-12-31 13:11 These Ukrainian soldiers were killed in Donbas in 2021
    2021-12-31 10:08 Kyiv’s colorful, crowded Christmas markets in 10 best photos
    2021-12-30 21:30 Ukrainian State-Owned Enterprises Weekly – Special issue: Top 2021 events
    2021-12-30 21:00 10 companies that made Ukrainians’ lives easier in 2021
    2021-12-30 20:20 Ukraine’s soldiers may soon get better, warmer boots
    2021-12-30 19:52 Odesa among contenders to host World Expo 2030
    2021-12-30 17:43 EBRD loans Nova Poshta 13 million euros to build automated sorting center in Dnipro
    2021-12-30 15:43 War, crime, teenage angst: Best Ukrainian movies of 2021
    2021-12-30 10:26 Ukraine Daily: Dec. 30
    2021-12-30 08:30 Ukraine’s top achievements in 2021 from sports victories to reforms
    2021-12-29 21:00 World Bank study reveals effects of global warming on Ukraine’s agriculture, forests
    2021-12-29 18:45 Government extends duty on Russian fuel imports
    2021-12-29 17:53 Hospital fire kills 3 patients, injures staff in Ivano-Frankivsk Oblast
    2021-12-29 17:17 Year of musical introspection: Ukraine’s best albums of 2021
    2021-12-29 16:01 Letter: Panel members call on Venediktova to help unblock anti-graft prosecutor selection
    2021-12-29 14:03 Parliament proposes new duty on energy, fuel imports from Russia
    2021-12-29 10:35 Ukraine Daily: Dec. 29
    2021-12-28 22:58 Top 10 political scandals of 2021 in Ukraine
    2021-12-28 21:58 Antonov rolls out new An-178 aircraft
    2021-12-28 21:13 US will give $20 million to strengthen Ukraine’s border with Russia, Belarus
    2021-12-28 20:04 Kyiv reports first cases of Omicron Covid-19 variant
    2021-12-28 19:28 Zelensky’s party lawmaker buys nationwide television channel
    2021-12-28 17:02 Ukrainian documentary ‘Home Games’ available on Netflix in Europe
    2021-12-28 15:39 Ukraine was repeatedly bartered by great powers in the past. Will it happen again?
    2021-12-28 08:42 Ukraine Daily: Dec. 28
    2021-12-27 21:22 Kyiv to create territorial defense headquarters ahead of Russia’s potential invasion
    2021-12-27 20:22 Label of Kyiv’s techno club named among the best in 2021
    2021-12-27 19:24 Russia will meet with US to discuss Ukraine on Jan. 11
    2021-12-27 18:00 Security Council chief: Russia moved 600,000 people to Crimea since occupation
    2021-12-27 14:33 Tech in 2021: Ukrainian startups go public, partner with Elon Musk, raise millions
    2021-12-27 08:37 Weekend news wrap-up
    2021-12-26 12:29 Anatoly Motkin: Ukraine will become a role model for Eurasian IT markets
    2021-12-25 22:14 Explained: New requirement for Ukrainian women to register for possible military, civil defense service
    2021-12-25 19:46 Ukrainian State-Owned Enterprises Weekly – Issue 57
    2021-12-24 22:42 Panel head blocks appointment of anti-graft prosecutor, wants SBU to check winner
    2021-12-24 20:57 Putin’s press conference statements about Ukraine, debunked
    2021-12-24 19:22 Survey: Majority of Ukrainians support joining EU, NATO
    2021-12-24 17:38 Prosecutors reportedly seek to arrest ex-president Poroshenko with $37 million bail
    2021-12-23 23:48 State Ecological Inspectorate: Dnipro River might dry up in 300 years
    2021-12-23 22:15 Top general: Ukraine’s military will respond to enemy fire
    2021-12-23 20:27 Ukraine International Airlines to resume transatlantic flights in 2022
    2021-12-23 17:46 6 last-minute gift ideas from Ukrainian brands
    2021-12-23 17:15 Ukraine approves booster doses for healthcare, orphanage employees
    2021-12-23 08:38 Media in Progress Ep. 5: Will Russia invade again?
    2021-12-22 21:47 Russia has 122,000 troops close to Ukraine’s border
    2021-12-22 21:40 Supreme Court rejects prosecutor general’s libel suit against newspaper, anti-graft watchdog
    2021-12-22 20:52 HBO acquires Ukrainian war drama ‘Bad Roads’
    2021-12-22 18:30 Naftogaz complains of Gazprom market manipulation
    2021-12-22 18:20 5 key events during PrivatBank’s 5 years in state hands
    2021-12-22 15:45 Happy holidays in Kyiv: best parties, concerts, winter villages
    2021-12-22 14:55 Selection panel fails to appoint anti-graft prosecutor, undermines ties with West
    2021-12-22 10:38 Oleg Sukhov: US should sanction these 2 symbols of Ukraine’s corruption
    2021-12-22 03:50 Michael Bociurkiw: Checkmate. Putin has the West cornered
    2021-12-22 00:31 Slovakia sends emergency electricity to Ukraine over unit shutdown at DTEK plant
    2021-12-21 21:59 Poroshenko family’s companies fined Hr 283 million by Anti-Monopoly Committee
    2021-12-21 20:42 Ukraine boasts biggest harvest since independence
    2021-12-21 19:56 NATO chief Stoltenberg calls for summit with Russia in early 2022
    2021-12-21 18:23 First heavy snowfall hits Kyiv (PHOTOS)
    2021-12-21 17:39 Two Ukrainian political prisoners in occupied Donbas in critical condition
    2021-12-21 16:01 Number of individual entrepreneurs continued increasing sharply in 2021
    2021-12-21 00:47 Controversial court’s new ruling might cancel anti-corruption prosecutor contest
    2021-12-20 23:08 Ukraine launches its official Spotify account
    2021-12-20 19:18 Accounting Chamber outlines reasons for Ukrzaliznytsia’s Hr 12 billion in losses in 2020
    2021-12-20 19:03 Ukrainians spend Hr 1,000 vaccination bounty on cinemas, books, theaters
    2021-12-20 17:35 Ukrainian regional capitals will get new SkyUp air routes to Europe in 2022
    2021-12-20 16:30 BREAKING: Ex-President Poroshenko charged with high treason
    2021-12-19 19:46 More than 600,000 Ukrainians left homeland in 2021
    2021-12-19 19:19 World Bank to lend 300 million euros to back reforms, mitigate pandemic in Ukraine
    2021-12-19 18:21 Ukrainian startup Reface partners with Warner Bros. to promote new films
    2021-12-19 17:52 Deal reached in US Senate for vote on Nord Stream 2 sanctions
    2021-12-19 16:00 Zelensky signs Diia City bill that changes taxation for tech firms
    2021-12-19 13:11 Ukraine to purchase Molnupiravir pills to treat Covid-19
    2021-12-18 20:01 Ukrainian State-Owned Enterprises Weekly – Issue 56
    2021-12-18 17:28 Detectives try to summon Poroshenko, may charge him in Medvedchuk case
    2021-12-18 13:20 Omicron, new highly transmissible Covid-19 variant, detected in Ukraine
    2021-12-17 22:06 Poll: Over half of Ukrainians will actively resist Russian invasion
    2021-12-17 21:09 Ukraine might soon recognize Jerusalem as Israel’s capital
    2021-12-17 20:17 Kremlin drafts pacts to limit Western influence in post-Soviet countries
    2021-12-17 18:08 PM Shmyhal says coal shortage is resolved
    2021-12-17 16:57 SBU uncovers international criminal group that helped smuggle migrants to Europe
    2021-12-17 13:53 US Senate introduces bill to give Ukraine $450 million in military aid in 2022
    2021-12-17 08:42 Weekend in Kyiv – Dec. 17-19
    2021-12-17 01:44 Russian court accidentally documents Moscow’s military presence in Donbas
    2021-12-16 23:04 EU Parliament: Russia must be suspended from SWIFT if it invades Ukraine
    2021-12-16 21:37 Kurt Volker: Don’t let Russia fool you about Minsk agreements
    2021-12-16 20:28 Crimean Tatar traditional ornament added to UNESCO heritage list
    2021-12-16 19:21 NATO chief Stoltenberg says Russian military buildup continues unabated, as he meets Zelensky in Brussels
    2021-12-16 18:54 Kyiv Independent wins ‘Journalist of the Year’ award by Ukrainska Pravda
    2021-12-16 16:21 SBU busts Islamic State cell in Kyiv that could be led by top IS commander
    2021-12-16 16:17 NABU head Artem Sytnyk interview: 6 years of success or failure?
    2021-12-16 16:10 Ukraine to buy Pfizer pills that reduce risk of Covid-19 hospitalization, death
    2021-12-16 15:25 Ukrainian startup Awesomic attracts $2 million to expand its designer marketplace
    2021-12-16 08:17 Ukraine’s 2022 state budget: Defense among top priorities but still underfunded
    2021-12-16 07:40 Media in Progress Ep. 4: With you on our side
    2021-12-16 00:25 Coal shortages force Ukraine to switch to gas
    2021-12-15 23:56 Zelensky meets Macron, Scholz on sidelines of Eastern Partnership summit in Brussels
    2021-12-15 20:49 Ukraine extends adaptive quarantine until March 31
    2021-12-15 19:08 Infamous Ukrainian judge’s brother released on bail after bribery charge
    2021-12-15 17:32 Explainer: Why Russia wants autonomy for occupied Donbas (and why Ukraine doesn’t)
    2021-12-15 16:22 Court of appeal overturns decision favoring Kolomoisky’s company in PrivatBank case
    2021-12-15 12:02 Legendary Petrivka book market will be torn down, replaced by shopping mall
    2021-12-15 11:43 $1 billion suit against Bakhmatyuk is latest entry in Ukraine’s long chronicle of alleged bank fraud
    2021-12-14 19:35 SBU uncovers largest operation selling fraudulent Covid-19 certification
    2021-12-14 16:19 Health ministry: Unvaccinated patients are 9 times more likely to die from Covid-19
    2021-12-14 15:36 One of Lukashenko’s main rivals in 2020 election jailed for 18 years
    2021-12-14 14:38 German foreign minister: Ukraine ‘a factor’ in Nord Stream 2 certification delay
    2021-12-14 00:12 Deputy minister resigns after video of his altercation with police goes viral
    2021-12-14 00:01 Health minister falsely claims Ukraine reached WHO’s target of vaccinating 40% of population
    2021-12-13 20:19 Why Nord Stream 2 is centerpiece of Ukraine crisis with Russia (EXPLAINER)
    2021-12-13 16:05 Defense minister: Germany blocked weapon sales to Ukraine
    2021-12-13 15:29 Third Ukrainian graduates with distinction from top UK military academy
    2021-12-13 10:30 US think tank: Russia unlikely to invade Ukraine, wants to force concessions
    2021-12-12 19:59 Lomachenko defeats Commey, eyes title bout
    2021-12-12 19:02 G7 foreign ministers: Russia will face ‘massive consequences’ if it invades Ukraine
    2021-12-12 18:25 World Bank to give Ukraine $150 million for Covid-19 vaccination drive
    2021-12-12 17:15 NBU head complains of political pressure but isn’t worried about central bank’s independence
    2021-12-12 14:28 Ukrainian State-Owned Enterprises Weekly – Issue 55
    2021-12-11 20:50 Timothy Ash: Ukraine has much stronger macro, better able to endure Russian aggression
    2021-12-11 20:23 Ukraine imposes additional taxes on foreign tech: What changes for Google, Facebook users
    2021-12-11 15:25 Kyiv city administration illuminates its building with LGBTQ colors on Human Rights Day
    2021-12-10 22:37 Police officer, medical worker suspected of Covid-19 vaccination forgery
    2021-12-10 19:53 Zelensky leads presidential poll with 23.5%, Poroshenko polls second
    2021-12-10 17:41 Zelensky, Macron evoke unblocking Normandy Format amid security crisis
    2021-12-10 16:26 White House: US will not pressure Ukraine to grant autonomy to Donbas
    2021-12-10 12:24 West urges Ukraine to complete key judicial, anti-corruption reforms quickly
    2021-12-10 00:29 Zelensky, Biden hold talks on potential Russian invasion
    2021-12-09 22:38 US imposes sanctions on Constitutional Court chief, ex-Yanukovych official
    2021-12-09 21:04 Belgian green energy producer launches arbitration to recover $79 million from Ukraine
    2021-12-09 18:59 State mine workers block roads in Lviv Oblast over unpaid wages
    2021-12-09 17:52 Javelin anti-tank missiles seen deployed to Donbas
    2021-12-09 16:37 Ukraine’s Twitter meme ridiculing Russia goes viral
    2021-12-09 16:06 UK will grant extra $1.3 billion of export financing for investments in Ukraine
    2021-12-09 13:42 US fund Gramercy sues exiled tycoon Bakhmatyuk over alleged $1 billion fraud
    2021-12-09 12:30 Anti-corruption body: Medvedchuk failed to declare $2.68 million in assets
    2021-12-09 11:35 Media in Progress Ep. 3: Who, what, where
    2021-12-08 21:39 US defense bill includes $300 million aid for Ukraine
    2021-12-08 21:16 What we know about Biden-Putin call about looming invasion of Ukraine
    2021-12-08 18:14 Journalist: EU imposes sanctions on Kremlin’s mercenary Wagner Group that fought in Donbas
    2021-12-08 16:14 Kyiv, 8 oblasts leave ‘red’ quarantine zone
    2021-12-08 14:46 ‘Ukrainians Will Resist’ hashtag trends amid looming Russian invasion
    2021-12-08 13:18 Ukrainian sports tech startup raises $6 million in cryptocurrency offering
    2021-12-08 12:50 Osnat Lubrani: UN’s approach to combating gender-based violence in Ukraine
    2021-12-07 22:40 Biden, Putin hold talks about Russia’s potential invasion of Ukraine
    2021-12-07 22:13 After months of delay, anti-corruption prosecutor selection moves forward
    2021-12-07 21:15 Media: Ukrenergo to replace supervisory board
    2021-12-07 20:51 13 people killed, 7 injured in car crash outside Chernihiv
    2021-12-07 19:22 Russia brings more tanks, artillery to contact line in Donbas
    2021-12-07 14:52 Ukrainian World War II drama wins at two film festivals in Europe
    2021-12-07 12:14 Reform Watch: Judicial reform moves forward but corruption still reigns supreme
    2021-12-07 11:05 Security Service arrests 13 Russian planes for flying to Crimea illegally
    2021-12-07 01:55 Media: US, Europe prepare harsh sanctions against Russia in case of Ukraine invasion
    2021-12-06 21:54 Health Ministry warns of new scam targeting vaccinated people
    2021-12-06 20:58 Canada decides against sending more military personnel to Ukraine
    2021-12-06 17:17 Second shipment of coal arrives in Ukraine to meet demand
    2021-12-06 15:49 Ukrainians borrowed record $1.8 billion in microloans in 2021
    2021-12-06 13:53 Police suspect arson after journalist’s cars found burned
    2021-12-05 21:14 Bloomberg: Turkey ready to sell over 20 Bayraktar drones to Ukraine
    2021-12-05 19:55 Belarus claims Ukrainian military helicopter violated its airspace, threatens consequences
    2021-12-05 19:13 Dota 2 developer changes tournament logo because of Ukraine’s decommunization law
    2021-12-05 15:13 Ukraine to intensify Covid-19 restrictions in ‘yellow’ zones on Dec. 6
    2021-12-04 15:44 Biden: ‘I will not accept Russia’s red lines on Ukraine’
    2021-12-04 11:37 Q&A with Brian Bonner, ex-chief editor of Kyiv Post
    2021-12-03 21:50 Ukrainians declare assets worth $9.25 million in tax amnesty
    2021-12-03 20:36 Ukraine to buy world’s most expensive drugs to treat SMA
    2021-12-03 18:25 China’s Skyrizon sues Ukrainian government for $4.5 billion over failed Motor Sich bid
    2021-12-03 17:34 Chill Ukrainian cat Stepan goes viral, does ad for Valentino
    2021-12-03 15:52 Defense minister Reznikov: Russia might use 94,300 troops to invade Ukraine
    2021-12-03 14:53 Prosecutor general vows to step up 200 criminal cases against top oligarch over his media attacks
    2021-12-03 09:39 DTEK energy company may have hidden hazardous accident at power plant
    2021-12-03 00:36 EU to provide 31 million euros to enhance Ukraine’s defense capabilities
    2021-12-02 23:22 2022 state budget passes in parliament
    2021-12-02 22:21 Foreign panel members slam delay in selection of anti-graft prosecutor
    2021-12-02 21:53 G7 urges Ukraine not to sabotage selection of anti-corruption prosecutor
    2021-12-02 20:00 Ukraine bans entry of foreign visitors from African countries with Omicron cases
    2021-12-02 18:45 Top diplomats of US, Russia meet to discuss Ukraine amid looming invasion
    2021-12-02 17:10 Civic watchdogs say Zelensky appoints 21 tainted judges
    2021-12-02 11:33 Media in Progress Ep. 2: What’s in a name?
    2021-12-02 07:22 If Russia launches blitzkrieg into Ukraine, how would it look?
    2021-12-01 07:20 Zelensky’s address to parliament: What you need to know
    2021-12-01 07:18 Thousands of protesters demand Zelensky’s resignation
    2021-12-01 07:16 Russia conducts military exercises near Ukraine’s borders as NATO allies meet in Riga
    2021-12-01 07:14 Former judiciary official suspected of embezzling $1.9 million
    2021-12-01 07:12 National Bank survey: Ukrainian businesses’ pessimism grew in November
    2021-12-01 07:10 Ukrainian startup Pibox raises $400,000 in Toronto
    2021-11-30 04:52 Illia Ponomarenko: Is Russia really about to invade Ukraine?
    2021-11-30 04:49 Disney casts Ukrainian actress in ‘Star Wars: Ahsoka’
    2021-11-30 04:47 Ministry of Digital Transformation partners with Apple to conduct 2023 census
    2021-11-30 04:45 Ukrainian company signs €136.5 million contract with French shipbuilder
    2021-11-30 04:44 Constitutional Court delays swearing in judges appointed by Zelensky
    2021-11-30 04:42 Parliament passes ‘anti-Akhmetov’ tax law that raises iron ore rents, triples carbon tax
    2021-11-30 04:40 Yellow quarantine zone restrictions will be tightened as of Dec. 6
    2021-11-30 04:38 Ukraine to introduce booster shots in 2022
    2021-11-30 04:35 West promises consequences if Russia invades Ukraine
    2021-11-29 04:21 Ukrainian team NaVi wins $225,000 in Counter-Strike tournament
    2021-11-29 04:18 These are best restaurants, bars in Kyiv, according to Salt awards
    2021-11-29 04:15 Ukrainian startup MySat raises $9,700 on Kickstarter to develop satellite kit
    2021-11-29 04:13 Naftogaz records $91 million profit in third quarter of 2021
    2021-11-29 04:10 German ambassador: Nord Stream 2 won’t enter use for at least 6 months
    2021-11-29 04:05 Government launches new digital services, including pension and subsidy registration
    2021-11-29 04:00 Ukraine’s judicial reform, explained
    2021-11-29 03:57 Ukraine imposes travel restrictions to prevent spread of Covid-19 Omicron strain
    2021-11-28 03:51 Ukrainian studio films music video for French artist Kavinsky
    2021-11-28 03:49 Naftogaz breaks gas contract with Firtash’s Ye Energy, says it could have cost taxpayers $4 billion
    2021-11-28 03:46 State Guard Administration denies firing servicemen for not shooting down journalist’s drone
    2021-11-28 03:44 State Investigation Bureau opens probe into famous journalist days after he argues with Zelensky on TV
    2021-11-28 03:41 Ukraine’s population shrinking fast, can drop by 6 million in 30 years
    2021-11-28 03:39 NATO believes Russian military buildup near Ukraine’s border is not a bluff
    2021-11-26 03:34 UK pop star Ed Sheeran spotted in Kyiv
    2021-11-26 03:32 DTEK Eurobonds fall 11.6% after Zelensky’s allegations about Akhmetov
    2021-11-26 03:29 Unknown men brutally attack famous Kyiv bar
    2021-11-26 03:26 Police shut down fraudulent Covid-19 documents operation in Zakarpattia Oblast
    2021-11-26 03:24 Lawyers say Zelensky illegally appointed Constitutional Court judges
    2021-11-26 03:21 Zelensky holds 5-hour press marathon with handpicked media
    2021-11-25 11:30 Media in Progress Ep. 1: Our fight for independent journalism in Ukraine
    2021-11-25 01:47 German-based tech company Avenga buys Ukrainian outsourcer Perfectial
    2021-11-25 01:45 State-owned wine producer Odesavynprom privatized for $8.7 million
    2021-11-25 01:38 Government to launch state-owned Ukrainian National Airlines
    2021-11-25 01:35 NGOs call on Zelensky not to appoint corrupt judges
    2021-11-25 01:34 Security Service busts Russia-backed bot farm in Kherson Oblast
    2021-11-25 01:32 RFE/RL: 5 intelligence agents involved with Wagnergate had their passports revoked
    2021-11-25 01:28 Media: Zelensky’s chief of staff throws secretive birthday celebration at state residence
    2021-11-25 01:25 US Embassy warns Americans in Ukraine of possible invasion by Russia
    2021-11-25 01:22 Foreign Policy: Biden wants to kill Nord Stream 2 sanctions amendments
    2021-11-24 01:18 Music video director Tanu Muiño nominated for Grammy
    2021-11-24 01:16 Eight-year-old girl creates illustration for European Space Agency’s rocket
    2021-11-24 01:14 Ukrainian becomes world’s top online gamer
    2021-11-24 01:12 Grammarly founders become billionaires, richer than oligarchs Kolomoisky, Pinchuk
    2021-11-24 01:10 Ukraine to get $28.5 million loan to make schools, kindergartens energy efficient
    2021-11-24 01:08 Pro-Kremlin lawmaker pays bail for Odesa mayor Trukhanov
    2021-11-24 01:05 Ukraine launches operation to stop migrants coming from Belarus
    2021-11-24 01:03 Opponent of Nord Stream 2 to become German foreign minister
    2021-11-24 01:00 Russian authorities arrest 31 Crimean Tatars in occupied peninsula
    2021-11-23 00:53 New app to help victims report domestic abuse
    2021-11-23 00:51 Ukrainian banks’ net profits up by 47% in first 10 months of 2021
    2021-11-23 00:49 Center for Social and Economic Research: Ukraine loses up to $17 billion per year to tax avoidance
    2021-11-23 00:46 Kyiv administration sues subway contractor for using state funds to make a buck
    2021-11-23 00:44 Former judge under investigation for wrongdoing found shot dead in Kyiv
    2021-11-23 00:40 Ukraine extends deal with Pfizer to secure 25 million doses of Covid-19 vaccine
    2021-11-23 00:36 Ukraine gets two more patrol boats from US
    2021-11-23 00:29 US imposes new sanctions over Nord Stream 2
    2021-11-22 23:59 IMF approves $700 million loan tranche to Ukraine, extends stand-by program through June 2022
    2021-11-22 23:49 US intelligence paints clearer picture of Russia’s looming Ukraine invasion
    2021-11-22 00:23 Ukraine’s wearable transcribing device raises $111,000 on Kickstarter
    2021-11-22 00:22 OECD: Ukrainians have some of lowest retirement savings globally
    2021-11-22 00:19 US Westinghouse to build 2 reactors at Khmelnytskyi Nuclear Power Plant
    2021-11-22 00:09 New Ukrainian carrier Air Ocean Airlines starts regular domestic flights
    2021-11-22 00:06 Why is Ukraine still missing a chief anti-corruption prosecutor?
    2021-11-22 00:04 Media: At new state-owned TV station, reporters are told to promote Zelensky
    2021-11-22 00:01 3 Ukrainian women, 11 children repatriated from Syrian refugee camp al-Hawl
    2021-11-21 23:41 Education platform about Chornobyl disaster available in English
    2021-11-21 23:36 Belarus resumes supplying electricity to Ukraine
    2021-11-21 23:33 First of 7 ships carrying emergency coal deliveries arrives in Odesa
    2021-11-21 23:29 Saakashvili ends 50-day hunger strike as health deteriorates
    2021-11-21 23:24 Protesters demand dismissal of Zelensky’s chief of staff Yermak
    2021-11-21 23:19 Russian-led militants shell 2 Ukrainian villages
    2021-11-21 23:12 Ukraine’s defense intelligence chief: Russia is preparing to attack in late January
    2021-11-19 23:04 Thousands of Ukrainians use passwords that take seconds to hack
    2021-11-19 23:01 Ukrposhta to build fully-automated mail sorting complex in Dnipro
    2021-11-19 23:00 J.P. Morgan downgrades Ukraine’s 2021 GDP growth forecast from 4.5% to 2.3%
    2021-11-19 22:56 Authorities complete investigation of infamous judge Vovk’s brother
    2021-11-19 22:55 Saakashvili’s hunger strike in Georgia, explained
    2021-11-19 22:47 Health minister: Ukraine past peak of recent Covid-19 wave
    2021-11-19 22:35 Ex-intelligence chief says Zelensky may be behind failed attempt to arrest Russian proxies
    2021-11-18 21:42 Sentsov’s crime drama ‘Rhino’ wins at Stockholm Film Festival
    2021-11-18 21:35 Sean Penn to film documentary about Russia’s war against Ukraine
    2021-11-18 21:33 Grammarly becomes most expensive Ukrainian tech startup with $13 billion valuation
    2021-11-18 21:26 State Property Fund head resigns, says it has nothing to do with controversial Bilshovyk sale
    2021-11-18 21:24 Saakashvili in intensive care after passing out in jail
    2021-11-18 21:18 Belarus moves migrants away from Polish border
    2021-11-18 21:16 Europe remains closed to unvaccinated Ukrainians
    2021-11-18 21:13 Russian military buildup near Ukraine, explained
    2021-11-18 21:04 Foreign Minister Dmytro Kuleba accuses Russia of trying to kill Normandy Format peace talks
    2021-11-18 21:00 Police: US citizen ordered killing of agriculture minister over business grudge
    2021-11-17 20:42 Ukrainian-developed ORTY named one of world’s top food tech startups in 2021
    2021-11-17 20:40 Rocket with Ukrainian engine launches French military satellites into orbit
    2021-11-17 20:39 Ukrgasbank posts record $102 million profit in first 10 months of 2021
    2021-11-17 20:36 DTEK threatens to sue Guaranteed Buyer for freezing $115 million in debt repayments
    2021-11-17 20:33 Top manager at ArcelorMittal accused of helping company dodge $82 million in taxes
    2021-11-17 20:30 Ukrainian city starts anonymous Covid-19 vaccination for fake certificate holders
    2021-11-17 20:25 Over 600,000 Ukrainians are late for second dose of Covid-19 vaccine
    2021-11-17 20:18 Anti-Monopoly Committee pauses Bilshovyk machinery plant sale
    2021-11-17 20:05 Bellingcat: Ukraine behind failed attempt to arrest Russian proxies in 2020
    2021-11-16 20:01 Ukrainian school janitor becomes jiu-jitsu world champion
    2021-11-16 19:59 Centrenergo to import 1.5 million tons of coal to prevent outages
    2021-11-16 19:56 Nord Stream 2 certification put on hold, gas prices soar
    2021-11-16 19:52 President Zelensky leads by 10% in new poll
    2021-11-16 19:50 Ukraine hands out first ever jail sentence for homophobic hate crime
    2021-11-16 19:48 SBU busts anti-vax group, says it plotted violence under Russia’s oversight
    2021-11-16 19:16 Ukraine’s military intercepts Russian drone in Donbas
    2021-11-16 19:10 Media: Head of State Property Fund resigns
    2021-11-16 07:01 Health Ministry: 96% of people who die of COVID-19 are unvaccinated
    2021-11-15 18:56 Ukrainian director wins best video at MTV Europe Music Awards
    2021-11-15 18:53 Gazprom declines to buy additional capacity through Ukraine again
    2021-11-15 18:47 Ukraine’s Naftogaz will take part in certification of Nord Stream 2
    2021-11-15 18:13 Belarusian border crisis, explained
    2021-11-15 18:10 Fully vaccinated Ukrainians will get Hr 1,000 ($40) from government
    2021-11-15 18:03 Ex-chief editor says Ukraine’s top prosecutor pressured Kyiv Post
    2021-11-15 15:25 NATO chief alarmed by Russia’s military buildup, says alliance stands by Ukraine
    2021-11-14 15:26 Head of state-owned Guaranteed Buyer sacked, claims foul play
    2021-11-14 15:26 Kyiv’s state-owned Elektronmash factory auctioned off for $37 million on Nov. 12
    2021-11-14 15:26 UK special forces could shore up Ukraine’s defense amid Russian military buildup
    2021-11-14 15:26 Bellingcat: Secret Donetsk torture prison is controlled by Russia
    2021-11-14 15:26 Russian-led forces damage 9 civilian homes in Donetsk Oblast
    2021-11-14 15:25 Golden Gate District to become Kyiv’s cultural hub with own brand
    2021-11-14 15:25 Mate Academy startup raises $1.9 million from Ukrainian, European investors
  • Howell
  • The war, and the West’s harsh response, has accelerated the shift in the global world order that has been at play for years—centered on the U.S. and China. The relationship between China and the U.S. has frayed, raising concerns about the decoupling of the two countries, especially in areas like technology.
    >> Howell - Here are links to Mearsheimer.
    In a short video posted by Belarusian journalist Tadeusz Giczan, Mr Lukashenko can be seen pointing to a map purporting to show Russia's movements as it wages a war against Ukraine.
    Hugo video - skeptical about the coverage, and senses that something isn't right.
    Food Shortages. From Reuters: "Russia and Ukraine account for 29% of global wheat exports, 19% of world maize (corn) exports, and 80% of world sunflower oil exports." Due to European sanctions on Russia, this will not be exported to Europe and other western countries, but to China instead.
  • Howell
    2022-03-07 14:22 Russian invaders shell Olvia sea port
    2022-03-07 14:09 Invaders focusing efforts on encircling Kyiv and five other cities - General Staff
    2022-03-07 13:44 Third round of Ukraine-Russia talks to begin today at 16:00 - Podoliak
    2022-03-07 13:33 Ukraine calls on UN International Court of Justice to order Russia to end war in Ukraine – MFA
    2022-03-07 13:06 Ukraine demands that Russia open humanitarian corridors
    2022-03-07 12:49 Since war began, 133 civilians, including 5 children, killed in Kharkiv region
    2022-03-07 12:29 Zelensky calls on West to close sky over Ukraine or send weapons
    2022-03-07 12:13 Zelensky demands tougher international sanctions against Russia
    2022-03-07 12:00 Ukraine-Russia talks to start in Belovezhskaya Pushcha Monday afternoon
    2022-03-07 11:44 One of Russian pilots of downed warplane detained near Kharkiv
    2022-03-07 11:37 Ukraine may get about 5,000 Starlink stations in coming days - Shmyhal
    2022-03-07 11:30 Ukrainian Armed Forces destroyed over 11,000 enemy personnel, 290 tanks, 46 aircraft, 68 helicopters – General Staff
    2022-03-07 11:13 Enemy forces shell two schools in Chernihiv, one of them catches fire
    2022-03-07 10:54 OSCE Monitoring Mission leaving Ukraine
    2022-03-07 10:34 Over 30 enemy helicopters destroyed in Kherson region overnight
    2022-03-07 10:19 Ukraine Army hits Russian warship in Black Sea
    2022-03-07 10:09 Near Zaporizhia, Russian tank fires on, crushes mail car, killing two
    2022-03-07 09:35 Russia has already fired 600 missiles since invasion of Ukraine – Pentagon official
    2022-03-07 09:26 Russian troops have minimum gains on ground over weekend – British intel
    2022-03-07 09:11 President replaces head of Chernihiv district state administration
    2022-03-07 09:09 Russian invaders kill Hostomel Mayor
    2022-03-07 08:59 Six cities in Zaporizhzhia region temporarily occupied – administration
    2022-03-07 08:41 11 buildings destroyed, casualties reported after enemy attack on Kharkiv
    2022-03-07 08:13 Russian, Belarusian officials denied entry to New Zealand over invasion of Ukraine
    2022-03-07 07:51 UK to provide $100M in aid to Ukraine
    2022-03-07 07:31 Russian troops continue missile, artillery strikes on cities, villages in Ukraine - General Staff
    2022-03-07 06:58 Pope Francis appeals for peace in Ukraine, guaranteed humanitarian corridors
    2022-03-07 06:47 Russian multiple rocket launchers firing upon Mykolaiv
    2022-03-07 06:31 Since mid-February, Ukrainian resources suffered about 2,800 cyber attacks
    2022-03-07 06:15 About 95% of Russian troops at border now in Ukraine – Pentagon
    2022-03-07 05:59 Mayor of Russian-bombed Okhtyrka saves city during day, operates on people at night
    2022-03-07 05:34 Ukraine launches website of Center for National Resistance
    2022-03-07 05:22 Near White House, thousands of people called for closing sky over Ukraine
    2022-03-07 04:53 About 1,700 foreign students remain in Sumy - Shkarlet
    2022-03-07 04:44 Russian military launched air strike on area near Tuzla village in Odesa region
    2022-03-07 02:56 SpaceX’s mission commander delivers aid for Ukrainian army from United States
    2022-03-07 02:40 SMM hub damaged due to shelling, two vehicles set on fire in Mariupol
    2022-03-07 02:28 Two houses destroyed in enemy air strikes on Ovruch
    2022-03-07 02:11 Arakhamiya: We are ready to discuss non-NATO models of security
    2022-03-07 01:59 Baerbock: NATO will not close sky over Ukraine as it is responsible for lives of Europeans
    2022-03-07 01:47 Ukraine’s Armed Forces liberate Chuhuiv town in Kharkiv region – General Staff
    2022-03-07 01:33 Ukrainian servicemen eliminate Russian marine battalion commander
    2022-03-07 01:20 Poland has not yet made decision on transfer of fighter jets to Ukraine
    2022-03-07 01:12 Zelensky calls for assistance in protecting Ukrainian sky through Global Citizen
    2022-03-07 01:07 Sting dedicates song to Ukrainians fighting against tyranny
    2022-03-07 01:00 Defensive operation ongoing near Huliaypole, Orikhove in Zaporizhzhia region
    2022-03-07 00:52 Minister: Russian troops have already destroyed or damaged 211 Ukrainian schools
    2022-03-07 00:47 Macron holds phone conversations with Zelensky and Putin
    2022-03-07 00:35 Kharkiv, Chernihiv, Mariupol, Kherson, Hostomel, Volnovakha awarded title of hero cities
    2022-03-07 00:20 Ukraine demands termination of Russia's and Belarus' membership in IMF, all WB organizations
    2022-03-07 00:08 Invaders temporarily controlling roads and some populated localities – Center for Countering Disinformation
    2022-03-06 23:59 No "corridor" to ensure shift change at Chernobyl NPP
    2022-03-06 23:45 Instead of humanitarian corridors, Russian army can only ensure bloody ones - Zelensky
    2022-03-06 23:30 Ukraine to suspend certain exports
    2022-03-06 23:15 Russia's audacity shows sanctions “not enough” - Zelensky
    2022-03-06 23:00 Ukraine to work toward “Nuremberg 2" trial for Russia - MFA
    2022-03-06 22:45 Amid West's doubts over no-fly zone, Russia destroying Ukrainian airfields to choke country’s own capacities
    2022-03-06 18:16 Russia preparing to shell its own cities to accuse Ukraine and declare general mobilization
    2022-03-06 17:59 Pro-Ukrainian rallies held in temporarily occupied Henichesk, Kalanchak and Kakhovka
    2022-03-06 17:34 Kuleba discussed possible supply of fighter jets to Ukraine with Blinken, Rau
    2022-03-06 17:08 Enemy aircraft shot down near Odesa
    2022-03-06 16:53 Another attempt to evacuate people from Mariupol fails over enemy shelling
    2022-03-06 16:33 More than 140,000 Ukrainians returned from abroad since war began
    2022-03-06 16:16 Russian forces continue shelling Mariupol
    2022-03-06 15:45 Russian invaders fire upon multi-apartment buildings in Kramatorsk, killing at least two people
    2022-03-06 13:52 Another enemy attack aircraft destroyed over Kharkiv
    2022-03-06 13:31 38 children killed since start of Russian invasion of Ukraine
    2022-03-06 13:17 Zelensky: We are already forming special funds to rebuild Ukraine after war
    2022-03-04 16:53 Zaporizhzhia Nuclear Power Plant (NPP) has been captured by Kadyrov’s people
    2022-03-04 03:56 Russian invaders let firefighters into Zaporizhzhia Nuclear Power Plant (NPP), which is on fire following a Russian attack. Ukraine’s Energoatom wrote on Telegram, Ukrinform reports.
    2022-03-04 03:56 Energoatom: Russian shells hit Zaporizhzhia NPP
    2022-03-04 02:25 Zaporizhzhia NPP on fire due to shelling by Russian invaders
    2022-03-04 02:12 Zaporizhzhia NPP declares nuclear threat due to shelling by Russia’s heavy weapons
    2022-03-04 01:38 Enemy’s column of military vehicles moving towards Zaporizhzhia NPP
    2022-03-03 23:24 Russian invaders seize TV tower in Kherson – Ministry of Internal Affairs
    2022-03-03 22:00 U.S. sanctions Russians bankrolling Putin and Russia-backed influence actors
    2022-03-03 20:59 Mariupol cut off from electricity and water supply. Over 200 residents injured
    2022-03-03 20:53 Ukraine did not achieve results it expected in talks with Russia - Podoliak
    2022-03-03 20:32 Ukraine needs military aircraft because we are losing our people - Zelensky
    2022-03-03 18:46 No water supply in Enerhodar due to enemy shelling
    2022-03-03 18:30 Russian troops fired on and flooded the Panamanian-flagged ship HELT
    2022-03-03 17:20 Second round of talks between Ukrainian, Russian delegations begin
    2022-03-03 15:16 Russia may be preparing to shell its own territory in order to accuse Ukraine - FM
    2022-03-03 13:30 22,000 people evacuated from Kharkiv region over day due to Russia’s air bombardment
    2022-03-03 11:19 Germany to supply 2,700 SAMs to Ukraine
    2022-03-02 04:48 Ukrainian military have already carried out about 50 precision strikes with Vilkha missile systems
    2022-02-28 18:13 Russian troops may jam communications in near-front zone – intelligence data
    2022-03-02 08:44 The Control Center of the Russian Space Agency Roskosmos no longer has control over its spy-satellites.
    “Hacking group NB65, affiliated with Anonymous, has shut down the Control Center of the Russian Space Agency ‘Roskosmos’. Russia has no more control over their own spy-satellites,” reads a message on the Telegram channel of the latest information from Ukraine’s Armed Forces.
    2022-02-28 18:31 Hungary not to allow lethal weapons for Ukraine to transit its territory – FM
    2022-02-28 18:31 President signs application for Ukraine's membership in EU
    2022-02-28 17:48 Negotiations between Ukrainian and Russian delegations continue
    2022-02-28 17:41 Government approves UAH 100,000 in salaries for military on the frontline
    2022-02-28 17:34 Czech PM supports Ukraine's accession to EU under special procedure
    2022-02-28 17:12 At least 11 people killed in shelling in Kharkiv
    2022-02-28 17:11 Luxembourg to send anti-tank weapons to Ukraine
    2022-02-28 17:09 Enemy helicopter shot down in Kharkiv region – Herashchenko
    2022-02-28 16:40 Enemies drop cluster bombs on village in Chernihiv region
    2022-02-28 16:35 One dead, several others wounded in Russian artillery attack on Severodonetsk
    2022-02-28 16:25 More than 30 people injured in shelling in Kharkiv, 1 woman died
    2022-02-28 16:24 EU allocates EUR 90M in humanitarian assistance to Ukraine
    2022-02-28 15:56 National Security and Defense Council: Ukraine begins to overtake military initiative
    2022-02-28 15:40 “I’m scared. We’re hitting everyone, even civilians” – Russian soldier to his mother before being killed in action
    2022-02-28 15:24 Ukrainian delegation demands Russia withdraw its troops, including from Crimea, Donbas - Arestovych
    2022-02-28 15:08 Kyivstar blocks all text messages coming to its subscribers from Russia, Belarus
    2022-02-28 15:00 Ukraine awaits UNGA resolution on stopping Russian aggression
    2022-02-28 14:55 Defenders of Zmiinyi Island alive, in Russian captivity – Navy
    2022-02-28 14:52 NATO stepping up support with air-defense missiles, anti-tank weapons to Ukraine – Stoltenberg
    2022-02-28 14:50 Army says Odesa region "stuffed" with Russian saboteurs
    2022-02-28 14:50 Three civilians killed, 14 wounded in shelling of Kharkiv’s residential neighborhoods
    2022-02-28 14:33 Japan to impose sanctions on Russia’s central bank
    2022-02-28 14:32 Lithuania asks The Hague to investigate war crimes by Russia, Belarus in Ukraine
    2022-02-28 14:14 Romania supports Ukraine's membership in EU
    2022-02-28 14:06 Defense Ministry: Several thousand foreigners willing to join International Legion
    2022-02-28 13:54 Ukrzaliznytsia calls on European, Asian countries to suspend cooperation with Russian Railways
    2022-02-28 13:52 Japan allocates $100M to support Ukraine - Zelensky
    2022-02-27 15:20 Japan to put sanctions on Putin, support Russia's disconnection from SWIFT
    2022-02-27 15:10 Ukrainian military fully regain control of Kharkiv
    2022-02-27 15:10 Belarus may join Russia in war against Ukraine – Ukraine's ex-defense chief
    2022-02-27 14:58 Russian ships capture two Ukrainian vessels
    2022-02-27 14:54 Enemy tank company destroyed near Pryluky in Chernihiv region
    2022-02-27 14:41 No Russian troops in Kyiv now - Klitschko
    2022-02-27 14:25 Denmark closes its airspace for Russian aircraft
    2022-02-27 14:10 Zelensky talks to Lukashenko
    2022-02-27 14:06 In Irpin near Kyiv, three households on fire after enemy shelling
    2022-02-27 13:54 Russian tanks pass Makariv, trying to advance toward Kyiv - eyewitnesses
    2022-02-27 13:45 Ukrainian children call on Putin to end war
    2022-02-27 13:32 Ukraine files lawsuit against Russia in International Court of Justice in The Hague - Zelensky
    2022-02-27 13:08 Zaluzhny shows Bayraktar drone destroying enemy column in Kherson region
    2022-02-27 12:45 Enemy tanks shell bus with civilians in Sumy region
    2022-02-27 12:18 Ukraine parliament proposes UNGA set up tribunal to investigate Putin's crimes
    2022-02-27 11:53 Lukashenko confirms missile strikes on Ukraine from Belarus territory
    2022-02-27 11:19 Let's support the unbreakable: NBU opens special account to raise funds for Ukraine's Armed Forces
    2022-02-27 10:49 Gomel never approved as negotiation site, talks must only be held in neutral cities - Zelensky’s advisor
    2022-02-27 10:39 Portugal to provide military assistance to Ukraine
    2022-02-27 10:11 Russian shell hits apartment building in Bucha, Kyiv region
    2022-02-27 09:41 Ukrainian Air Force shoots down cruise missile fired at Kyiv from Belarus - Zaluzhny
    2022-02-27 09:18 Ukraine creating international territorial defense legion - Zelensky
    2022-02-27 08:49 Ukrainian Armed Forces systematically increasing group defending Kyiv - Maliar
    2022-02-27 08:22 Enemy's light armored vehicles break into Kharkiv
    2022-02-27 07:53 Kyiv under control by Ukrainian army, territorial defense forces - city administration
    2022-02-27 07:32 Macron calls on Lukashenko to demand withdrawal of Russian troops from Belarus as soon as possible
    2022-02-27 07:15 The Embassy of Ukraine in Washington is in daily contact with the U.S. Department of Defense for identifying and providing specific defense assistance to our country.
    2022-02-27 06:35 Russia hits oil depot of KLO gas station network in Vasylkiv
    2022-02-27 06:22 Ukrainian military destroy Kadyrov forces unit near Hostomel
    2022-02-27 05:45 Minister Fedorov calls on Viber and PayPal CEOs to block their services in Russia
    2022-02-27 05:20 Russian shells hit a radioactive waste disposal site at the Radon Association branch in Kyiv.
    2022-02-27 05:12 Ukrainian military capture Russian tank battalion commander
    2022-02-27 04:42 European Commission President: Cutting Russian banks off from SWIFT will effectively block Russia's exports and imports
    2022-02-27 04:19 Enemy loses 7 planes, 11 helicopters, column of equipment, fuel convoy over past day
    2022-02-27 04:07 Ukraine convenes emergency special session of UN General Assembly
    2022-02-27 03:58 Two Russian air bombs destroyed in Vinnytsia region
    2022-02-27 03:53 Musk gives Ukraine access to Starlink Internet
    2022-02-27 03:45 Enemy shell hits nine-story building in Kharkiv, killing a woman
    2022-02-27 03:33 Russian troops blast gas pipeline in Kharkiv
    2022-02-27 03:04 Russia’s military launch artillery attack on Sumy: man killed, woman wounded
    2022-02-27 02:03 Russians kill 10 Greek nationals near Mariupol – PM
    2022-02-27 01:55 Western powers to disconnect Russian banks from SWIFT - statement
    2022-02-27 01:47 Markarova warned US of possible chemical attack in Ukraine
    2022-02-27 01:40 Vasylkiv hit with Russian missiles, oil depot nearby on fire
    2022-02-27 01:20 Ukraine closes sea area in north-western part of Black Sea to repel Russian aggression
    2022-02-27 00:50 Zelensky: We will fight as long as it takes to rid our country of
    2022-02-27 00:39 Journalist shot dead by Russian occupiers in Kherson Region
    2022-02-27 00:00 Ukrainian troops down enemy helicopter in Kharkiv region
    2022-02-26 23:47 Founder of Rakuten Viber donates almost $9M to Ukraine
    2022-02-26 23:31 Latvia, Lithuania ban Russian airlines from their airspace
    2022-02-26 23:03 Occupiers fired on Sartana village: four civilians killed, another nine wounded
    2022-02-26 22:44 Russia's attack on Kyiv kills 14 military and 6 civilians, including a child - Klitschko
    2022-02-26 22:11 Ukraine closes checkpoints across border with Russia, Belarus
    2022-02-26 21:43 Ukrainian Armed Forces destroyed Russian column near Severodonetsk
    2022-02-26 21:16 Arestovych on enemy losses: More than 1,000 killed, about 2,500 wounded
    2022-02-26 20:55 Germany to supply Ukraine with anti-tank weapons, Stinger missiles – Scholz
  • I added more comments for "Howell - review of Holverstott 2016 Hydrino energy".
    It seems to me that there is a [broad, deep, persistent] basis for questioning the [honesty, competence] of essentially all physicists, simply by looking at their historical performance (see "Lies, damned lies, and scientists"). One can always play with [frameworks, assumptions, changeable and inconsistent interpretations, statistics, small world universal function approximators] and get whatever answer you want. This approach is something that GR & QM scientists seem to excel at.
  • Introduction
  • profile
  • challenged, and when they crumble
  • and Scientific Thinking?
  • B2 - Cheating theory and Game theory
  • B3 - Pre-and-post-Science Philosophies for Thinking
  • C1 - A Brave new world
  • C2 - The rise & fall of Enlightenment
  • C3 - Suggestions for science, policy, and society
  • D0 - Conclusions
  • As with the webPage "General Relativity is a turkey?", the comments herein are VASTLY incomplete! They would be far better (still vastly incomplete) if I could find my extensive old notes.
    Rather than repeating it here, please refer to the introduction of my webPage "General Relativity is a turkey?"
    As this issue is already covered in my webPage "General Relativity is a turkey?", please refer to it.
    Supporting documents, spreadsheets, etc. are in the separate file:
  • Research Council proposal to participate in the
  • 30Nov-01Dec06 13Oct06
  • Key [results, comments]
  • Play with the [time, mind]-bending perspective yourself
  • Ratio of actual to semi-log detrended data : [advantages, disadvantages]
  • Future potential work
  • Comparison of [TradingView, Yahoo finance] data
  • [data, software] cart [description, links]
  • All emails sent by Anthony Fauci between November 1, 2019 and the present that include the term Moderna or mRNA-1273 in any portion of the email.
  • All emails sent by Anthony Fauci between November 1, 2019 and the present that include the terms SARS-CoV, COVID, COVID-19, or coronavirus in any portion of the email. Read Fauci’s emails here; a few highlights from these emails are outlined below. (Howell: I found a pdf of the emails ~??Jun2021, in a pdf document that was posted by Jason Leopold. See also the article by Natalie Bettendorf and Jason Leopold, 01Jun2021, "Anthony Fauci's Emails Reveal The Pressure That Fell On One Man".)
  • ???
  • ???
  • ???
  • ???
  • ???
  • ???
  • Unix [command, utility]s (eg [grep, sed, wc]) do the lion's share of the work
  • strings.ndf : fileops.ndf : Notes taken during development
  • pdftotext - popular Linux utility to transform pdf documents to text-only format
  • grep - global regular expression print, used frequently in combination with [sed, find] below; see the sketch after this list. This is a foundational tool across [U, Li]nix. I really wish I had spent the time to learn it properly decades ago, and I still have a long way to go.
  • sed - I probably do at least 2 orders of magnitude more editing with sed than with all [text editors, word processors, spreadsheets, etc, etc] combined.
  • find - it feels weird, but I actually don
  • no bash scripts? - Again, this feels very weird, but then again there are several simple one-liner bash sequences wrapped in QNial operators.
  • ??? Most of my other bash scripts are listed here (check sub-directories).
  • QNial programming language - Queen's University's implementation of Nial, the Nested Interactive Array Language
  • email analysis - Fauci corona virus.ndf -
  • email analysis - Fauci corona virus header.ndf -
  • QNial setup.ndf -
  • strings.ndf -
  • fileops.ndf -
  • ??? - Most of my other QNial files are listed here.
  • gimp (GNU image manipulation program) is what I used for the SP500 time-section transparencies. For more details, see the section above "Play with the [time, mind]-bending perspective yourself".
  • Tesla options pricing is an example of a gnuplot script. I won
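    To make the list above concrete, here is a minimal sketch of the kind of [pdftotext, grep, sed, find] pipeline described; the file names and search terms are illustrative assumptions only, not taken from the actual analysis :

        # convert every pdf under the current directory to text (pdftotext writes file.txt beside file.pdf)
        find . -name '*.pdf' -exec pdftotext {} \;
        # list the text files that mention Moderna or mRNA-1273 (case-insensitive), then count them
        grep -ril -e 'Moderna' -e 'mRNA-1273' --include='*.txt' . | wc -l
        # bulk edit with sed : normalise a spelling, in place, across all text files in the current directory
        sed -i 's/corona virus/coronavirus/g' *.txt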
  • ???
  • ???
  • ???
  • ???
  • ???
  • ???
  • The calculus of words
  • The fractional order calculus of words - Harken back to the Great War between Gottfried Leibniz and Isaac Newton over who invented calculus, and proceed from there. Finally, >300 years later, things are starting to happen!
  • Geometrical Deep Learning neural networks (eg drug discovery) - Michael Bronstein
  • Robert Hecht-Nielson, Confabulation Theory versus Bayes Theorem
  • Lee Giles, CiteSeer - full of ideas on how to build systems.
  • Tom Cobb, tools for natural language processing
  • A letter sent by Dr. Ashley Bloomfield and Dr. Andrew Connolly to DHB organisers, dated December 15th 2021, pressed the emergency button concerning the incidence of myocarditis and pericarditis, and also admitted underreporting.
    directory status & updates copyrights

    Table of Contents :

  • New corona virus cases/day/population for selected countries
  • Daily case charts for countries, by region
  • Spreadsheet for generating the charts
  • Jumping off the cliff and into conclusions
  • COVID-19 data and models
  • Corona virus models
  • Is the cure worse than the disease? - This section is specific to corona virus. Check my blog comments at the end of this web-page for many other earlier comments that I made about the policy approaches and [health expert, media] perspectives to corona virus, and other issues. My influenza web-page covers the same topic with respect to the flu.
  • Questions, Successes, Failures
  • Questions
  • Why are European-descended countries particularly hard hit?
  • Was the virus made in a lab in China?
  • Apparent successes of the [medical, scientific] experts?
  • Apparent failures of essentially ALL [medical, scientific] experts, and the mass media?
  • The population in general
  • Howell's blog posts to MarketWatch etc
  • See also the section below : 11Nov2020 The Covid-19 virus has been isolated many times
    2003 Koch COVID-19 and Influenza Co-infection: A Systematic Review and Meta-Analysis
    How influenza cases were recategorized as covid cases
    references - only articles related to the mis-classification of flu as covid are listed. This "articles" webPage has a fantastic list of references for many other covid issues!!

    20Dec2020 update, Youyang Gu
  • COVIDhub Ensemble - An aggregation of the forecasts of ~30 models submitted to the COVID-19 Forecast Hub. The combined forecast is then published on the CDC website. You can find the pre-print here. Because it is able to combine the forecasts of so many models, it is more accurate than any single model alone. Hence, if one were to only use one model, this would be the one to use.
  • UMass Amherst - An early model that has consistently performed well since its release in May. It is made by the Reich Lab, the same group that runs the COVID-19 Forecast Hub. The downside is that it only forecasts 4 weeks out and has no visualizations (other than on the Forecast Hub).
  • Oliver Wyman - A model released in June that instantly became one of the top-performing models since its release. It is one of the few other models to have estimates of true infections. It only has public forecasts 4 weeks into the future.
  • COVIDAnalytics (MIT DELPHI) - A top-performing model for US nationwide forecasts.
  • USC - A model released in July that has made great improvements over the past few weeks. It is one of the few other models to make daily updates.
  • UCLA - Another early model that has consistently performed well. It also has estimates for the reproduction number (Rt). The visualizations are well-done. With that said, the confidence intervals are fairly narrow, and they have been under-forecasting since September.
  • Los Alamos National Lab (LANL) - One of the top-performing models from April-July, but has been significantly under-forecasting since then.
  • London School of Hygiene & Tropical Medicine - While its forecasts are unproven, it is one of the few other models to have US and global Rt estimates.
  • LNQ - A model released in July that has the best forecasts for incident cases, both on a state-by-state level and county-by-county level. It is created by two individuals.

    Click for ALL daily cases regional charts on a single web-page.
    Anglophone

  • Scandinavia Southern Europe Pacific Asia Latin America Russia etc

    YYG infections, UoW IHME hospital resource usage :

    YYG comparison of several model projections :

    Imperial College, London MIT Covid Analytics Los Alamos National Laboratory Iowa State University

    We have been in an exceptionally low sunspot minimum of longer than usual duration, as shown in the NOAA cycle chart. By far the best source of information that I …
  • What To Do With Space Weather Health Information - much more general information on space weather and health
  • Howell - Pandemics and the sun
  • Howell - Selected pandemics & epidemics.pdf
  • Hoyte & Schatten year - solar influence on climate & natural systems, graphs.pdf
  • Tapping, Mathias, Surkan - Pandemics & solar activity
  • Stephen Puetz - Universal Cycle Theory - You …
    This topic fits well into my theme "Lies, Damned Lies, and Scientists". Even though scientists are not the only group involved, the theme applies universally to all homo sapiens.
    See also : "Pandemics, health, and the sun" (menu above).

    Data for countries could have been mixed up. Can you spot any such errors? Can YOU fix the spreadsheet to correct the errors?
  • See also Rose 11Oct2021 "World Council for Health, presentation of VAERS data"
    Kyle Beattie 30Oct2021 "Worldwide Bayesian Causal Impact Analysis of Vaccine Administration on Deaths and Cases Associated with COVID-19"
    Guy Hatchard 17Dec2021 - New Zealand provides a very special case for assessing covax adverse effects in a population
    http://feedproxy.google.com/~r/psintl/~3/uW4Tt21A52Q/?utm_source=feedburner&utm_medium=email
    https://brandnewtube.com/watch/dr-jessica-rose-vaers-data-reveals-5-427-increase-in-deaths-following-c-19-shots_luxXci1s1oSeMRE.html
  • https://brandnewtube.com/watch/craig-paardekooper-vaers-different-effects-of-the-vaccine-on-women-and-men_X3XzcDSVmc1y4kE.html
  • PSI = John O'Sullivan's Principia Scientific International
  • Dobler = Sacha Dobler



    04Apr2022 PSI: Science Whistleblowers Now Being Stalked, Threatened and Murdered
    14Mar2022 PSI, childrenshealthdefense.org: 10,000 Covid Patients, Almost Zero Deaths
    14Mar2022 PSI: Dr. John Campbell has been red-pilled
    Yesterday, on March 9, 2022, Dr. John Campbell published a video entitled “The Pfizer documents” where he steps through just one of the 150 released Pfizer documents in detail: the ADVERSE EVENTS OF SPECIAL INTEREST (AESI) document (aka the “5.3.6 document”).
    14Mar2022 PSI, downthechupacabrahole.com: Sudden Adult Death Syndrome: Programming the Population
    13Mar2022 PSI: Official U.S. Data Proves COVID Drug Remdesivir is Mass Killer!
    Watch Dr Ardis offer his impassioned public testimony exposing this mass genocide in the video below
    12Mar2022 PSI: Vaccinated people are being TRACKED in real time
    07Mar2022 PSI: UK: Pandemic is Over for the Unvaccinated
    see graph 220307 PSI, Igor Chudov: Unvax versus boosted case rates per 100k
    see graph 220307 PSI, Igor Chudov: Unvax versus boosted death rates per 100k
    07Mar2022 PSI, opindia.com: US-funded gain of function research to ‘enhance’ viruses at Wuhan laboratory in China
    It has now been established that scientists from the USA, with NIH funding, had been running gain of function research into a number of SARS-like viruses in the Wuhan laboratory in China, which is widely believed to be the origin of the Covid-19 infection. In September 2021, it was revealed in a book that the US had funded research on deadly viruses in Wuhan under the supervision of Anthony Fauci.
    US scientist Peter Daszak, who had received an NIH grant, through his organisation EcoHealth Alliance had been engaged in ‘gain of function’ research for years in the Wuhan Institute of Virology in China. Daszak has himself admitted on numerous occasions that they are running breakthrough research into coronaviruses.
    Earlier this year, an investigation by Project Veritas had revealed that coronavirus research deemed ‘too dangerous’ by DARPA, was approved by Anthony Fauci’s NIAID and was conducted by Daszak’s EcoHealth Alliance at Wuhan.
    In October 2021, it was revealed via leaked emails that Anthony Fauci's NIAID-funded institute trained researchers at a Wuhan lab that housed fatal aerosol-borne viruses.
    05Mar2022 PSI, theepochtimes.com: Unintended Consequences of mRNA Vaccines Against COVID-19
    MIT scientist Stephanie Seneff’s paper, “Worse Than the Disease: Reviewing Some Possible Unintended Consequences of mRNA Vaccines Against COVID-19,” published in the International Journal of Vaccine Theory, Practice and Research in collaboration with Dr. Greg Nigh, is still one of the best, most comprehensive descriptions of the many possible unintended consequences of the mRNA gene transfer technologies incorrectly referred to as “COVID vaccines.”
    04Mar2022 PSI: Johns Hopkins University confirms: The PCR test alone will vaccinate you

    03Mar2022 PSI, theepochtimes.com: Study: Pfizer’s COVID Jab Enters Liver Cells & Is Converted to DNA
    28Feb2022 PSI: Pfizer Pushing Drug to Treat Heart Conditions Caused by COVID Jabs
    28Feb2022 PSI: How to Detox Spike Protein After Covid or Vaccine
    28Feb2022 PSI, NYTimes: ‘That’s murder’: NYT report on CDC sparks outrage
    Written by americasfrontlinedoctors.org
    >> Bill Howell, 24Feb2022 - I feel obliged to comment on a [recurring, dominant] observation across general issues : Females are leaving their distinctive mark on [public discourse, leadership, democracy], and it isn't … HILARIOUS!!! Reminds me of https://darwinawards.com/ :
    >> Bill Howell, 23Feb2022 - First time I …
    PSI: German Professor: "Thousands Of Hidden Deaths Daily"
    "... This is what they do to people who play along: Hungarian gymnast and Olympic gold medalist Szilveszter Csollany had once expressed "anti-vaccination views on social media”, but then he was pressured to take the Covid vaccine, falls ill of pneumonia within days with a positive Covid test, dies on a ventilator, and the mainstream media slander him an anti-vaxxer who dies of Covid. ..."
    See more here: nofrakkingconsensus.com
    VAERS Link:
  • Seneff, Nigh 31Mar2021 Worse Than the Disease? Reviewing Some Possible Unintended Consequences of the mRNA Vaccines Against COVID-19
  • BrightWorkResearch.com - This "articles" webPage has a fantastic list of references for many covid issues!! Far more than I have collected below, and there are also a large number of [covid, covax]-related references on my related webPages!
  • Rose 11Oct2021 "World Council for Health, presentation of VAERS data"
  • Kyle Beattie 30Oct2021 "Worldwide Bayesian Causal Impact Analysis of Vaccine Administration on Deaths and Cases Associated with COVID-19"
    also reported by Steve Kirsch
  • canadiancovidcarealliance - Awesome job!
  • COVID 19 Inoculations More Harm Than Good
  • Most vaccinated die because of vax induced autoimmune attacks on their own organs
  • Craig Paardekooper - The Speed of Death from vaccine - ADVERSE REPORTING VAERS
  • Craig Paardekooper - VAERS - Different Effects of the Vaccine on women and men
  • Principia Scientific, John O'Sullivan
  • Sacha Dobler postings - While Sacha tends to be a bit strident ("the world is going to end.."), his previous books on history, human behaviour, etc are far beyond anything that essentially any [government, academic] scientist can do, partly because the latter are constrained by more strict requirements of [political correctness, proof], but mostly because they are [cognitively, creatively] incapable of matching him.
  • Steve Kirsch, the executive director of the Vaccine Safety Research Foundation (VSRF)
  • Document - This is an engineering analysis, not a strict scientific analysis. What I mean by this is that our objective is to use all the available data and our own expert judgement in interpreting that data in a reasonable way, in an attempt to get an accurate estimate.
  • Bill Howell
  • Pandemics, health, and the sun - overall webPage on pandemics
  • Influenza
  • Corona virus
  • Suicides from a pandemic context
  • Most of my [comments, time] went into a "sister webPage", which is also linked near the end of this webPage.
  • I use some of Tamny's … I have posted a collection of "quasi-principles", none of them "true", on a separate webPage. The principles were prepared during the review of Tamny's book. These references were initially prepared for this webPage.
  • Bill Howell
  • Covid-19 vaccine shots
  • Symptoms within one week or so after getting the vaccine
  • Symptoms within two weeks or so after getting the vaccine
  • Symptoms within a month or so after getting the vaccine
  • Comparison of [TradingView, Yahoo finance] data
  • [data, software] cart [description, links]
  • Sacha Dobler
  • Canadian Covid Care Alliance "... We are a group of over 500 independent Canadian doctors, scientists, and professionals aiming to provide top quality, evidence-based information about COVID-19, intent on reducing hospitalizations and saving more lives. Our role is to assist you by providing balanced, unbiased information relating to all sides of current recommendations surrounding this pandemic. ..."
    Their ??Jun2021 slide presentation is EXCELLENT! I still have to go through the rest of their reports, but that will have to wait several months.
  • Sucharit Bhakdi, M.D. and Arne Burkhardt, M.D. "On COVID vaccines: why they cannot work, and irrefutable evidence of their causative role in deaths after vaccination" - This comes from Veronika Kyrylenko 2Dec2021 "Study: Most of Vaccinated Die Because of Vax-induced Autoimmune Attacks on Their Own Organs". "... A recently published study suggests that nearly every COVID vaccine recipient who died within seven days to six months after inoculation likely died because of vaccine-induced autoimmune damage." I still have to go back to read the report and check background [references, data].
  • "Helping People Be Seen, Heard and Believed After Adverse Vaccine Reactions
  • PSI International 30Dec2021 "Columbia study: True U.S. COVID vaxx death count around 400,000" - As [proper, basic] follow-up has been suppressed, this report is necessarily speculative. But "experts" have NO basis for contention, and their deceitful proclamations have even less support than the renegades' (as is often the case in science).

    Table of Contents :

  • We have been in an exceptionally low sunspot minimum of longer than usual duration, as shown in the NOAA cycle chart. By far the best source of information that I … What To Do With Space Weather Health Information - much more general information on space weather and health


    Table of Contents :

  • As per commentary over the last few years by Ben Davidson of Suspicious Observers, I have added the geomagnetic Kp index (see definitions below) to sunspots in the graph below to see if it might add "explanatory power" to the influenza data, beyond using sunspots alone. Ben has posted an awesome series of [papers, books, videos] dealing with human health in a broad sense and a multitude of astronomical phenomena!

    US Center for Disease Control (CDC) - annual flu seasons 2010-2017 :

    Peter Doshi, May2008 - The overall decline in influenza-attributed mortality over the 20th century cannot be the result of influenza vaccination?
    Sacha Dobler 2018 - 1917-18 "Spanish flu deaths" may have been largely due to secondary infections?
    Amazon Customer - Concerned about vaccines? Make an informed decision with this book. Classic on the debate which has raged for a century.
    Anne Rooney - Dangerous and inaccurate nonsense
    Stephen Puetz 2011 "Universal Wave Series"
  • Sacha Dobler 2018 "Solar History: The Connection of Solar Activity, War, Peace and the Human Mind in the 2nd Millennium" ISBN: 1730722873
  • [532] Ida Honorof, Eleanor McBean 1977 "Vaccination : the silent killer. A clear and present danger" Honor Publications
  • Trung Nguyen, Eleanor McBean, Sue Martson, Ida Honorof "Vaccines: The Biggest Medical Fraud in History", History of Vaccination, Book 26, Kindle Edition
  • This topic fits well into my theme "Lies, Damned Lies, and Scientists". Even though scientists are not the only group involved, the theme applies universally to all homo sapiens.
    See also : "Pandemics, health, and the sun" (menu above).

    Table of Contents :

  • COVID-19 data and models
  • Corona virus models
  • Questions, Successes, Failures
  • Thunderbolts.info - the Electric Universe and Health

  • Michael Berk, Seetal Dodd, Margaret Henry 2006 "Do ambient electromagnetic fields affect behaviour? A demonstration of the relationship between geomagnetic storm activity and suicide" Bioelectromagnetics Journal, University of Melbourne, Australia (both images above come from this paper)
  • Tsutomu Nishimura, I-Ju Tsai, Hiroyuki Yamauchi, Eli Nakatani, Masanori Fukushima, Chung Y Hsu Feb2020 "Association of geomagnetic disturbances and suicide attempts in Taiwan, 1997-2013: a cross-sectional study"

    Eileen Mckusick gave a presentation at an Electric Universe (Thunderbolts) conference, but that video is only accessible to members, so I have provided a link to her website. Another site that has posted an article on her work is Sarah Kalvin's …

    Ben Davidson & Suspicious0bservers.org - Space Weather and health

    Robert Prechter - Socionomics, the first quantitative sociology?

    Elliott Wave Theory is well entrenched in stock market analysis. Here I take quotes from the Investopedia website, which shows its relevance to the investment community. This is also shown by the significant portion of Elliott Wave technical analysis on TradingView.com, even though a vast array of other trading toolsets are used. It …

    "... Ralph Nelson Elliott developed the Elliott Wave Theory in the 1930s. Elliott believed that stock markets, generally thought to behave in a somewhat random and chaotic manner, in fact, traded in repetitive patterns.[1] In this article, we

  • Investopedia Introduction to Elliott Wave Theory
  • [1] Ralph Nelson Elliott. "The Wave Principle," Pages 3-4. Lula Press, 2019

    Stephen Puetz - the second universal wave series after the Mayans?

  • Bill Howell, "…"
  • Hoyte & Schatten year? - solar influence on climate & natural systems, graphs
  • Stephen Puetz - Universal Cycle Theory - Puetz has developed a majestic "Universal Wave Series" that covers a huge number of [natural, human] well-established periodicities, and best illustrates the ensemble of astronomical influences. His work is reminiscent of the system of 20+ Mayan calendars, but goes vastly beyond it! You …
    We have been in an exceptionally low sunspot minimum of longer than usual duration, as shown in the NOAA cycle chart. By far the best source of information that I …

    Spanish flu death estimate from Wikipedia

  • This topic fits well into my theme "Lies, Damned Lies, and Scientists". Even though scientists are not the only group involved, the theme applies universally to all homo sapiens.
  • WebSite: https://StructuredAtom.org/
  • "Atom Viewer" 3D rotatable online model to see element structures according to SAM :
  • transmutation of fission wastes - Howell: eg [Am, Cm, etc]: Bromerly papers
    see SAFIRE & Aureon.ca for actual experimental results
    see above: "Nuclear [material, process, deactivate]s"
    Institute for Energy and Environmental Research (viewed 05Jan2024) "Fissile Material Basics" https://ieer.org/resource/factsheets/fissile-material-basics/
  • J.E. Kaal, A. Otte, J.A. Sorensen, J.G. Emming 2021 "The nature of the atom" www.Curtis-Press.com, 268pp ISBN 978-1-8381280-2-9 https://StructuredAtom.org/
    "Atom Viewer" 3D rotatable online model to see nucleus structures for each element according to SAM :
    full video transcript: Kaal, The Proton-Electron Atom
    see above: "Nuclear [material, process, deactivate]s"
    Institute for Energy and Environmental Research (viewed 05Jan2024) "Fissile Material Basics" https://ieer.org/resource/factsheets/fissile-material-basics/
  • [flattened table - only headings recoverable] columns: planets [Sun, Mercury, Venus, Mars, Jupiter, Saturn]; rows: civilisations [Mesopot, Hindu, Egypt, Hebrew, Chinese, Greek, Roman, Japan, Maya, Inca]; cell roles: [civiliser, tragedy, knowledge letters, love, bad guy, war destroyer]
  • Stephen Puetz, at the Progressive Science Institute, Honolulu, HI, USA, conceived of the "Universal Waves Series" (UWS), which is by far the most [broad, deep, stunning, insightful] concept that I am aware of for relating periodicities of phenomena across [physics, astronomy, geology, climate, weather, evolutionary biology, history, crops, financial markets, etc]. A great many of the [known, reported] periodicities are described by the UWS as a "factor of 3" series, where each successive cycle is 3 times [longer, shorter] than its neighbours.
    "... Sympathetic resonance or sympathetic vibration is a harmonic phenomenon wherein a passive string or vibratory body responds to external vibrations to which it has a harmonic likeness. The classic example is demonstrated with two similarly-tuned tuning forks. ..." (wikipedia)
    "... The frequency splitting phenomenon (FSP) is a critical issue in wireless power transfer (WPT) systems. When the FSP exists, the load power will sharply increase and can be dozens of times of the power obtained at the resonant frequency if the driving frequency varies from the resonant frequency, which seriously affects the system safety. ..." (Liu etal [9])
    "... Energy harvesting (also known as power harvesting or energy scavenging or ambient power) is the process by which energy is derived from external sources (e.g., solar power, thermal energy, wind energy, salinity gradients, and kinetic energy, also known as ambient energy), captured, and stored for small, wireless autonomous devices, like those used in wearable electronics and wireless sensor networks. ..." (wikipedia)
    A partial table is provided in Puetz - Universal Waves Series tabTable, from 2014.txt, although keep in mind that the periodicities are adjusted slightly as more and more data is analysed. On p52 of [4], Puetz noted that all his UWS cycles up to the publication of the book in 2009 "... relate closest to either gravitational or electromagnetic forces. Nothing can be found related to either the strong nuclear force or the weak nuclear force. Nevertheless, neither force should be completely discounted as a potential factor in modulating these cycles. ...". As I note in the section "General Relativity (GR) Space-Time", and since ~2010 elsewhere on my website, I am NOT fond of [GR, QM, dark [matter, energy], [strong, weak] nuclear force]. I do retain them in the context of "multiple conflicting hypotheses".
    Glenn Borchardt collaborated with Puetz in [3]. He emphasizes vortex motion and his "concept of infinity" from 2004 to partially explain Puetz's … Most recently, in a paper co-authored with Kent Condie of the New Mexico Institute of Mining and Technology, Socorro, NM, USA [4], Puetz hypothesized causes for a number of geological series. While these were assessed, definitive conclusions cannot be drawn :
  • The Puetz "acceptance criteria" as member of UWS for a periodicity is that the periodicity must be within ~5% (if I remember correctly) of one of the official Puetz "Universal Wave Series" (UWS) or "Half Universal Wave Series" (HUWS) lambda (frequency or period). I ignore the "Double UWS" for much of this webPage, as it appears that Puetz is noww more focused on the HUWS instead. But was this on a linear or log scale? (should be log scale - need to check). Even for a 3/2 mnimum factor (HUWS), 5% is a very tight criteria, somewhat related to the standard 95% confidence interval concept that is commonly used.
    This acceptance criteria tells us something important : if Puetz [H]UWS frequencies make up a non-ramdom portion of all known periodicities, then it really begs the question of why. Possible phenomenological explanations are discussed in , but on this webPage alysis is uniquely focussed on the dynamical implications of the [H]UWS, irrespective of phenomena that my drive it.
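    To make the ~5% acceptance test concrete, here is a minimal sketch in [bash, awk]. The anchor period of 57.0 years is a purely hypothetical stand-in (substitute real [H]UWS values from Puetz's tables); the nearest series member is found on a log scale, as per the "should be log scale" comment above.

      #!/bin/bash
      # uws_check.sh - minimal sketch, NOT Puetz's actual tables.
      # Is candidate period $1 within ~5% of the nearest member of a
      # "factor of 3" series anchored at a HYPOTHETICAL base period?
      awk -v p="$1" -v base=57.0 'BEGIN {
          r = log(p/base) / log(3)    # fractional factor-of-3 steps from anchor
          n = (r >= 0) ? int(r + 0.5) : -int(-r + 0.5)  # nearest whole step (log scale)
          nearest = base * 3^n
          pct = (p/nearest - 1) * 100
          if (pct < 0) pct = -pct
          printf "nearest member: %.2f, deviation: %.2f%%, %s\n",
                 nearest, pct, (pct <= 5 ? "within ~5%" : "outside")
      }'

    For example, with the hypothetical anchor 57.0, the member nearest a candidate period of 172.1 is 171.0 (57.0 x 3), a deviation of ~0.6%, so that candidate would pass; a candidate of 230 would not.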
  • The model here is DEFINITELY NOT suitable for application to [trade, invest]ing! It … Users only have to set up the basic chart and symbols in TradingView based on my chart PuetzUWS [time, price] multiFractal mirrors, SPX 1872-2020. To do so you must be a TradingView subscriber. After that, copy over my PineScript coding, which you can find on my TradingView page - click on "SCRIPTS", and select my script "PuetzUWS [time, price] multifractal of detrended SPX 1871-2020". Further setup details are given below.
    This webPage is for users of the chart. Details about my Pine Script program to generate it, and how programmers can modify it for other [symbol, chart, fractal type, additional measure]s, are provided on my TradingView PineScript of priceTimeFractals webPage.
    The above images are snapshots of my TradingView chart. Not shown are diverse market symbols that I have added, simply by using the normal TradingView capabilities. For the purposes of this discussion, I will focus on the SPX US stock market index symbol. Both the symbol name and the chartline that represents it are colored light blue. For this chart, SPX is the "primary market symbol", so normally volume bars would appear at the bottom of the chart. However, these are not available for SPX (?), whereas SPX500USD does provide them.
    The above charts are generated by my [written, adapted] Pine Script program that super-imposes fractal [time, price] grids. WARNING : You probably have to be a subscriber to TradingView to access the actual program that is being used. But you can also access an upload of my working copy (which I upload infrequently to my website).
  • @fract has interesting fractal ideas and colorful charts. In particular, he does [time, price] multi-fractals in his own way, and he has an interesting emphasis on the [rise, decline] angles.
  • @quantguy One of the more consistent and solid day-by-day analyses. He combines price Fibonaccis and his own signal-processing based (?) momentum indicators (or phase change - I forget).
  • @Deus - consistent, detailed application of Elliott Wave theory, which bears some relationship to [time, price] multi-fractals.
  • @VasilyTrader - very prolific, kind of fun, and has a huge following. Vasily combines Fibonaccis, patterns, etc.
  • @tntsunrise
  • @chrism665 very powerful thinking, relatively rare postings
  • @cryptobullethbtcxlm - powerful commentary and ideas. Mix of approaches.
  • @samitrading - very creative and insightful, but I haven't …
  • @EsotericTrading - I have only just noticed a whole "Astrology" group in TradingView. EsotericTrading is one of many active in this area. This is definitely NOT my area, and I …
    Users cannot easily change the SPX basis of the chart, or the PuetzUWS [time, price] fractals. That also requires changes to the underlying PineScript coding, as described on my webPage TradingView PineScript of priceTimeFractals. You can do that if you are reasonably comfortable with PineScript (hopefully more comfortable than I).
  • Use of shorter-timeframe trends - as an example, look at my ??missinglink?? covid Mar2020-Dec2021 Spreadsheet of [data, trend, relStdDev]. Comments along these lines are provided in my very incomplete Howell - preliminary frequency analysis, Fourier series as origins of Puetz UWS. I don't …
  • Howell - References related to Puetz [H]UWS
  • Dynamics of complex systems - for example [oscillators, electrical engineering signal processing, Fractional Order Calculus (FOC) of memristor arrays, etc, etc]. My own simplistic analysis looks at frequency [harmonics, sympathetic resonance, splitting, scavenging, etc]. Another popular series of a similar nature is JM Hurst's …
  • Howell - Quick introduction to Puetz Universal Wave Series (UWS) fractal time).html
  • Howell - comments on Puetz UWS, the greatest of cycles, human implications.odt
  • Howell - preliminary frequency analysis as origin of Puetz UWS.html
  • Howell - possible phenomenological origins of Puetz Universal Wave Series (UWS).html
  • Puetz - Universal Waves Series, spreadset of wave data.ods
  • /home/bill/web/ProjMini/PuetzUWS/Puetz finance - 88 year cycle and harmonics 13Jan2019.png, for years 1310-2019 AD, major historical financial stress events
  • /home/bill/web/ProjMini/PuetzUWS/Puetz finance - 1.17, 3.5, 10.5 y cycles across countries & indexes 11Sep2019.png, 1965-2020
  • Howell - SP500 [Puetz time, Fibonacci price]Fractals [trend, relStdDev] 1926-2020.txt
  • Howell- TradingView chart "USA multi-indicator" 1872-2020 SP500 index, ratio of opening price to semi-log detrended price.html
  • Howell - TradingView PineScript of priceTimeFractals.html
  • Howell - TradingView PineScript [description, problem, debug].html
  • Howell - TradingView PineScript of priceTimeFractals.html
  • 0_PineScript notes.txt - details of software [code, bug, blogSolutions]
  • 0_PineScript errors.txt - [error, solution]s that keep coming back
  • Howell - References related to Puetz [H]UWS.html
  • Kivanc Ozbilgics Turtle Trade PineScript - documention.txt
  • Kivanc Ozbilgics Turtle Trade PineScript, plus 8-year detrended SP500.txt
  • RicardoSantos, Function Polynomial Regression.txt
  • sickojacko maximum [,relative] drawdown calculating functions.txt
  • TradingView auto fib extension.txt
  • While these ideas have floated around my [webSite, email, note]s for a long time, I first put them together to provide a collection of ideas for the review of John Tamny 2021 "When politicians panicked: The new corona virus, expert opinion, and a tragic lapse of reason".
    Often, when reading a modern-day "new" [thought, concept, theory, belief], I can … The theme arose from my reading of general history, and the oft-commented long persecution and tribulations of the Jews : from their escape from Egypt (was this the Amalekites, possibly the [famous, brutal] Hyksos regime, as noted by Immanuel Velikovsky?), to the diaspora aftermath of the 167-160 BC Maccabean Revolt, to their [torture, slaughter] for "causing" the black death, to the [Soviet, Nazi] death camps. (Apparently, Stalin was well underway preparing gulags for a jewish purge, but died before initiating it? His NKVD apparently returned Jews who had escaped from Germany to the East back to the Gestapo, whom the Soviets had trained in the [construction, operation] of gulags?) But this thought struck me first with respect to the oft-[mistaken, exaggerated] belief of a core role of Jews in the [development, progress, success] of [Soviet, Nazi] socialism and the [terror, destruction, death] that ensued. I suspect that every culture goes through challenges such as these.
  • QNial manuals and information, including :
  • QNial language description
  • QNial dictionary
  • Howell's MindCode [description, direction] webPage
  • Howell's MindCode [data, optr]s - one-line description per operator. Browse the text files in the same directory to get an idea of my [ToDos, concepts, progress] - but this is a mess to go through.
  • Howell's callerID Spiking Neural Networks (callerID-SNNs) - This webPage describes my [direction, concept]s that will be built at least partially on MindCode.
  • callerID-SNN [programming, note]s directory
  • callerID-SNN programming - QNial program file

    Go to: Home page of www.BillHowell.ca



  • Key [results, comments]
  • Courtney Museum display of his "legend of the Queneesh" (Origins of the white whale glacier of Comox following the Great flood) :
    Mini-paintings and ceramic vases
  • 28May2016 Paul Vaughan - Ring_of_Fire and Volcano_Explosivity_Index versus El_Nino_La_Nina. Here is a breakthrough graph from Paul that relates volcanic activity to El Nino / La Nina. This makes an interesting century-scale [complement, contrast] to a "short term" model for major earthquakes by Ben Davidson and colleagues of Suspicious0bservers.org (that …). A related …
  • data from [neuroscience, psychology] : quick list, more details
  • success in [definitions, models] of [consciousness, sentience]. However, for reasons given on that webPage, only Stephen Grossberg … A few models of consciousness are summarized on my webPage A quick comparison of Consciousness Theories. Only a few concepts are listed, almost randomly selected except for [Grossberg, Taylor]. Stephen Grossberg may have the ONLY definition of consciousness that is directly tied to quantitative models for lower-level [neuron, general neurology, psychology] data. Foundational models, similar in nature to the small number of general theories in physics that describe a vast range of phenomena, were derived over a period of ?4-5? decades BEFORE they were found to apply to consciousness. That paralleled their use in very widespread …
  • John Taylor
  • references - Grossberg and …
  • see incorporate reader questions into theme webPage
    see Navigation: [menu, link, directory]s
  • p153 Howell: grepStr
  • p190 Howell: [neural microcircuits, modal architectures] used in ART -
    background colours in the table signify :
    red - cognitive-emotional dynamics
    green - working memory dynamics
    black - see [bottom-up, top-down] lists

    bottom-up filter | top-down expectation | purpose
    instar learning | outstar learning | p200fig05.13 Expectations focus attention: [in, out]star often used to learn the adaptive weights. Top-down expectations to select, amplify, and synchronize expected patterns of critical features, while suppressing unexpected features.
    LGN Lateral Geniculate Nucleus | V1 cortical area | p192fig05.06 focus attention on expected critical features
    EC-III entorhinal stripe cells | CA1 hippocampal place cells | p600fig16.36 entorhinal-hippocampal system has properties of an ART spatial category learning system, with hippocampal place cells as the spatial categories
    auditory orienting arousal | auditory category | p215fig05.28 How a mismatch between bottom-up and top-down patterns can trigger activation of the orienting system A and, with it, a burst of nonspecific arousal to the category level. ART Matching Rule: TD mismatch can suppress a part of the F1 STM pattern; F2 is reset if degree of match < vigilance.
    auditory stream with/without [silence, noise] gaps | perceived sound continues? | p419fig12.17 The auditory continuity illusion: backwards in time - how does a future sound let a past sound continue through noise? Resonance!
    visual perception, learning and recognition of visual object categories | motion perception, spatial representation and target tracking | p520fig14.02 pART predictive Adaptive Resonance Theory. Many adaptive synapses are bidirectional, thereby supporting synchronous resonant dynamics among multiple cortical regions. The output signals from the basal ganglia that regulate reinforcement learning and gating of multiple cortical areas are not shown.
    EEG with mismatch, arousal, and STM reset events | expected [P120, N200, P300] event-related potentials (ERPs) | p211fig05.21 Sequences of P120, N200, and P300 event-related potentials (ERPs) occur during oddball learning EEG experiments under conditions that ART predicted should occur during sequences of mismatch, arousal, and STM reset events, respectively.
    Cognitive | Emotional | p541fig15.02 nSTART neurotrophic Spectrally Timed Adaptive Resonance Theory: hippocampus can sustain a Cognitive-Emotional resonance that can support "the feeling of what happens" and knowing what event caused that feeling. Hippocampus enables adaptively timed learning that can bridge a trace conditioning gap, or other temporal gap between Conditioned Stimulus (CS) and Unconditioned Stimulus (US).

    Thanks to many expert reviewers for useful comments. Since science is about self-correction, let me know under juergen@idsia.ch if you can spot any remaining error. Many additional relevant publications can be found in my … (Juergen Schmidhuber; Creative Commons License.) Related pages :
  • 1991: Neural nets learn to program neural nets with fast weights (1991) - by Juergen Schmidhuber
  • Goedel Machine
  • End-To-End Differentiable Fast Weights: NNs Learn to Program NNs (1991) - slow neural net programs fast neural net through additive outer products
  • End-to-End Differentiable Sequential Neural Attention 1990-93
  • Reinforcement learning robotino double pole balancer with neuroevolution for fast weights
  • A modern self-referential weight matrix (2022) based on the one of 1992
  • Transformers with linearized self-attention in Neural Computation 1992, equivalent to fast weight programmers (apart from normalization), separating storage and control. Key/value was called FROM/TO. The attention terminology was introduced at ICANN 1993.

    Cauchy (who in 1847 proposed gradient descent) …

    In 1805, Adrien-Marie Legendre published what is now often called linear regression, via the method of least squares …

    In 1972, Shun-Ichi Amari made the Ising recurrent net adaptive. This was the first published learning artificial recurrent neural network.

    Alan Turing …

    In 1958, Frank Rosenblatt had multilayer perceptrons whose last layer learned …

    In 1965, Alexey Ivakhnenko & Valentin Lapa introduced the first working deep learning algorithm for deep MLPs with arbitrarily many hidden layers.

    In 1967-68, Shun-Ichi Amari trained deep MLPs by stochastic gradient descent.

    In 1960, Henry J. Kelley had a precursor of backpropagation in the field of control theory.

    In 1979, Kunihiko Fukushima introduced the convolutional neural network (CNN) architecture.

    Leonardo Torres y Quevedo, the 20th century's first pioneer of practical AI …

    Note that a separate webPage lists a very small portion of Stephen Grossberg's …
  • J.E. Kaal, A. Otte, J.A. Sorensen, J.G. Emming 2021 "The nature of the atom" www.Curtis-Press.com, 268pp ISBN 978-1-8381280-2-9 https://StructuredAtom.org/
  • rationalwiki.org "Quantum consciousness" (last update 07Nov2022, viewed 16Jul2023)
    also critiques of the article above
  • Terrence J. Sejnowski 21Aug2023 "Large Language Models and the Reverse Turing Test", Neural Computation (2023) 35 (3): 309–342 (33 pages) https://direct.mit.edu/neco/issue (also copy in case original link fails)
  • Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin 12Jun2017 "Attention Is All You Need" [v5] Wed, 6 Dec 2017 03:30:32 UTC https://arxiv.org/abs/1706.03762
  • Wikipedia Consciousness
  • kozmoklimate@gmail.com +1-403-734-2118
    Full paper: Wickson 2007 - Galactic Theory of Climate.pdf
  • Grossbergs list of [chapter, section]s.html - Note that the links on this webPage can be used to individually view all captioned images.
  • directory of captioned images - users can easily view all of the captioned images, especially if they are downloaded onto their computer. Many image viewers have [forward, backward] arrows to go through these sequentially, or right-click to open a link in a window.
  • core bash script for extracting captions from the webPage listing, converting them to images, then vertically appending them to the figure (a minimal sketch appears just after this list).
  • my bash utility to [position, move] windows. This is normally used to start up 6 workspaces on my computer (Linux Mint Debian Edition), each with 5-10 apps in separate windows.
  • Prepared themes with links to the captioned images - there are a huge number of themes from the book to focus on. I have prepared a few as examples.
  • What is consciousness? - video example not ready as of 30Aug2023. I save videos as "ogv/ogg" files, an open standard format. The "VLC media player" is the program that I use to view them. I have found that although some of the standard video viewers complain, when pushed, ogv files can be viewed with them.
  • A very primitive bash script is used to generate the search results for ALL themes in the Themes webPage. Many readers will already have far better tools for this from the Computational Intelligence area etc.
    Because the theme webPage is automatically generated, and frequently re-generated as I update the list of themes and sources, I do NOT edit the file directly. The output format can be confusing, due to the special formatted [chapter, section] headings, and large tables which will keep the readers guessing whether they are still within the theme they want to peruse (as per the Table of Contents). Perhaps I can upgrade the searches in time to reduce the confusion, and to split themes in a better way.
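    For readers who want to reproduce the caption-appending step without the original script, here is a minimal sketch; it assumes ImageMagick is installed, and the [file name, caption text] are hypothetical stand-ins.

      #!/bin/bash
      # minimal sketch (assumes ImageMagick): render caption text as an image,
      # then vertically append it below the figure. Names are hypothetical.
      fig='p200fig05.13.png'
      caption='Expectations focus attention ...'

      # render the caption, word-wrapped to the figure's pixel width
      width=$(identify -format '%w' "$fig")
      convert -background white -fill black -size "${width}x" \
          caption:"$caption" caption_tmp.png

      # -append stacks images vertically: figure on top, caption below
      convert "$fig" caption_tmp.png -append "${fig%.png} captioned.png"
      rm caption_tmp.png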
  • list of [chapter, section]s
  • list of [figure, table]s
  • selected index items - I have NO intention of re-typing the entire index!
  • Grossberg quotes
  • reader Howell notes - this is an example of building your own webPage of [note, comment, thought]s when reading the book, which can then be added to the bash script for searches. These notes are in addition to [figure, table] captions, mostly comprised of text within the image, but also including quotes of text in the book. Rarely, it includes comments by Howell preceded by "Howell".
    The latter are distinct from "readers notes" (see, for example : Grossberg …). The reader may want to create their own file of comments based on this example, or augment this list with their [own, others'] notes. More importantly, and as an easy first adaptation of the Grossbergs [core, fun, strange] concepts.html thematic listings, you probably want to get rid of Howell's …
  • downloading the entire webDirectories below to some directory on your filesystem, say {yourDir} : TrNNs_ART, bin (hopefully I …)
  • adapt the bash script (bash script: thematic [search, collect]s.sh) to your own system, and run. This will require re-defining several environmental variables for your system, such as the hypothetical placeholders sketched below :
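    (a minimal sketch - these variable names are hypothetical, since the originals did not survive this listing; substitute paths for your own {yourDir})

      # hypothetical placeholders - adjust to your own filesystem
      export d_web="$HOME/web"                 # root of downloaded webDirectories
      export d_TrNNs_ART="$d_web/TrNNs_ART"    # the TrNNs_ART webDirectory
      export d_bin="$d_web/bin"                # bash [script, utility]s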
  • thematic sub-lists appear in the webPage "Grossberg …"
  • 29Sep2023 Here is a list of various problems with the captioned images and their links on the webPage Grossbergs list of [figure, table]s.html :
    10Aug2023 I haven't …
  • How difficult would it be to augment "Transformer Neural Networks" (TrNNs) with Grossberg's … ? 10Aug2023 This webPage has not yet been worked on. It will touch on one of three questions of this webSite as mentioned in the Introduction :
  • How difficult would it be to augment "Transformer Neural Networks" (TrNNs) with Grossberg's … ? 10Aug2023 I haven't …
  • Navigation: [menu, link, directory]s
  • Theme webPage generation by bash script
  • Notation for [chapter, section, figure, table, index, note]s
  • incorporate reader questions into theme webPages
    DanNet won 4 computer vision contests in a row (15 May 2011, 6 Aug 2011, 1 Mar 2012, 10 Sep 2012)[GPUCNN5] and was able to greatly improve steel defect detection.[ST] Our CVPR paper on DanNet appeared;[GPUCNN3] 5 months later, the similar GPU-accelerated AlexNet won the ImageNet[IM09] 2012 contest.[GPUCNN4-5][R6] Our CNN image scanners were 1000 times faster than previous methods.[SCAN] The VGG network (ImageNet 2014 winner)[GPUCNN9] and other highly cited CNNs[RCNN1-3] …

    ResNet, the ImageNet 2015 winner[HW2] (Dec 2015), is currently the most cited NN … NNs with rapidly changing "fast weights" were introduced by v.d. Malsburg (1981) and others.[FAST,a,b] Deep learning architectures that can manipulate structured data such as graphs[T22] were the work of Baldi and colleagues.[BA96-03] Today, graph NNs are used in numerous applications.

    Werbos,[BP2][BPTT1] Williams,[BPTT2][CUB0-2] and others[ROB87][BPTT3][DL1] analyzed ways of implementing gradient descent … [BB2][NAN1-4][NHE][HEL] … there is recent renewed interest in such methods.[NAN5][FWPMETA6][HIN22] A version of this became popular under the moniker "dropout".[Drop1-4][GPUCNN4] … Two dueling NNs (a probabilistic generator and a predictor) try to maximize each other's loss (using stochastic units[AC90] like in the much later StyleGANs[GAN2]): the predictor NN minimizes its error, while the generator NN tries to make outputs that maximize this error - one net's loss is the other net's gain.

    4 years before a 2014 paper on GANs,[GAN1] my well-known 2010 survey[AC10] summarised the generative adversarial NNs of 1990 as follows: … Early adversarial machine learning settings[S59][H90] neither involved unsupervised NNs nor were about modeling data nor used gradient descent.[AC20] This principle has been widely used for exploration in Reinforcement Learning[SIN5][OUD13][PAT17][BUR18] and for synthesis of realistic images,[GAN1,2] although the latter domain was recently taken over by Rombach et al. … which is now considered a remaining grand challenge.[LEC] The early 1990s, however, saw first exceptions: NNs that learn to decompose complex spatio-temporal observation sequences into compact but meaningful chunks[UN0-3] (see further below), and NN-based planners of hierarchical action sequences for compositional learning,[HRL0] as discussed next. This work injected concepts of traditional "symbolic" hierarchical AI[NS59][FU77] into end-to-end differentiable "sub-symbolic" NNs: end-to-end differentiable NN-based subgoal generators for Hierarchical Reinforcement Learning (HRL).[HRL0] Soon afterwards, this was also done with … problem."[LEC]

    Compare other NNs that have "worked on command" since April 1990, in particular, for learning selective attention,[ATT0-3] artificial curiosity and self-invented problems,[PP][PPa,1,2][AC] upside-down reinforcement learning[UDRL1-2] and its generalizations.[GGP] Recently, Transformers[TR1] have been all the rage, e.g., generating human-sounding texts.[GPT3] … were first published in March 1991.[FWP0-1][FWP6][FWP] These so-called "Fast Weight Programmers" or "Fast Weight Controllers"[FWP0-1] separated storage and control like in traditional computers, but in an end-to-end-differentiable, adaptive, fully neural way (rather than in a hybrid fashion[PDA1-2][DNC]). The "self-attention" in standard Transformers[TR1-4] combines this with a projection and softmax (using …).

    Today … layers of neurons or many subsequent computational stages.[MIR] … ones[DL1-2] (but see a 1989 paper[MOZ]) … of arbitrary depth.[DL1] … scales:[LEC] the Neural Sequence Chunker[UN0] … "very deep learning" tasks of depth > 1000[UN2] (requiring …) … the Neural History Compressor.[UN3] (See also recent work on unsupervised NN-based abstraction.[OBJ1-5]) More than a decade after this work,[UN1] … called Deep Belief Networks (DBNs)[UN4] … (or negative log probability) of the data representation in the level below.[HIN][T22][MIR] NN distillation was also republished many years later,[DIST2][MIR][HIN][T22] and is widely used today. … used by Transformers[TR1-6] … together with unsupervised/self-supervised pre-training for deep learning.[UN0-3] See the previous section. … his diploma thesis, which I had the pleasure to supervise.[VAN1] First he implemented the Neural History Compressor above, but then did much more: in both cases, learning fails (compare[VAN2]). This analysis led to basic principles of what … diploma thesis,[VAN1] which I consider one of the most important documents in the history of machine learning. It also provided essential insights for overcoming the problem, through basic principles (such as constant error flow) of what we called LSTM in a tech report of 1995.[LSTM0] After the main peer-reviewed publication in 1997[LSTM1][25y97] (now the most cited NN article of the 20th century[MOST]) … application of LSTM to speech (2004).[LSTM10] 2005 saw the first publication of LSTM with full backpropagation through time and of bi-directional LSTM[LSTM3] (now widely used). Another milestone of 2006 was the training method "Connectionist Temporal Classification" or CTC[CTC] for simultaneous alignment and recognition of sequences. Our team successfully applied CTC-trained LSTM to speech in 2007[LSTM4] (also with hierarchical LSTM stacks[LSTM14]) … NNs and traditional approaches such as Hidden Markov Models (HMMs).[BW][BRI][BOU][HYB12][T22] LSTM was soon used for everything that involves sequential data, such as speech[LSTM10-11][LSTM4][DL1] and videos. Google … Many other companies adopted this.[DL4] … on-device speech recognition of 2019 (now on your phone, not on the server). In 1995, we already had an excellent neural probabilistic text model[SNT] whose basic concepts were … Nakamura and Shikano … In 2001, we showed that LSTM can learn languages unlearnable by traditional models such as HMMs[LSTM13] … (10 billion clicks …),[FB17][DL4] Apple … image caption generation[DL4] & automatic email answering[DL4] etc. Business Week called LSTM "arguably the most commercial AI achievement."[AV1] … have "LSTM" in their title.[DEC] … (previous NNs had at most a few tens of layers).
    Microsoft … The earlier Highway Nets perform roughly as well as their ResNet versions on ImageNet.[HW3] Variants of highway gates are also used for certain algorithmic tasks where the pure residual layers do not work as well.[NDR] … is all about NN depth.[DL1] … the Highway Net version called ResNet is the most cited NN of the 21st.[MOST] (Citations, however, are a highly questionable measure of true impact.[NAT1]) Reinforcement Learning (RL)[KAE96][BER96][TD3][UNI][GM3][LSTMPG] … expected cumulative reward signals.[DL1] … formulated in the general RL framework.[UNI] Monte Carlo (tree) search (MC, 1949),[MOC1-5] dynamic programming (DP, 1953),[BEL53] artificial evolution (1954),[EVO1-7]([TUR1], unpublished), alpha-beta-pruning (1959),[S59] control theory and system identification (1950s),[KAL59][GLA85] stochastic gradient descent (SGD, 1951),[STO51-52] and universal search techniques (1973).[AIT7] … system identification,[WER87-89][MUN87][NGU89] DP and its online variant called Temporal Differences (TD),[TD1-3] artificial evolution,[EVONN1-3] and policy gradients.[GD1][PG1-3] Many additional references on this can be found in Sec. 6 of the 2015 survey.[DL1]

    When there is a Markovian interface,[PLAN3] RL with DP/TD/MC-based FNNs can be very successful, as shown in 1994[TD2] (master-level backgammon player) and the 2010s[DM1-2a] (superhuman players for Go, chess, and other games). For histories of previous inputs, our combinations of RL algorithms and LSTM[LSTM-RL][RPG] have become standard. For example, in 2018, a PG-trained LSTM was the core of OpenAI's … beat a pro player in the game of Starcraft, which is theoretically harder than Chess or Go[DM2] in many ways … OpenAI Five, which learned to defeat human experts in the Dota 2 video game (2018).[OAI2] … commonsense reasoning[MAR15] and learning to think.[PLAN4-5] … time scales?[LEC] We published answers to these questions in 1990-91: self-supervised … 1997[AC97][AC99][AC02] and 2015-18.[PLAN4-5] … In the 1st century,[SHA7a][RAU1] a machine by Heron of Alexandria was perhaps the first machine with a stored program.[BAN][KOE1] It used pins on … Leibniz designed the first machine (the step reckoner) that could perform all four arithmetic operations, and the first with a memory.[BL16] … punched cards (1679),[L79][L03][LA14][HO66] and published the chain rule[LEI07-10] (see above), an essential ingredient of deep learning and modern AI.

    Leonardo Torres y Quevedo, the 20th century's first pioneer of practical AI …
    Leonardo Torres y Quevedo (mentioned in the introduction) became … at the 1951 Paris AI conference.[AI51][BRO21][BRU4] The corresponding patent of 1936[ZU36-38][RO98][ZUS21] predating Claude Shannon's … principles of binary computation (1679)[L79][LA14][HO66][L03] greatly simplified the hardware.[LEI21,a,b] … Church[CHU] (1935) … conditional jump instruction.[RO98] John Atanasoff (the "father of tube-based computing"[NASC6a]) … Julius Edgar Lilienfeld in 1925.[LIL1-2] … used to break the Nazi code.[NASC6] … someone other than Zuse (1941)[RO98] was Howard Aiken … and the 1948 upgrade of ENIAC, which was reprogrammed by entering numerical instruction codes into read-only memory.[HAI14b] … with several transistors on a common substrate (granted in 1952).[IC49-14] In 1959, Robert Noyce presented a monolithic IC.[IC14] ICs/GPUs of today (2022) contain many billions of transistors (almost all of them of Lilienfeld's type) … Moore's law … According to Bremermann (1982),[BRE] … as previously noted back in 2004.[OOPS2][ZUS21] … are actually light beams)[DL2] … are expected to become even much more important than they are today.[DL2] He combined Georg Cantor's … with the foundational work by Gottlob Frege[FRE] (who introduced the first formal language in 1879), Thoralf Skolem[SKO23] (who introduced primitive recursive functions in 1923) and Jacques Herbrand[GOD86] (who identified …) … deductively equivalent[LE18] to the later Boolean Algebra of 1847.[BOO] … Turing Machine.[TUR] He rederived the above-mentioned result.[CHU][TUR][HIN][GOD21,21a][TUR21][LEI21,21a] In the same year of 1936, Emil Post published yet another independent universal model of computing.[POS] … the first high-level programming language.[BAU][KNU] … 1945[KNU] … in 1948.[ZU48] Compare Newell & Simon … of learning to predict future data from past observations.[AIT1][AIT10] With this concept,[AIT7][AIT5][AIT12-13][AIT16-17] … as well as applications to NNs.[KO2][CO1-3] … environments.[AIT20,22] He also derived the asymptotically fastest algorithm for all well-defined computational problems,[AIT21] … a beautiful pattern of exponential acceleration in it,[OMG] which I have presented in many talks since then, and which also made it into Sibylle Berg's … intervals: just a few decades or centuries or at most millennia.[OMG1] … Heron of Alexandria[RAU1] in the 1st century. The telephone (e.g., Meucci 1857, Reis 1860, Bell 1876)[NASC3] … the Haber-Bosch process for creating artificial fertilizer, without which the world could feed at most 4 billion people.[HAB1-2] … the root of today's … artificial curiosity and generative adversarial NNs for agents that invent their own problems (see above),[AC90-AC20][PP-PP2][SA17] Transformers with linearized self-attention (see above),[FWP0-6][TR5-6] distilling teacher NNs into student NNs (see above),[UN][UN0-3] … at multiple levels of abstraction and multiple time scales (see above),[HRL0-2][LEC] and other exciting stuff. Much of this has become very popular, and improved the lives of billions of people.[DL4][DEC][MOST] … (in my lab for decades[AC][AC90,AC90b]) will quickly improve themselves, restricted only by the fundamental limits of computability and physics … it,[ACM16][FA15][SP16][SA17] … make more and bigger AIs. Those who don't …

    This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. See the arXiv page.

    555+ References (and many more in the survey[DL1]). Selected entries and annotations:

[AC90] J. Schmidhuber. TR FKI-126-90, TUM, 1990. Introduced NN-based systems with intrinsic motivation. See later publications.[AC99][AC02]
[AC20] J. Schmidhuber. Generative Adversarial Networks are Special Cases of Artificial Curiosity (1990). Preprint arXiv/1906.04493. With a brief summary of the generative adversarial neural networks of 1990.[AC90,90b]
[AI51] The 1951 Paris AI conference. H. Bruderer[BRU4] calls that the first conference on AI.
[AM16] Blog of Werner Vogels, CTO of Amazon (Nov 2016).
[AMH1] S.-I. Amari, 1972. First publication of what was later sometimes called the Hopfield network[AMH2] or Amari-Hopfield Network,[AMH3] based on the (uncited) Lenz-Ising recurrent architecture.[L20][I25][T22]
[AMH2] J. J. Hopfield, 1982. The Hopfield network or Amari-Hopfield Network was first published in 1972 by Amari.[AMH1] [AMH2] did not cite [AMH1].
[AMH3] Mentions the recurrent Ising model[L20][I25] on which the (uncited) Amari network[AMH1,2] is based.
[ATT3] H. Larochelle, G. E. Hinton. Learning to combine foveal glimpses with a third-order Boltzmann machine. NIPS 2010. This work is very similar to [ATT0-2], which the authors did not cite; in fact, Hinton was the reviewer of a 1990 paper[ATT2] very similar to his own later work.[ATT3]
D. Bahdanau, K. Cho, Y. Bengio. Preprint arXiv/1409.0473, 2014-16. This work on soft "attention" did not cite Schmidhuber's earlier attention work.
[AV1] Bloomberg, May 15, 2018.
This work cited neither the relevant prior art of Sherrington & Kirkpatrick[SK75] & Glauber,[G63] nor the first working algorithms for deep learning of internal representations (Ivakhnenko & Lapa, 1965),[DEEP1-2][HIN] nor Amari's work. Even later surveys by the authors[S20][DLC] failed to cite the prior art.[T22]
Leibniz's formal Algebra of Thought (1686)[L86][WI48] was deductively equivalent[LE18] to the much later Boolean Algebra of 1847.[BOO]
[BP2] P. Werbos, 1982. First application of backpropagation[BP1] to NNs (concretizing thoughts in his 1974 thesis). More.[DL2]
[CNN1] K. Fukushima, 1979. English version: [CNN1+]. More in Scholarpedia.
[CNN1a] A. Waibel. Phoneme Recognition Using Time-Delay Neural Networks. Meeting of IEICE, Tokyo, Japan, 1987. First application of backpropagation[BP1-5] and weight-sharing to speech.
[CONN21] Since November 2021: comments on version 1 of the report[T22] in the Connectionists Mailing List, perhaps the oldest mailing list on artificial neural networks. Link to the archive.
J. Koutnik et al. A Clockwork RNN. ICML 2014, Beijing. Preprint arXiv:1402.3511 [cs.NE].
D. C. Ciresan, U. Meier, J. Masci, L. M. Gambardella, J. Schmidhuber. Flexible, High Performance Convolutional Neural Networks for Image Classification. International Joint Conference on Artificial Intelligence (IJCAI-2011, Barcelona), 2011. First superhuman result in 2011;[DAN1] this led to massive interest from industry. Now everybody is using this approach.
[DL3] Y. LeCun, Y. Bengio, G. Hinton. Deep Learning. Nature, 2015. A "survey" of deep learning that does not mention the pioneering works of deep learning.[T22]
[DL3a] Y. Bengio, Y. LeCun, G. Hinton (2021). Turing Lecture: Deep Learning for AI. Communications of the ACM, July 2021. Another "survey" of deep learning that does not mention the pioneering works of deep learning.[T22]
Google's greatly improved (CTC-based) on-device speech recognition runs on the phone, not the server.
Web site deeplearning.net of Y. Bengio's lab (Internet Archive), referring to Hinton's unsupervised pre-training for deep NNs[UN4] (2006), although such pre-training was introduced in 1991.[UN0-3] More on this under [T22].
[DM1] Preprint arxiv:1312.5602. The achievement claimed in the first sentence of the abstract of this earlier tech report version[DM1] was created earlier by Jan Koutnik et al. in Schmidhuber's lab.
R. Rombach et al. Preprint arXiv:2112.10752, LMU Munich, 2021.
Compare earlier neural networks learning to control dynamic external memories.[PDA1-2][FWP0-1]
[ELM1] Proc. Int. Conf. on Neural Networks, Vol. 2, 2004, pp. 985-990. This paper does not mention that the "ELM" concept goes back to Rosenblatt. A later overview also does not mention that the "ELM" concept goes back to Rosenblatt.
[FB17] Facebook blog by J.M. Pino, A. Sidorov, N.F. Ayan (August 3, 2017): over 4 billion automatic translations per day; see also The Verge, August 4, 2017.
[FWP] J. Schmidhuber. Neural nets learn to program neural nets with fast weights (1991). AI Blog, 26 March 2022. Fast Weight Programmers are an alternative[FWP0-1] to recurrent NNs, building on the fast weights[FAST,FASTa,b] of earlier work. Such Fast Weight Programmers[FWP0-6,FWPMETA1-8] can learn to memorize past data, e.g., by computing fast weight changes through additive outer products of self-invented activation patterns[FWP0-1] (now often called keys and values for self-attention[TR1-6]). The similar Transformers[TR1-2] combine this with projections and softmax; Transformers with linearized self-attention[TR5-6] are equivalent to the 1991 Fast Weight Programmers (apart from normalization).[FWP] In 1993, Schmidhuber introduced attention terminology in this context,[ATT] and RNNs that program themselves. See tweet of 2022 for 30-year anniversary.
Very similar to [FWP0-2], in both motivation[FWP2] and execution; this work on "attention" did not cite Schmidhuber's earlier work.
[FWP6] I. Schlag, K. Irie, J. Schmidhuber. Linear Transformers Are Secretly Fast Weight Programmers. ICML 2021. Preprint: arXiv:2102.11174.
[FWPMETA1] J. Schmidhuber. An introspective network that can learn to run its own weight change algorithm. In Proc. of the Intl. Conf. on Artificial Neural Networks, 1993. See also: J. Schmidhuber. Habilitation thesis, TUM, 1993.
Probably the first paper on using stochastic gradient descent[STO51-52] for NNs; compare the reverse mode of automatic differentiation or backpropagation.[BP1]
[GSR15] Google Research Blog, Sep 2015; see also Aug 2015. Alphr Technology, Jul 2015; 9to5google, Jul 2015. WIRED, Sep 2016; siliconANGLE, Sep 2016.
Blog post, Internet Archive, 2010. A blog post describing basic ideas[AC][AC90,AC90b][AC20] of GANs.
[GAN1] A description of GANs that does not cite Schmidhuber's original adversarial principle of 1990.[AC][AC90,AC90b][AC20][R2] This was number 1 on Hacker News.
Frankfurter Allgemeine Zeitung, 16/6/2021.
[GPT3] Preprint arXiv/2005.14165.
DanNet was the first CNN to win computer vision contests in 2011[GPUCNN2-3,5] (AlexNet and VGG Net[GPUCNN9] followed in 2012-2014). [GPUCNN4] emphasizes benefits of Fukushima's ReLUs.
Bengio claimed[YB20] methods that date back to 1991-93.[UN0-2][UN] An unsupervised learning algorithm related to Schmidhuber's earlier work addresses what Y. LeCun called an "open problem" in 2022.[LEC]
[HRL1] North-Holland, 1991. Extending TR FKI-129-90, TUM, 1990.
[HW1] R. K. Srivastava, K. Greff, J. Schmidhuber. Highway Networks. Preprints arXiv:1505.00387 (May 2015) and arXiv:1507.06228 (July 2015). Also at NIPS 2015. (Feedforward counterparts of the LSTM with forget gates[LSTM2] for RNNs.) ResNets[HW2] are a version of this where the gates are always open: g(x)=t(x)=const=1. Highway Nets perform roughly as well as ResNets[HW2] on ImageNet.[HW3] Variants of highway gates are also used for certain algorithmic tasks, where the simpler residual layers do not work as well.[NDR]
[HW2] arXiv:1512.03385 (Dec 2015). Residual nets are a version of Highway Nets.[HW1]
[HW3] arxiv:1612.07771 (2016). Also at ICLR 2017.
[HYB12] This work did not cite the earlier LSTM[LSTM0-6] trained by Connectionist Temporal Classification (CTC, 2006).[CTC] CTC-LSTM was successfully applied to speech in 2007[LSTM4] (also with hierarchical LSTM stacks[LSTM14]) and became the first superior end-to-end neural speech recogniser that outperformed the state of the art, dramatically improving Google's speech recognition, unlike the old hybrids with Hidden Markov models (HMMs).[BW][BRI][BOU] [HYB12] still used the old hybrid approach and did not compare it to CTC-LSTM. Later, however, Hinton switched to LSTM, too.[LSTM8]
[L20][I25] The Ising model, the first non-learning recurrent NN architecture, was introduced and analysed by Ernst Ising and Wilhelm Lenz in the 1920s.[L20][I25][K41][W45][T22] It settles into an equilibrium state in response to input conditions, and is the foundation of the first well-known learning RNNs.[AMH1-2]
Who Invented the IC? Preprint arXiv:1704.04760.
G. W. Leibniz. Mathematische Schriften, ed. C. Gerhardt, Berlin 1879, vol. 7, p. 223. English link.
arXiv:1607.06450, 2016. See tweet1. See tweet2. 19/5/2021.
[LSTM1] S. Hochreiter, J. Schmidhuber. Long Short-Term Memory. Neural Computation, 9(8):1735-1780, 1997. Based on [LSTM0]. More.
[LSTMGRU] GRUs are actually a variant of the vanilla LSTM architecture[LSTM2] (2000), which the authors did not cite, although this work[LSTM2] was the one that introduced gated recurrent units. Furthermore, unlike LSTM, GRUs can neither learn to count[LSTMGRU2] nor learn simple non-regular languages;[LSTMGRU2] they also underperform according to Google Brain.[LSTMGRU3]
[LSTMGRU2] Preprint arXiv:1805.04908.
[LSTMGRU3] Massive Exploration of Neural Machine Translation Architectures. Preprint arXiv:1703.03906.
[S20] A misleading "history of deep learning" goes more or less like this: "In 1969, Minsky & Papert[M69] showed the limitations of shallow NNs, and researchers took a fresh look at the problem in the 1980s."[S20] However, the 1969 book[M69] addressed a "problem" of the shallow learning of Gauss & Legendre that had already been overcome by the deep learning of Ivakhnenko & Lapa, and then also by Amari, in the 1960s-70s, especially outside of the Anglosphere.[DEEP1-2][GD1-3][CNN1][DL1-2][T22]
[MLP1] Neural Computation 22(12): 3207-3220, 2010. ArXiv preprint. By 2010, when compute was 100 times more expensive than today, deep feedforward NNs[MLP1] trained on GPUs already set records.
[NAS] Preprint arXiv:1611.01578, 2017. Compare the earlier Neural Architecture Search of Bayer et al. (2009) for LSTM-like topologies.[LSTM7]
Correspondence, Nature, vol 483, p 541, March 2012, doi:10.1038/483541b. Letter, Science, vol 336, p 1639, June 2012. See also comment on response by A. Hodges (DOI:10.1126/science.336.6089.1639-a).
[NASC6a] J. Schmidhuber. Comment on "Biography: The ABC of computing" by J. Gilbey, Nature 468, p 760-761 (2010).
[NDR] The Neural Data Router: Adaptive Control Flow in Transformers Improves Systematic Generalization. Proc. ICLR 2022. Preprint arXiv/2110.07732.
An excellent 1995 neural probabilistic text model;[SNT] see also Nakamura and Shikano's earlier work. Much later this was called a probabilistic language model.[T22]
Zuse applied the first high-level programming language[BAU][KNU] to theorem proving.[ZU48] NY Times article.
[OAI1] Learning Dexterous In-Hand Manipulation. arXiv:1808.03578, 2018.
[OAI2] arxiv:1912.06680.
[OBJ1-5] Preprints arXiv/1606.06724, arXiv/1708.03498, arXiv/1802.10353, arXiv/2010.03635, arXiv/2011.12930.
Based on TR FKI-126-90 (1990).[AC90] Partially based on TR FKI-126-90 (1990).[AC90] Report arXiv:1210.0118 [cs.AI], 2015.
[ONE] J. Schmidhuber. One Big Net For Everything. Preprint arXiv:1802.08864 [cs.AI], Feb 2018.
Preprint: arXiv:1809.01999. Github: World Models.
[PM1] J. Schmidhuber. Learning factorial codes by predictability minimization. TR CU-CS-565-91, Univ. Colorado at Boulder, 1991.
[PP] arXiv:1112.5309 [cs.AI]. [PP1] First Experiments with PowerPlay. arXiv:1210.8385 [cs.AI].
[R1] Reddit/ML, 2019. Hinton, LeCun, Bengio receive ACM Turing Award. This announcement contains more comments about Schmidhuber than about any of the awardees.
[R2] Reddit/ML, 2019. J. Schmidhuber really had GANs in 1990.
[R3] Reddit/ML, 2019. NeurIPS 2019 Bengio Schmidhuber Meta-Learning Fiasco. (Schmidhuber published on metalearning in 1987,[META1][META] long before Bengio.)
[R4] Reddit/ML, 2019. Five major deep learning papers by G. Hinton did not cite similar earlier work by J. Schmidhuber.
[R5] Reddit/ML, 2019. The 1997 LSTM paper by Hochreiter & Schmidhuber has become the most cited deep learning research paper of the 20th century.
[R6] Reddit/ML, 2019. DanNet, the CUDA CNN of Dan Ciresan in J. Schmidhuber's team.
[R7] Reddit/ML, 2019. J. Schmidhuber on Seppo Linnainmaa, inventor of backpropagation in 1970.
[R8] Reddit/ML, 2019. J. Schmidhuber on Alexey Ivakhnenko, godfather of deep learning 1965.
[R9] Reddit/ML, 2019.
[R11] Reddit/ML, 2020. Schmidhuber: Critique of Honda Prize for Dr. Hinton.
[R12] Reddit/ML, 2020. J. Schmidhuber: Critique of Turing Award for Drs. Bengio & Hinton & LeCun.
[R15] Reddit/ML, 2021.
Although these MLPs did not yet have deep learning (because only the last layer learned),[DL1] Rosenblatt basically had what much later was rebranded as Extreme Learning Machines (ELMs) without proper attribution.[ELM1-2][CONN21][T22]
[RCNN1] Preprint arXiv/1311.2524, Nov 2013. [RCNN3] Preprint arXiv/1703.06870, 2017.
[RPG] The first paper on policy gradients for LSTM. This approach has become very important in reinforcement learning.[LSTMPG]
This survey also failed to cite the first working algorithms for deep learning of internal representations (Ivakhnenko & Lapa, 1965)[DEEP1-2][HIN] as well as Amari's work; even later surveys by the authors[DL3,3a] failed to cite the prior art.[T22]
The Past, Present and Future of Artificial Intelligence.
[T19] ACM's justification of the 2018 Turing Award. [T22] debunks this justification.
[T22] J. Schmidhuber. Scientific Integrity and the History of Deep Learning: The 2021 Turing Lecture, and the 2018 Turing Award. Technical Report, IDSIA, 2022. Debunking [T19] and [DL3a]. See tweet of 2022 for 30-year anniversary of the 1991 publication.
The Turing Test. YouTube video, 2022.
[UDRL1] Preprint arXiv/1912.02875, 5 Dec 2019. [UDRL2] Preprint arXiv/1912.02877, 5 Dec 2019.
[UN0] TR FKI-148-91, TUM, 1991.
[UN1] 1992. Based on TR FKI-148-91, TUM, 1991.[UN0] By 1993, the approach solved problems of depth 1000.[UN2]
[UN2] J. Schmidhuber. Habilitation thesis, TUM, 1993. An experiment with credit assignment across more than 1000 time steps or virtual layers (depth > 1000) can be found here.
[UN4] 2006. It did not cite the much earlier 1991 unsupervised pre-training of stacks of more general recurrent NNs (RNNs),[UN0-3] where each level tries to minimize the description length (or negative log probability) of the data representation in the level below.[HIN][T22][MIR] Such pre-training can greatly facilitate very deep downstream learning.[UN0-3]
[UN5] The comment under reference [UN4] applies here as well.
[VAN2] Results are essentially identical to those of Schmidhuber's student Hochreiter (1991).[VAN1] Later papers cited [VAN2] but not the original work.
[VAN4] Y. Bengio. Neural net language models. Scholarpedia, 3(1):3881, 2008.
Youtube video [see 28:16].
[WU] Preprint arXiv:1609.08144, 2016. Based on LSTM, which it mentions at least 50 times.
[YB20] WWW link (retrieved 15 May 2020). Claims concepts that date back to 1991-93[UN0-2][UN] and 1995.[SNT]
[NEU45] J. von Neumann's 1945 report on the EDVAC architecture.
Weltwoche, Nr. 33.21, 19 August 2021.
Further preprints cited above: arXiv:2212.11279 (Tweet of 2022), arXiv/2207.01570 (4 July 2022, submitted in May 2022), arXiv:cs/0309048 (2003), arXiv:2005.05744 (2020), arXiv:1608.05343 (2016), arXiv:1506.07452, arXiv:1811.12143, arXiv:2003.08165, arXiv:2106.06295 (June 2021), arXiv:2012.14905 [cs.LG] (2020), arXiv:2011.07831 [cs.AI] (2020), arXiv:2202.05780.

    Between May 2011 and September 2012, our fast deep GPU-based CNN called DanNet won four image recognition competitions in a row (15 May 2011, 6 Aug 2011, 1 Mar 2012, 10 Sep 2012)[GPUCNN5] and greatly improved steel defect detection.[ST] Five months after the CVPR paper on DanNet,[GPUCNN3] the similar GPU-accelerated AlexNet won the ImageNet[IM09] 2012 contest.[GPUCNN4-5][R6] Our CNN image scanners were 1000 times faster than previous methods.[SCAN] The VGG network (ImageNet 2014 winner)[GPUCNN9] and other highly cited CNNs[RCNN1-3] built on this work.

    ResNet, the ImageNet 2015 winner[HW2] (Dec 2015) and currently the most cited NN of the 21st century,[MOST] is a version of our earlier Highway Net. NNs with rapidly changing "fast weights" were introduced by v.d. Malsburg (1981) and others.[FAST,a,b] Deep learning architectures that can manipulate structured data such as graphs[T22] were pioneered by Baldi and colleagues.[BA96-03] Today, graph NNs are used in numerous applications.

    Werbos,[BP2][BPTT1] Williams,[BPTT2][CUB0-2] and others[ROB87][BPTT3][DL1] analyzed ways of implementing gradient descent in recurrent NNs. Compare early alternatives in which networks adjust networks,[BB2][NAN1-4][NHE][HEL] and the recent renewed interest in such methods.[NAN5][FWPMETA6][HIN22] A version of the old technique of randomly omitting units became popular under the moniker "dropout."[Drop1-4][GPUCNN4] In 1990, two dueling NNs (a probabilistic generator and a predictor) were set against each other (using stochastic units[AC90] like in the much later StyleGANs[GAN2]): the predictor NN minimizes its error, while the generator NN tries to make outputs that maximize this error. One net's loss is the other net's gain.
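    A toy sketch of this adversarial principle follows, written by me in Python/PyTorch for illustration (it is not the original 1990 code, and the environment mapping is a made-up stand-in): the predictor is trained to minimize its prediction error on the generator's outputs, while the generator is trained on the same error with the opposite sign.

```python
# Two dueling nets: a generator driven by random (stochastic) inputs and a
# predictor that models an unknown environment response. Zero-sum training:
# the predictor minimizes the error that the generator tries to maximize.
import torch
import torch.nn as nn

def env_response(x):                     # stand-in for the unknown environment
    return torch.sin(3.0 * x)

gen  = nn.Sequential(nn.Linear(8, 32), nn.Tanh(), nn.Linear(32, 1))
pred = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
opt_g = torch.optim.Adam(gen.parameters(),  lr=1e-3)
opt_p = torch.optim.Adam(pred.parameters(), lr=1e-3)

for step in range(1000):
    z = torch.randn(64, 8)               # stochastic units driving the generator
    # predictor update: minimize prediction error on the generator's outputs
    x = gen(z).detach()
    err_p = (pred(x) - env_response(x)).pow(2).mean()
    opt_p.zero_grad(); err_p.backward(); opt_p.step()
    # generator update: maximize the (recomputed) prediction error
    x = gen(z)
    err_g = (pred(x) - env_response(x)).pow(2).mean()
    opt_g.zero_grad(); (-err_g).backward(); opt_g.step()
```

    In the curiosity setting the environment is generally not differentiable; the toy above sidesteps this by using a differentiable stand-in, which is enough to show the opposite-sign training of the two nets.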

    4 years before a 2014 paper on GANs,[GAN1] my well-known 2010 survey[AC10] summarised the generative adversarial NNs of 1990 as follows: a "neural network as a predictive world model is used to maximize the controller's intrinsic reward, which is proportional to the model's prediction errors." The earlier adversarial machine learning settings[S59][H90] neither involved unsupervised NNs nor were about modeling data nor used gradient descent.[AC20] This adversarial principle has been widely used for exploration in Reinforcement Learning[SIN5][OUD13][PAT17][BUR18] and for synthesis of realistic images,[GAN1,2] although the latter domain was recently taken over by the latent diffusion models of Rombach et al. Learning to decompose complex data streams into compact, abstract representations suitable for planning is now considered a remaining grand challenge.[LEC] The early 1990s, however, saw first exceptions: NNs that learn to decompose complex spatio-temporal observation sequences into compact but meaningful chunks[UN0-3] (see further below), and NN-based planners of hierarchical action sequences for compositional learning,[HRL0] as discussed next. This work injected concepts of traditional "symbolic" hierarchical AI[NS59][FU77] into end-to-end differentiable "sub-symbolic" NNs: in 1990, I introduced end-to-end differentiable NN-based subgoal generators for Hierarchical Reinforcement Learning (HRL).[HRL0] Soon afterwards, this was also done with recurrent NNs,[HRL1-2] addressing what was much later called an "open problem."[LEC]

    Compare other NNs that have "worked on command" since April 1990, in particular, for learning selective attention,[ATT0-3] artificial curiosity and self-invented problems,[PP][PPa,1,2][AC] upside-down reinforcement learning[UDRL1-2] and its generalizations.[GGP] Recently, Transformers[TR1] have been all the rage, e.g., for generating human-sounding texts.[GPT3] Transformers with linearized self-attention were first published in March 1991.[FWP0-1][FWP6][FWP] These so-called "Fast Weight Programmers" or "Fast Weight Controllers"[FWP0-1] separated storage and control like in traditional computers, but in an end-to-end-differentiable, adaptive, fully neural way (rather than in a hybrid fashion[PDA1-2][DNC]). The "self-attention" in standard Transformers[TR1-4] combines this with a projection and softmax.
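    The core mechanism fits in a few lines of Python/NumPy. The sketch below is schematic (dimensions and weights are made up, and it omits the slow net's learning): keys and values program a fast weight matrix through additive outer products, and queries read it out, which is unnormalized linear self-attention.

```python
# Fast Weight Programmer / linear self-attention, schematically:
# a slow net maps each input to a key, a value, and a query; the outer
# product of value and key is ADDED to the fast weight matrix F, which
# is then queried. No softmax: this is the linearized variant.
import numpy as np

d_key, d_val, d_in, T = 16, 16, 32, 10
rng = np.random.default_rng(0)
W_k = rng.standard_normal((d_key, d_in))   # slow weights producing keys
W_v = rng.standard_normal((d_val, d_in))   # slow weights producing values
W_q = rng.standard_normal((d_key, d_in))   # slow weights producing queries

F = np.zeros((d_val, d_key))               # fast weights: short-term memory
for t in range(T):
    x = rng.standard_normal(d_in)          # input at time step t
    k, v, q = W_k @ x, W_v @ x, W_q @ x
    F += np.outer(v, k)                    # program the fast net
    y = F @ q                              # query it: attention-style readout
```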

    Today's most powerful NNs tend to be very deep, with many layers of neurons or many subsequent computational stages.[MIR] In the 1980s, however, gradient-based training worked only for rather shallow ones[DL1-2] (but see a 1989 paper[MOZ]). The problem is most obvious for recurrent NNs, which can be unfolded into feedforward NNs of arbitrary depth.[DL1] How can NNs learn at multiple time scales?[LEC] My first answer was the Neural Sequence Chunker[UN0] or Neural History Compressor,[UN3] which uses unsupervised learning and predictive coding to compress observation sequences hierarchically, mastering "very deep learning" tasks of depth > 1000[UN2] (requiring more than 1,000 subsequent computational stages). (See also recent work on unsupervised NN-based abstraction.[OBJ1-5]) More than a decade after this work,[UN1] a similar stack-based method was published: the Deep Belief Networks (DBNs),[UN4] where each level tries to minimize the description length (or negative log probability) of the data representation in the level below.[HIN][T22][MIR] The chunker's NN distillation procedure of 1991 was also republished many years later,[DIST2][MIR][HIN][T22] and is widely used today; distillation is also used by Transformers,[TR1-6] together with unsupervised/self-supervised pre-training for deep learning.[UN0-3] See the previous section.

    In 1991, Sepp Hochreiter analysed the fundamental deep learning problem in his diploma thesis, which I had the pleasure to supervise.[VAN1] First he implemented the Neural History Compressor above, but then did much more: he showed that deep NNs suffer from vanishing or exploding gradients, and that in both cases learning fails (compare[VAN2]). I consider his diploma thesis[VAN1] one of the most important documents in the history of machine learning. It also provided essential insights for overcoming the problem, through basic principles (such as constant error flow) of what we called LSTM in a tech report of 1995.[LSTM0] After the main peer-reviewed publication in 1997[LSTM1][25y97] (now the most cited NN article of the 20th century[MOST]), LSTM and its training procedures kept improving: the first application of LSTM to speech came in 2004.[LSTM10] 2005 saw the first publication of LSTM with full backpropagation through time and of bi-directional LSTM[LSTM3] (now widely used). Another milestone of 2006 was the training method "Connectionist Temporal Classification" or CTC[CTC] for simultaneous alignment and recognition of sequences. Our team successfully applied CTC-trained LSTM to speech in 2007[LSTM4] (also with hierarchical LSTM stacks[LSTM14]), outperforming hybrids of NNs and traditional approaches such as Hidden Markov Models (HMMs).[BW][BRI][BOU][HYB12][T22] LSTM was soon used for everything that involves sequential data, such as speech[LSTM10-11][LSTM4][DL1] and videos. In 2015, CTC-LSTM dramatically improved Google's speech recognition,[GSR][GSR15][DL4] and many other companies adopted this.[DL4] Google's on-device speech recognition of 2019 (now on your phone, not on the server) is still based on LSTM. In 1995, we already had an excellent neural probabilistic text model[SNT] (compare the earlier word category prediction model of Nakamura and Shikano). In 2001, we showed that LSTM can learn languages unlearnable by traditional models such as HMMs.[LSTM13] By 2017, Facebook was performing over 4 billion LSTM-based automatic translations per day,[FB17][DL4] while even the most popular web services of the time would achieve only around 10 billion clicks per day. LSTM also powered Apple's products, image caption generation[DL4] and automatic email answering,[DL4] etc. Business Week called LSTM "arguably the most commercial AI achievement."[AV1] Many of the most cited AI papers of recent years have "LSTM" in their title.[DEC] The LSTM principle also enabled the first working very deep feedforward NNs with hundreds of layers, the Highway Networks of May 2015[HW1] (previous NNs had at most a few tens of layers).
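    The relationship between highway and residual layers is easy to state in code. Below is a minimal Python/PyTorch sketch of my own (hypothetical layer sizes, not the original implementations): setting both gates to the constant 1 turns the gated highway update into the plain residual update x + H(x).

```python
# Highway layer vs. residual layer, minimally.
import torch
import torch.nn as nn

class HighwayLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.H = nn.Sequential(nn.Linear(dim, dim), nn.Tanh())  # candidate transform H(x)
        self.T = nn.Linear(dim, dim)  # transform gate t(x)
        self.G = nn.Linear(dim, dim)  # carry gate g(x)

    def forward(self, x):
        t = torch.sigmoid(self.T(x))          # data-dependent gates in (0, 1)
        g = torch.sigmoid(self.G(x))
        return t * self.H(x) + g * x          # gated mix of transform and carry

class ResidualLayer(nn.Module):
    """The special case with both gates always open: g(x) = t(x) = const = 1."""
    def __init__(self, dim):
        super().__init__()
        self.H = nn.Sequential(nn.Linear(dim, dim), nn.Tanh())

    def forward(self, x):
        return self.H(x) + x                  # plain residual connection
```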
    Microsoft's ResNet,[HW2] a version of the Highway Net, won the ImageNet 2015 contest. The earlier Highway Nets perform roughly as well as their ResNet versions on ImageNet.[HW3] Variants of highway gates are also used for certain algorithmic tasks where the pure residual layers do not work as well.[NDR] Deep learning is all about NN depth,[DL1] and the open-gated Highway Net version called ResNet became the most cited NN of the 21st century.[MOST] (Citations, however, are a highly questionable measure of true impact.[NAT1])

    Reinforcement Learning (RL)[KAE96][BER96][TD3][UNI][GM3][LSTMPG] is the most general type of learning: learning to maximize expected cumulative reward signals.[DL1] Many problems can be formulated in the general RL framework.[UNI] Important earlier foundations include Monte Carlo (tree) search (MC, 1949),[MOC1-5] dynamic programming (DP, 1953),[BEL53] artificial evolution (1954),[EVO1-7]([TUR1], unpublished) alpha-beta-pruning (1959),[S59] control theory and system identification (1950s),[KAL59][GLA85] stochastic gradient descent (SGD, 1951),[STO51-52] and universal search techniques (1973).[AIT7] Certain RL problems can be addressed through NN-based system identification,[WER87-89][MUN87][NGU89] DP and its online variant called Temporal Differences (TD),[TD1-3] artificial evolution,[EVONN1-3] and policy gradients.[GD1][PG1-3] Many additional references on this can be found in Sec. 6 of the 2015 survey.[DL1]


Scientific%20Integrity%20and%20the%20History%20of%20Deep%20Learning%20The%202021%20Turing%20Lecture,%20and%20the%202018%20Turing%20Award_files/critique-turing-award-lecture754x288.png Scientific%20Integrity%20and%20the%20History%20of%20Deep%20Learning%20The%202021%20Turing%20Lecture,%20and%20the%202018%20Turing%20Award_files/critique-turing754x110.png Scientific%20Integrity%20and%20the%20History%20of%20Deep%20Learning%20The%202021%20Turing%20Lecture,%20and%20the%202018%20Turing%20Award_files/critique-turing754x110.png Scientific%20Integrity%20and%20the%20History%20of%20Deep%20Learning%20The%202021%20Turing%20Lecture,%20and%20the%202018%20Turing%20Award_files/critique-turing754x110.png Scientific%20Integrity%20and%20the%20History%20of%20Deep%20Learning%20The%202021%20Turing%20Lecture,%20and%20the%202018%20Turing%20Award_files/critique-turing754x110.png Scientific%20Integrity%20and%20the%20History%20of%20Deep%20Learning%20The%202021%20Turing%20Lecture,%20and%20the%202018%20Turing%20Award_files/critique-turing754x110.png Scientific%20Integrity%20and%20the%20History%20of%20Deep%20Learning%20The%202021%20Turing%20Lecture,%20and%20the%202018%20Turing%20Award_files/critique-turing754x110.png Scientific%20Integrity%20and%20the%20History%20of%20Deep%20Learning%20The%202021%20Turing%20Lecture,%20and%20the%202018%20Turing%20Award_files/critique-turing754x110.png Scientific%20Integrity%20and%20the%20History%20of%20Deep%20Learning%20The%202021%20Turing%20Lecture,%20and%20the%202018%20Turing%20Award_files/critique-turing754x110.png Scientific%20Integrity%20and%20the%20History%20of%20Deep%20Learning%20The%202021%20Turing%20Lecture,%20and%20the%202018%20Turing%20Award_files/critique-turing754x110.png Scientific%20Integrity%20and%20the%20History%20of%20Deep%20Learning%20The%202021%20Turing%20Lecture,%20and%20the%202018%20Turing%20Award_files/critique-turing754x110.png Scientific%20Integrity%20and%20the%20History%20of%20Deep%20Learning%20The%202021%20Turing%20Lecture,%20and%20the%202018%20Turing%20Award_files/critique-turing754x110.png Scientific%20Integrity%20and%20the%20History%20of%20Deep%20Learning%20The%202021%20Turing%20Lecture,%20and%20the%202018%20Turing%20Award_files/critique-turing754x110.png Scientific%20Integrity%20and%20the%20History%20of%20Deep%20Learning%20The%202021%20Turing%20Lecture,%20and%20the%202018%20Turing%20Award_files/critique-turing754x110.png Scientific%20Integrity%20and%20the%20History%20of%20Deep%20Learning%20The%202021%20Turing%20Lecture,%20and%20the%202018%20Turing%20Award_files/critique-turing754x110.png Scientific%20Integrity%20and%20the%20History%20of%20Deep%20Learning%20The%202021%20Turing%20Lecture,%20and%20the%202018%20Turing%20Award_files/critique-turing754x110.png Scientific%20Integrity%20and%20the%20History%20of%20Deep%20Learning%20The%202021%20Turing%20Lecture,%20and%20the%202018%20Turing%20Award_files/critique-turing754x110.png Scientific%20Integrity%20and%20the%20History%20of%20Deep%20Learning%20The%202021%20Turing%20Lecture,%20and%20the%202018%20Turing%20Award_files/critique-turing754x110.png Scientific%20Integrity%20and%20the%20History%20of%20Deep%20Learning%20The%202021%20Turing%20Lecture,%20and%20the%202018%20Turing%20Award_files/critique-turing754x110.png Scientific%20Integrity%20and%20the%20History%20of%20Deep%20Learning%20The%202021%20Turing%20Lecture,%20and%20the%202018%20Turing%20Award_files/critique-turing754x110.png 
Scientific%20Integrity%20and%20the%20History%20of%20Deep%20Learning%20The%202021%20Turing%20Lecture,%20and%20the%202018%20Turing%20Award_files/critique-turing754x110.png Scientific%20Integrity%20and%20the%20History%20of%20Deep%20Learning%20The%202021%20Turing%20Lecture,%20and%20the%202018%20Turing%20Award_files/critique-turing754x110.png Scientific%20Integrity%20and%20the%20History%20of%20Deep%20Learning%20The%202021%20Turing%20Lecture,%20and%20the%202018%20Turing%20Award_files/88x31.png Scientific%20Integrity%20and%20the%20History%20of%20Deep%20Learning%20The%202021%20Turing%20Lecture,%20and%20the%202018%20Turing%20Award_files/miraculous-year754x466.gif Schmidhuber%2024Sep2021%20Scientific%20Integrity,%20the%202021%20Turing%20Lecture,%20and%20the%202018%20Turing%20Award%20for%20Deep%20Learning_files/critique-turing-award-lecture754x288.png Schmidhuber%2024Sep2021%20Scientific%20Integrity,%20the%202021%20Turing%20Lecture,%20and%20the%202018%20Turing%20Award%20for%20Deep%20Learning_files/critique-turing754x110.png Schmidhuber%2024Sep2021%20Scientific%20Integrity,%20the%202021%20Turing%20Lecture,%20and%20the%202018%20Turing%20Award%20for%20Deep%20Learning_files/critique-turing754x110.png Schmidhuber%2024Sep2021%20Scientific%20Integrity,%20the%202021%20Turing%20Lecture,%20and%20the%202018%20Turing%20Award%20for%20Deep%20Learning_files/critique-turing754x110.png Schmidhuber%2024Sep2021%20Scientific%20Integrity,%20the%202021%20Turing%20Lecture,%20and%20the%202018%20Turing%20Award%20for%20Deep%20Learning_files/critique-turing754x110.png Schmidhuber%2024Sep2021%20Scientific%20Integrity,%20the%202021%20Turing%20Lecture,%20and%20the%202018%20Turing%20Award%20for%20Deep%20Learning_files/critique-turing754x110.png Schmidhuber%2024Sep2021%20Scientific%20Integrity,%20the%202021%20Turing%20Lecture,%20and%20the%202018%20Turing%20Award%20for%20Deep%20Learning_files/critique-turing754x110.png Schmidhuber%2024Sep2021%20Scientific%20Integrity,%20the%202021%20Turing%20Lecture,%20and%20the%202018%20Turing%20Award%20for%20Deep%20Learning_files/critique-turing754x110.png Schmidhuber%2024Sep2021%20Scientific%20Integrity,%20the%202021%20Turing%20Lecture,%20and%20the%202018%20Turing%20Award%20for%20Deep%20Learning_files/critique-turing754x110.png Schmidhuber%2024Sep2021%20Scientific%20Integrity,%20the%202021%20Turing%20Lecture,%20and%20the%202018%20Turing%20Award%20for%20Deep%20Learning_files/critique-turing754x110.png Schmidhuber%2024Sep2021%20Scientific%20Integrity,%20the%202021%20Turing%20Lecture,%20and%20the%202018%20Turing%20Award%20for%20Deep%20Learning_files/critique-turing754x110.png Schmidhuber%2024Sep2021%20Scientific%20Integrity,%20the%202021%20Turing%20Lecture,%20and%20the%202018%20Turing%20Award%20for%20Deep%20Learning_files/critique-turing754x110.png Schmidhuber%2024Sep2021%20Scientific%20Integrity,%20the%202021%20Turing%20Lecture,%20and%20the%202018%20Turing%20Award%20for%20Deep%20Learning_files/critique-turing754x110.png Schmidhuber%2024Sep2021%20Scientific%20Integrity,%20the%202021%20Turing%20Lecture,%20and%20the%202018%20Turing%20Award%20for%20Deep%20Learning_files/critique-turing754x110.png Schmidhuber%2024Sep2021%20Scientific%20Integrity,%20the%202021%20Turing%20Lecture,%20and%20the%202018%20Turing%20Award%20for%20Deep%20Learning_files/critique-turing754x110.png Schmidhuber%2024Sep2021%20Scientific%20Integrity,%20the%202021%20Turing%20Lecture,%20and%20the%202018%20Turing%20Award%20for%20Deep%20Learning_files/critique-turing754x110.png 
Schmidhuber%2024Sep2021%20Scientific%20Integrity,%20the%202021%20Turing%20Lecture,%20and%20the%202018%20Turing%20Award%20for%20Deep%20Learning_files/critique-turing754x110.png Schmidhuber%2024Sep2021%20Scientific%20Integrity,%20the%202021%20Turing%20Lecture,%20and%20the%202018%20Turing%20Award%20for%20Deep%20Learning_files/critique-turing754x110.png Schmidhuber%2024Sep2021%20Scientific%20Integrity,%20the%202021%20Turing%20Lecture,%20and%20the%202018%20Turing%20Award%20for%20Deep%20Learning_files/critique-turing754x110.png Schmidhuber%2024Sep2021%20Scientific%20Integrity,%20the%202021%20Turing%20Lecture,%20and%20the%202018%20Turing%20Award%20for%20Deep%20Learning_files/critique-turing754x110.png Schmidhuber%2024Sep2021%20Scientific%20Integrity,%20the%202021%20Turing%20Lecture,%20and%20the%202018%20Turing%20Award%20for%20Deep%20Learning_files/critique-turing754x110.png Schmidhuber%2024Sep2021%20Scientific%20Integrity,%20the%202021%20Turing%20Lecture,%20and%20the%202018%20Turing%20Award%20for%20Deep%20Learning_files/critique-turing754x110.png Schmidhuber%2024Sep2021%20Scientific%20Integrity,%20the%202021%20Turing%20Lecture,%20and%20the%202018%20Turing%20Award%20for%20Deep%20Learning_files/88x31.png Schmidhuber%2024Sep2021%20Scientific%20Integrity,%20the%202021%20Turing%20Lecture,%20and%20the%202018%20Turing%20Award%20for%20Deep%20Learning_files/miraculous-year754x466.gif 230604%20KEEP%20survey%20ChatGPT%20and%20AI%20Usage%20(Students)_files/lazy.min.js 230604%20KEEP%20survey%20ChatGPT%20and%20AI%20Usage%20(Teachers)_files/lazy.min.js ../Copyright ending.html b64.png /images/icons/file.gif /images/icons/valid-xhtml10.png /homepage/2010b/scripts/common.js /homepage/2010/styles/images/abc_logo2.png /homepage/2010/images/enterABC/abc.jpg /homepage/2010/images/enterABC/conversation.jpg /homepage/2010/images/enterABC/something_new.jpg /homepage/2010/images/enterABC/catch-up_tv.jpg /homepage/2010/images/enterABC/conversation.jpg /homepage/2010/images/enterABC/something_new.jpg /homepage/2010/images/enterABC/catch-up_tv.jpg /homepage/2010/images/enterABC/conversation.jpg ../Copyright ending.html Executive Intelligence Review  
    Notation used in these reader's notes for (Grossberg 2021):
    p370 Chapter 11 means (Grossberg 2021) page 370, Chapter 11
    p002sec Illusion and reality means (Grossberg 2021) page 2, section "Illusion and reality"
    p013fig01.09 means (Grossberg 2021) page 13, Figure 1.09 (1.9 as in book)
    p030tbl01.02 means (Grossberg 2021) page 30, Table 1.02 (1.2 as in book)
    p111c2h0.5 means (Grossberg 2021) page 111, column 2, height from top as a fraction of page height
    || text... are notes in addition to [figure, table] captions, mostly composed of text within the image, but also including quotes of text in the book. Rarely, they include comments by Howell, preceded by "Howell:". The latter are distinct from "reader's notes" (see, for example: reader Howell notes).
    p044 Howell: grepStr
  • p00i Preface - Biological intelligence in sickness, health, and technology
  • p001 Chapter 1 Overview - From Complementary Computing and Adaptive Resonance to conscious awareness
  • p050 Chapter 2 How a brain makes a mind - Physics and psychology split as brain theories were born
  • p086 Chapter 3 How a brain sees: Constructing reality - Visual reality as illusions that explain how we see art
  • p122 Chapter 4 How a brain sees: Neural mechanisms - From boundary completion and surface filling-in to figure-ground perception
  • p184 Chapter 5 Learning to attend, recognize, and predict the world -
  • p250 Chapter 6 Conscious seeing and invariant recognition - Complementary cortical streams coordinate attention for seeing and recognition
  • p280 Chapter 7 How do we see a changing world? - How vision regulates object and scene persistence
  • p289 Chapter 8 How we see and recognize object motion - Visual form and motion perception obey complementary laws
  • p337 Chapter 9 Target tracking, navigation, and decision-making - Visual tracking and navigation obey complementary laws
  • p353 Chapter 10 Laminar computing by cerebral cortex - Towards a unified theory of biological and artificial intelligence
  • p370 Chapter 11 How we see the world in depth - From 3D vision to how 2D pictures induce 3D percepts
  • p404 Chapter 12 From seeing and reaching to hearing and speaking - Circular reaction, streaming, working memory, chunking, and number
  • p480 Chapter 13 From knowing to feeling - How emotion regulates motivation, attention, decision, and action
  • p517 Chapter 14 How prefrontal cortex works - Cognitive working memory, planning, and emotion conjointly achieve valued goals
  • p539 Chapter 15 Adaptively timed learning - How timed motivation regulates conscious learning and memory consolidation
  • p572 Chapter 16 Learning maps to navigate space - From grid, place, and time cells to autonomous mobile agents
  • p618 Chapter 17 A universal development code - Mental measurements embody universal laws of cell biology and physics
  • image pxvifig00.01 Macrocircuit of the visual system
  • image p002fig01.01 The difference between seeing and recognizing.
    || (W. Epstein, R. Gregory, H. von Helmholtz, G. Kanizsa, P. Kellman, A. Michotte...) Seeing an object vs knowing what it is. Seeing the Ehrenstein illusion (see, recognize) vs recognizing an offset grating (do not see, recognize). Offset grating: some boundaries are invisible or amodal.
  • image p002fig01.02 Dalmatian in snow
    || p002c2h0.55 "...This image reminds us that invisible boundaries can sometimes be very useful in helping us to recognize visual objects in the world. ... When we first look at this picture, it may just look like an array of black splotches of different sizes, densities, and orientations across the picture. Gradually, however, we can recognize the Dalmatian in it as new boundaries form in our brain between the black splotches. ..."
  • image p003fig01.03 Amodal completion
    || p003c1h0.75 "... Figure 1.3 illustrates what I mean by the claim that percepts derived from pictures are often illusions. Figure 1.3 (left column) shows three rectangular shapes that abut one another. Our percept of this image irresistibly creates a different interpretation, however. We perceive a horizontal bar lying in front of a partially occluded vertical bar that is amodally completed behind it. ..."
  • image p004fig01.04 (top row) Kanizsa stratification; (bottom row) transparency images
    || [top row images] "... are called stratification percepts... This simple percept can ... be perceived either as a white cross in front of a white outline square, or as a white outline square in front of a white cross. The former percept usually occurs, but the percept can intermittently switch between these two interpretations. ...it is said to be a bistable percept. ..."
  • image p008fig01.05 Noise-saturation dilemma.
    || cell activity vs cell number; [minimum, equilibrium, current, maximal] activity
  • image p009fig01.06 Primacy gradient of activity stored in working memory within a recurrent shunting on-center off-surround network. Rehearsal is controlled by a nonspecific rehearsal wave and self-inhibitory feedback of the item that is currently being rehearsed. Green = excitatory, red = inhibitory.
    || inputs? -> item and order WM storage -> competitive selection -> rehearsal wave -> outputs
  • image p011fig01.07 The choice of signal function f determines how an initial activity pattern will be transformed and stored in short-term memory (STM). Among [same, slower, faster]-than-linear signal functions, only the last one can suppress noise. It does so as it chooses the population that receives the largest input for storage, while suppressing the activities of all other populations, thereby giving rise to a winner-take-all choice.
    || initial pattern (xi(0) vs i); relative stored activity Xi(∞) = xi(∞) / sum[j: xj(∞)]
    signal f            transform of pattern                                              effect on noise
    linear              perfect storage of any pattern                                    amplifies noise (or no storage)
    slower-than-linear  saturates                                                         amplifies noise
    faster-than-linear  chooses max [winner-take-all, Bayesian], categorical perception   suppresses noise, [normalizes, quantizes] total activity, finite state machine
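    This classification can be checked numerically. A minimal sketch (my own illustration, not from the book; the parameters A = 0.2, B = 1 and the particular signal functions are assumptions) of a recurrent shunting on-center off-surround network d[dt: xi] = -A*xi + (B - xi)*f(xi) - xi*sum[k≠i: f(xk)], in Python:

      import numpy as np

      def stored_pattern(f, x0, A=0.2, B=1.0, dt=0.01, steps=20000):
          """Integrate d[dt: xi] = -A*xi + (B - xi)*f(xi) - xi*sum[k!=i: f(xk)]."""
          x = x0.copy()
          for _ in range(steps):
              fx = f(x)
              x += dt * (-A * x + (B - x) * fx - x * (fx.sum() - fx))
          return x / max(x.sum(), 1e-12)          # relative stored activities Xi

      x0 = np.array([0.20, 0.25, 0.30, 0.35])     # initial activity pattern xi(0)
      for name, f in [("linear",             lambda w: w),
                      ("slower-than-linear", lambda w: w / (0.5 + w)),
                      ("faster-than-linear", lambda w: w ** 2),
                      ("sigmoid",            lambda w: w ** 2 / (0.25 + w ** 2))]:
          print(f"{name:>18}: {np.round(stored_pattern(f, x0), 3)}")
      # linear roughly preserves the input ratios; slower-than-linear flattens
      # (saturates) the pattern; faster-than-linear converges to winner-take-all;
      # the sigmoid contrast-enhances and quenches activities below threshold.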
  • image p012fig01.08 A sigmoidal signal function is a hybrid signal that combines the best properties of [faster, same, slower]-than-linear signals. It can suppress noise and store a partially contrast-enhanced activity pattern. Slower-than-linear saturates the pattern; approximately linear preserves the pattern and normalizes; faster-than-linear gives noise suppression and contrast enhancement.
    || Sigmoidal signal: a hybrid. (upper) saturates pattern- slower-than-linear; (middle) preserves pattern and normalizes- approximately linear. (lower) noise suppression and contrast enhancement- faster-than-linear.
  • image p013fig01.09 A sigmoid signal function generates a quenching threshold below which cell activities are treated like noise and suppressed. Activities that are larger than the quenching threshold are contrast enhanced and stored in short-term memory.
    || Quenching threshold: xi(0) vs i; Xi(∞) = xi(∞) / sum[j: xj(∞)]
    sigmoid: a tunable filter that
    • stores infinitely many contrast-enhanced patterns
    • suppresses noise
  • image p016fig01.10 The blocking paradigm shows how sensory cues that are conditioned to predict specific consequences can attentionally block other cues that do not change those predictions. On the other hand, if the total cue context is changed by adding a cue that does not change the predicted consequences, then the new cues can be conditioned to the direction of that change. They can hereby learn, for example, to predict fear if the shock level unexpectedly increases, or relief if the shock level unexpectedly decreases.
    || Minimal adaptive prediction. blocking- CS2 is irrelevant, unblocking- CS2 predicts US change. Learn if CS2 predicts a different (novel) outcome than CS1. CS2 is not redundant.
  • image p016fig01.11 A sufficiently big mismatch between a bottom-up input pattern and a top-down expectation can activate the orienting system, which triggers a burst of nonspecific arousal that can reset the recognition category that read out the expectation. In this way, unexpected events can reset short-term memory and initiate a search for a category that better represents the current situation.
    || [category- top-down (TD) expectation; Bottom-up (BU) input pattern] -> Feature pattern -> BU-TD mismatch -> orienting system -> non-specific arousal -> category.
  • image p018fig01.12 Peak shift and behavioural contrast. When a negative generalization gradient (in red) is subtracted from a positive generalization gradient (in green), the net gradient (in purple) is shifted away from the negative gradient and has a width that is narrower than any of its triggering gradients. Because the total activity of the network tends to be normalized, the renormalized peak of the net gradient is higher than that of the rewarded gradient, thereby illustrating that we can prefer experiences that we have never previously experienced over those for which we have previously been rewarded.
    ||
  • image p019fig01.13 Affective circuits are organized into opponent channels, such as fear vs. relief, and hunger vs. frustration. On a larger scale of affective behaviours, exploration and consummation are also opponent types of behaviour. Exploration helps to discover novel sources of reward. Consummation enables expected rewards to be acted upon. Exploration must be inhibited to enable an animal to maintain attention long enough upon a stationary reward in order to consume it.
    || exploration vs consummation
  • image p023fig01.14 A gated dipole opponent process can generate a transient antagonistic rebound from its OFF channel in response to offset of an input J to its ON channel. Sustained on-response; transient off-response; opponent process; gates; arousal: energy for rebound.
    ||
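    A runnable sketch of a gated dipole's antagonistic rebound (my illustration; the constants and the simplified two-channel form are assumptions, not the book's circuit):

      import numpy as np

      # Tonic arousal I drives both channels; phasic input J drives the ON channel.
      # Habituative transmitter gates y1, y2 deplete with use and slowly recover:
      #     d[dt: y] = H*(K - y) - L*S*y, where S is the signal through the channel.
      I, H, K, L, dt = 0.5, 0.05, 1.0, 0.5, 0.01
      y1 = y2 = K                                  # gates start fully accumulated
      for step in range(int(60.0 / dt)):
          t = step * dt
          J = 1.0 if 10.0 <= t < 30.0 else 0.0     # input pulse on the ON channel
          S1, S2 = I + J, I                        # ON and OFF channel signals
          y1 += dt * (H * (K - y1) - L * S1 * y1)
          y2 += dt * (H * (K - y2) - L * S2 * y2)
          on = max(S1 * y1 - S2 * y2, 0.0)         # opponent (competing) outputs
          off = max(S2 * y2 - S1 * y1, 0.0)
          if step % int(5.0 / dt) == 0:
              print(f"t={t:4.1f}  ON={on:.3f}  OFF={off:.3f}")
      # While J is on, the ON channel gives a sustained but habituating response.
      # At input offset (t = 30) the depleted gate y1 < y2, so the OFF channel
      # transiently wins: the antagonistic rebound, which decays as y1 recovers.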
  • image p024fig01.15 A REcurrent Associative Dipole, or READ, circuit is a recurrent shunting on-center off-surround network with habituative transmitter gates. Sensory cues sample it with LTM traces and thereby become conditioned reinforcers.
    ||
  • image p025fig01.16 (left panel) The main processing stages of the Cognitive-Emotional-Motor (CogEM) model have anatomical interpretations in terms of sensory cortex, amygdala, and prefrontal cortex. Chapter 13 will describe in greater detail how CS cues activate invariant object categories in the sensory cortex, value categories in the amygdala, and object-value categories in the prefrontal cortex, notably the orbitofrontal cortex. The amygdala is also modulated by internal drive inputs like hunger and satiety. (right panel) Anatomical data support this circuit, as do many neurophysiological data.
    || drive -> amygdala -> prefrontal cortex <-> sensory cortex -> amygdala. [visual, somatosensory, auditory, gustatory, olfactory] cortex -> [amygdala, Orbital Prefrontal Cortex]. amygdala -> Lateral Prefrontal Cortex
  • image p025fig01.17 Sensory-drive heterarchy vs. drive hierarchy. How cues and drives interact to choose the drive and motivation that will control behavioral choices.
    || [drive inputs, sensory cue [before, after] cross-over] -> incentive motivation [eat, sex].
  • image p026fig01.18 Inverted U as a function of arousal. A Golden Mean at intermediate levels of arousal generates a combination of behavioral threshold, sensitivity, and activation that can support typical behaviors. Both underarousal and overarousal lead to symptoms that are found in mental disorders.
    || Behavior vs arousal.
    depression:                 under-aroused   over-aroused
    threshold                   elevated        low
    excitable above threshold   Hyper           Hypo
    "UPPER" brings excitability "DOWN".
  • image p027fig01.19 The ventral What stream is devoted to perception and categorization. The dorsal Where stream is devoted to spatial representation and action. The Where stream is also often called the Where/How stream because of its role in the control of action.
    ||
    Spatial representation of action   Perception, categorization
    WHERE dorsal                       WHAT ventral
    Parietal pathway "where"           Temporal pathway "what"
    Posterior Parietal Cortex (PPC)    Inferior Temporal Cortex (IT)
    Lateral Prefrontal Cortex (LPFC)   Lateral Prefrontal Cortex (LPFC)
  • image p029tbl01.01 Some pairs of complementary processing streams.
    ||
    visual boundary:                      visual surface:
      interblob stream V1-V2-V4            blob stream V1-V2-V4
    visual boundary:                      visual motion:
      interblob stream V1-V2-V4            magno stream V1-MT-MST
    WHAT stream                           WHERE stream
    perception & recognition:             space & action:
      inferotemporal & prefrontal areas     parietal & prefrontal areas
    object tracking:                      optic flow navigation:
      MT interbands & MSTv                  MT+ bands & MSTd
    motor target position:                volitional speed:
      motor & parietal cortex               basal ganglia
  • image p030tbl01.02 The What and Where cortical processing streams obey complementary laws. These laws enable the What stream to rapidly and stably learn invariant object categories without experiencing catastrophic forgetting, while the Where stream learns labile spatial and action representations to control actions that are aimed towards these objects.
    ||
    WHAT                                            WHERE
    spatially-invariant object learning             spatially-variant reaching
      and recognition                                 and movement
    fast learning without catastrophic forgetting   continually update sensory-motor maps and gains
    IT InferoTemporal Cortex                        PPC Posterior Parietal Cortex
                What          Where
    matching    excitatory    inhibitory
    learning    match         mismatch
  • image p030fig01.20 A schematic cross-section of a slice of laminar neocortex whose cells are organized in a characteristic way in six layers, which themselves may be organized into distinct sublaminae. The computational paradigm of Laminar Computing attempts to show how different parts of neocortex can represent and control very different kinds of behavior - including vision, speech, and cognition - using specializations of the same canonical laminar cortical design.
    || Projection fibres: Cortico[spinal, bulbar, pontine, striate, reticular, etc]; Thalamocortical fibres; Diffuse cortical afferent fibres: [nonspecific thalamocortical, Cholinergic, Monoaminergic]; Corticocortical efferents; Projection [cell, fibre]; Corticocortical efferent terminals.
  • image p032fig01.21 At least three parallel visual cortical streams respond to visual inputs that reach the retina. Two parvocellular streams process visual surfaces (blob stream) and visual boundaries (interblob stream). The magnocellular stream processes visual motion.
    || [Retina, LGNs, V[1,2,3,4], MT] to [What- inferotemporal areas, Where- parietal areas]: visual parallel streams [2x blob, 1x bound]
  • image p035fig01.22 A classical example of phonemic restoration. The spectrogram of the word "legislatures" is either excised, leaving a silent interval, or filled with broad-band noise. A percept of the restored phoneme is heard when it is replaced by noise, but not by silence.
    || [normal, silence, noise replaced] presentations. frequency (Hz) vs time (sec).
  • image p036fig01.23 As more items are stored in working memory through time, they can select larger chunks with which to represent the longer list of stored items.
    || [x, y, z] -> [xy, xyz]
  • image p037fig01.24 Only three processing stages are needed to learn how to store and categorize sentences with repeated words in working memory. See the text for more discussion.
    || IOR working memory (item chunk-> sequences) <-> IOR masking field: [item->list]<->[list->list] chunks. (<-> signifies <- expectation/attention, adaptive filter ->)
  • image p038fig01.25 The ART Matching Rule stabilizes real time learning using a [top-down, modulatory on-center, off-surround] network. Object attention is realized by such a network. See text for additional discussion.
    || ART Matching Rule [volition, categories, features]. [one, two] against one.
  • image p039tbl01.03 The link between consciousness and movement
    ||
    VISUALseeing, knowing, and reaching
    AUDITORYhearing, knowing, and speaking
    EMOTIONALfeeling, knowing, and acting
  • image p042tbl01.04 The six main kinds of resonances which support different kinds of conscious awareness that will be explained and discussed in this book.
    ||
    type of resonancetype of consciousness
    surface-shroudsee visual object or scene
    feature-categoryrecognize visual object or scene
    stream-shroudhear auditory object or stream
    spectral-pitch-and-timbrerecognize auditory object or stream
    item-listrecognize speech and language
    cognitive-emotionalfeel emotion and know its source
  • image p051fig02.01 Along the boundaries between adjacent shades of gray, lateral inhibition makes the darker areas appear even darker, and the lighter areas appear even lighter. (Ernst Mach bands)
    ||
  • image p052fig02.02 Feature-category resonances enable us to rapidly learn how to recognize objects without experiencing catastrophic forgetting. Attentive matching between bottom-up feature pattern inputs and top-down expectations prevents catastrophic forgetting by focussing object attention upon expected patterns of features, while suppressing outlier features that might otherwise have caused catastrophic forgetting if they were learned also.
    || Adaptive Resonance. Attended feature clusters reactivate bottom-up pathways. Activated categories reactivate their top-down pathways. Categories STM, Feature patterns STM. Feature-Category resonance [synchronize, amplify, prolong]s system response. Resonance triggers learning in bottom-up and top-down adaptive weights: adaptive resonance!
  • image p057fig02.03 Some basic anatomical and physiological properties of individual neurons. See the text for additional discussion.
    ||
    physiology   cell body potential   axonal signal   chemical transmitter
    anatomy      nerve cell body       axon            synaptic knob, synapse
  • image p058fig02.04 Serial learning paradigm: Learning the temporal order of events by practicing them in the order that they occur in time.
    || Learning a global arrow in time. How do we learn to encode the temporal order of events in LTM? serial learning. [w=intra, W=inter]trial interval. "... data about serial verbal learning (Figure 2.4) seemed to suggest that events can go "backwards in time". ..."
  • image p059fig02.05 Bowed serial position curve. This kind of data emphasizes the importance of modelling how our brains give rise to our minds using nonlinear systems of differential equations.
    || Effects of [inter, intra]trial intervals (Hovland 1938). # of errors vs list position. [w (sec), W (sec)] = (2 6) (4 6) (2 126) (4 126). Nonoccurrence of future items reduces the number of errors in response to past items. These data require a real-time theory for their explanation, that is, DIFFERENTIAL equations.
  • image p059fig02.06 The bowed serial position curve illustrates the sense in which "events can go backwards in time" during serial learning.
    || Bow due to backward effect in time. If the past influenced the future, but not conversely: # of errors vs list position; Data (Hovland, Hull, Underwood, etc).
  • image p060fig02.07 Position-specific-forward and backward error gradients illustrate how associations can form in both the forward and backward directions in time before the list is completely learned.
    || Error gradients: depend on list position. # of responses vs list position:
    list beginning   anticipatory errors                     forward in time
    list middle      anticipatory and perseverative errors   forward and backward in time
    list end         perseverative errors                    backward in time
  • image p061fig02.08 The existence of forward and backward associations, such as from A to B and from B to A is naturally explained by a network of neurons with their own activities or STM traces, and bidirectional connections between them with their own adaptive weights or LTM traces.
    || How these results led to neural networks (Grossberg 1957). Networks can learn forward and backward associations! Practicing A->B also learns B<-A. Because learning A->B is not the same as learning B->A, you need STM traces, or activations, xi at the nodes, or cells, and LTM traces, or adaptive weights, zij, for learning at the synapses.
  • image p063fig02.09 The Additive Model describes how multiple effects add up to influence the activities, or STM traces, of neurons.
    || STM: Additive Model (Grossberg, PNAS 1967, 1968).
    Diagram: STM trace (activation) xi(t) -> signal fi(xi(t)) -> path strength Bij -> LTM trace (adaptive weight) zij(t) -> xj(t).
    Terms: learning rate?, passive decay, positive feedback, negative feedback, input.
    d[dt: xi(t)] = -Ai*xi + sum[j=1 to n: fj(xj(t))*Bji*zji] - sum[j=1 to n: gj(xj(t))*Cji*Zji] + Ii
    Special case: d[dt: xi(t)] = -Ai*xi + sum[j=1 to n: fj(xj(t))*zji] + Ii
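    A minimal numerical sketch of the Additive Model above (my illustration; the decay rates, frozen LTM traces, threshold-linear signal functions, and inputs are assumed for demonstration):

      import numpy as np

      n = 3
      A = np.full(n, 0.5)                    # passive decay rates Ai
      B = np.full((n, n), 1.0)               # excitatory path strengths Bji
      C = np.full((n, n), 1.0)               # inhibitory path strengths Cji
      z = np.full((n, n), 0.1)               # excitatory LTM traces zji (frozen here)
      Z = np.full((n, n), 0.2)               # inhibitory LTM traces Zji (frozen here)
      f = g = lambda w: np.maximum(w, 0.0)   # threshold-linear signal functions
      I = np.array([1.0, 0.2, 0.0])          # external inputs Ii
      x, dt = np.zeros(n), 0.01
      for _ in range(5000):                  # integrate the additive STM equation
          x += dt * (-A * x + f(x) @ (B * z) - g(x) @ (C * Z) + I)
      print(np.round(x, 3))                  # steady-state STM traces xi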
  • image p064fig02.10 The Shunting Model includes upper and lower bounds on neuronal activities. These bounds have the effect of multiplying additive terms by excitatory and inhibitory automatic gain terms that enable such models to preserve their sensitivity to inputs whose size may vary greatly through time, while also approximately normalizing their total activities.
    || STM: Shunting Model (Grossberg, PNAS 1967, 1968). Mass action in membrane equations. Bi/Ci -> xi(t) -> O -> -Fi/Ei. Bounded activations, automatic gain control. d[dt: xi(t)] = - Ai*xi + (Bi - Ci*xi)sum[j=1 to n: fj(xj(t))*Dji*yji*zji + Ii] - (Ei*Xi + Fi)*sum[j=1 to n: gj(xj)*Gji*Yji*Zji + Ji]. Includes the Additive Model.
  • image p064fig02.11 Medium-Term Memory (MTM) and Long-Term Memory (LTM) equations complement the Additive and Shunting Models of STM. MTM is typically defined by a chemical transmitter that is released from the synaptic knobs of a neuron (Figure 2.03). Its release or inactivation in an activity-dependent way is also called habituation. LTM defines how associative learning occurs between a pair of neurons whose activities are approximately correlated through time. See the text for details.
    || Medium and Long Term memory.
    MTM   habituative transmitter gate      d[dt: yki(t)] = H*(K - yki) - L*fk(xk)*yki
    LTM   gated steepest descent learning   d[dt: zki(t)] = Mk*fk(xk)*(hi(xi) - zki)
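    A small sketch of how the MTM and LTM laws behave for a single pathway k -> i (my illustration, with assumed constants H, K, L, Mk and a square-pulse presynaptic signal):

      import numpy as np

      H, K, L, M, dt = 0.1, 1.0, 1.0, 0.2, 0.01
      y, z = 1.0, 0.0                            # transmitter gate and adaptive weight
      hi = 0.8                                   # postsynaptic target signal hi(xi)
      fk = lambda t: 1.0 if t < 20.0 else 0.0    # presynaptic signal: on, then off
      for step in range(int(40.0 / dt)):
          t = step * dt
          y += dt * (H * (K - y) - L * fk(t) * y)    # MTM: habituative gate
          z += dt * (M * fk(t) * (hi - z))           # LTM: gated steepest descent
      print(f"y = {y:.3f}, z = {z:.3f}")
      # While fk > 0, y habituates toward H*K/(H + L) and z tracks hi(xi);
      # once fk = 0, the learning gate shuts (z holds its value) and y recovers toward K.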
  • image p065fig02.12 Three sources of neural network research: [binary, linear, continuous nonlinear]. My own research has contributed primarily to the third.
    || Three sources of neural network research.
    Binary (neural network signal processing):
      McCulloch-Pitts 1943: Xi(t+1) = sgn{sum[j: Aij*Xj(t)] - Bi}
      Von Neumann 1945, Caianiello 1961
      -> digital computer
    Linear (systems theory):
      Rosenblatt 1962, Widrow 1962, Anderson 1968, Kohonen 1971
      -> Y = A*X, cross-correlate, steepest descent
    Continuous and nonlinear (neurophysiology and psychology):
      Hodgkin, Huxley 1952; Hartline, Ratliff 1957; Grossberg 1967; Von der Malsburg 1973
  • image p068fig02.13 Hartline
  • image p068fig02.14 Hodgkin and Huxley developed a model to explain how spikes travel down the squid giant axon.
    || Neurophysiology (single cell): spike potentials in squid giant axon (Hodgkin, Huxley 1952, Nobel Prize). time -> (dendrites -> cell body -> axon).
    C*dp[dt: V] = α*dp^2[dx^2: V] + (V(+) - V)*g(+) + (V(-) - V)*g(-) + (V(p) - V)*g(p)
    g(+) = G(+)(m,h), g(-) = G(-)(n), g(p) = const; [m, h, n] - ionic processes, V - voltage
    Precursor of the shunting network model (Rall 1962). (Howell: see p075fig02.24 Membrane equations of neurophysiology, shunting equation.)
  • image p071fig02.15 The noise saturation dilemma: How do neurons retain their sensitivity to the relative sizes of input patterns whose total sizes can change greatly through time?
    || Noise-Saturation Dilemma (Grossberg 1968-1973). Bounded activities from multiple input sources.
    If activities xi are sensitive to SMALL inputs, then why don't they saturate when inputs become LARGE?
  • image p071fig02.16 To solve the noise-saturation dilemma, individual neurons in a network that is receiving a distributed spatial pattern of inputs need to remain sensitive to the ratio of the input to them divided by the sum of all the inputs in that spatial pattern. Although the inputs are delivered to a finite number of neurons, the input and activity patterns are drawn continuously across the cells for simplicity.
    || Noise-Saturation Dilemma. [Ii, xi] vs t. [Input, Activity] pattern [small -> noise, large -> saturation]. Problem: remain sensitive to input RATIOS θi = Ii / sum[j: Ij] as total input I = sum[j: Ij] -> ∞. Many kinds of data exhibit sensitivity to ratios of inputs.
  • image p072fig02.17 Brightness constancy.
    || Vision: brightness constancy, contrast normalization. Compute RATIOS of reflected light. Reflectance processing. p72c1h0.45 "... In other words, the perceived brightness of the gray disk is constant despite changes in the overall illumination. On the other hand, if only the gray disk were illuminated at increasing intensities, with the annulus illuminated at a constant intensity, then the gray disk would look progressively brighter. ..."
  • image p072fig02.18 Vision: brightness contrast. Conserve a total quantity: total activity normalization.
    || LUCE: ratio scales in choice behavior
    ZEILER: adaptation level theory
  • image p073fig02.19 Computing with cells: infinity does not exist in biology!
    || Computing in a bounded activity domain, Gedanken experiment (Grossberg 1970). Cells vm with sub-areas [xm, B - xm], input I to each cell m, m = [1, ..., i, ..., n].
    B           excitable sites
    xi(t)       excited sites (activity, potential)
    B - xi(t)   unexcited sites
  • image p073fig02.20 Shunting saturation occurs when inputs get larger to non-interacting cells.
    || Shunting saturation. [xi(t), B - xi(t)].
    d[dt: xi] = -A*xi + (B - xi)*Ii
    (a) Spontaneous decay of activity xi to equilibrium
    (b) Turn on unexcited sites B - xi by inputs Ii (mass action)
    Inadequate response to a SPATIAL PATTERN of inputs: Ii(t) = θi*I(t)
    θi     relative intensity (cf. reflectance)
    I(t)   total intensity (cf. luminance)
  • image p073fig02.21 How shunting saturation turns on all of a cell's excitable sites in response to intense input patterns.
  • image p073fig02.22 An on-center off-surround network is capable of computing input ratios.
    || Computing with patterns.
    How to compute the pattern-sensitive variable: θi = Ii / sum[k=1 to n: Ik]?
    Needs interactions! What type? θi = Ii / (Ii + sum[k≠i: Ik])
    Ii↑ ⇒ θi↑ excitation; Ik↑ ⇒ θi↓, k ≠ i, inhibition
    On-center off-surround network.
  • image p074fig02.23 The equations for a shunting on-center off-surround network. Shunting terms lead to many beautiful and important properties of these networks, which are found ubiquitously, in one form or another, in all cellular tissues.
    || Shunting on-center off-surround network.
    Mass action: d[dt: xi] = -A*xi + (B - xi)*Ii - xi*sum[k≠i: Ik]
    (B - xi)*Ii turns on unexcited sites; xi*sum[k≠i: Ik] turns off excited sites.
    At equilibrium:
    0 = d[dt: xi] = -(A + Ii + sum[k≠i: Ik])*xi + B*Ii = -(A + I)*xi + B*Ii
    xi = B*Ii/(A + I) = B*θi*I/(A + I) = θi*B*I/(A + I)   No saturation!
    • Infinite dynamical range
    • Automatic gain control
    • Compute ratio scale
    • Weber law
    x = sum[k=1 to n: xk] = B*I/(A + I) ≤ B   Conserves total activity:
    • NORMALIZATION
    • Limited capacity
    • Real-time probability
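    The no-saturation and normalization claims above can be checked by integrating the network to equilibrium. A minimal sketch (assumed A = B = 1 and an assumed input pattern θ):

      import numpy as np

      A, B, dt = 1.0, 1.0, 0.001
      theta = np.array([0.1, 0.2, 0.3, 0.4])       # fixed relative input pattern
      for I_total in [1.0, 10.0, 1000.0]:
          Inp = theta * I_total
          x = np.zeros_like(Inp)
          for _ in range(20000):                   # integrate to equilibrium
              x += dt * (-A * x + (B - x) * Inp - x * (Inp.sum() - Inp))
          print(f"I={I_total:7.1f}  x={np.round(x, 4)}  total={x.sum():.4f}")
      # The ratios xi/x stay locked to theta at every intensity (no saturation),
      # while the total activity B*I/(A + I) remains normalized below B.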
  • image p075fig02.24 The membrane equations of neurophysiology describe how cell voltages change in response to excitatory, inhibitory, and passive input channels. Each channel is described by a potential difference multiplied by a conductance. With the special choices shown in the lower right-hand corner, this equation defines a feedforward shunting on-center off-surround network.
    || Membrane equations of neurophysiology.
    C*dp[dt: V] = (V(+) - V)*g(+) + (V(-) - V)*g(-) + (V(p) - V)*g(p)
    Shunting equation (not additive)
    V Voltage
    V(+), V(-), V(p) Saturating voltages
    g(+), g(-), g(p) Conductances
    V(+) = B, C = 1; V(-) = V(p) = 0; g(+) = Ii; g(-) = sum[k≠i: Ik];
    lower bound of V: V(-) = V(p), silent inhibition; upper bound of V: V(+). (Howell: see p068fig02.14, the Hodgkin-Huxley membrane equations.)
  • image p076fig02.25 An on-center off-surround network can respond to increasing on-center excitatory inputs without a loss of sensitivity. Instead, as the off-surround input increases, the region of a cell's maximal sensitivity shifts to the new background level (the shift property; see Figure 2.26).
  • image p076fig02.26 The mudpuppy retina exhibits the shift property that occurs in the feedforward shunting on-center off-surround network in Figure 2.25. As a result, its sensitivity also shifts in response to different background off-surrounds, and therefore exhibits no compression (dashed purple lines).
    || Mudpuppy retina neurophysiology.
    I center, J background
    a) Relative figure-to-ground
    b) Weber-Fechner: Ii*(A + J)^(-1)
    c) No hyperpolarization, SHUNT: silent inhibition
    d) Shift property (Werblin 1970): xi(K,J) vs K = ln(I)
    Adaptation: sensitivity shifts for different backgrounds. NO COMPRESSION.
  • image p077fig02.27 A schematic of the on-center off-surround network that occurs in the mudpuppy retina, including three main cell types: receptors, horizontal cells, and bipolar cells.
    || Mechanism: cooperative-competitive dynamics.
    On-center off-surround (Kuffler 1953) cat retina
    Subtractive lateral inhibition (Hartline, Ratliff 1956/7+) limulus retina.
    R receptor -> H horizontal -> B bipolar (Werblin, Dowling, etal 1969+) mudpuppy retina.
  • image p077fig02.28 Silent inhibition is replaced by hyperpolarization when the inhibitory saturating potential is smaller than the passive saturating potential. Then an adaptation level is created that determines how big input ratios need to be to activate their cells.
    || Weber Law and adaptation level.
    Hyperpolarization vs silent inhibition
    d[dt: xi] = -A*xi + (B - xi)*Ii - (xi + C)*sum[k≠i: Ik]
    At equilibrium:
    0 = d[dt: xi] = -(A + Ii + sum[k≠i: Ik])*xi + B*Ii - C*sum[k≠i: Ik]
      = -(A + I)*xi + (B + C)*Ii - C*I
      = -(A + I)*xi + (B + C)*I*[θi - C/(B + C)]
    xi = (B + C)*I/(A + I) * [θi - C/(B + C)]
         Weber Law term * [reflectance θi - adaptation level C/(B + C)]
  • image p078fig02.29 How the adaptation level is chosen to enable sufficiently distinct inputs to activate their cells.
    || Weber Law and adaptation level.
    xi = (B + C)*I/(A + I) * [θi - C/(B + C)]
         Weber Law term * [reflectance θi - adaptation level C/(B + C)]
    V(+) >> V(-) ⇒ B >> C ⇒ C/(B + C) << 1
    Adaptation level theory (Zeiler 1963).
  • image p078fig02.30 Choosing the adaptation level to achieve informational noise suppression.
    || Noise suppression. Attenuate Zero Spatial frequency patterns: no information. Ii vs i (flat line), xi vs i (flat line at zero)
    B >> C: Try B = (n - 1)*C or C/(B + C) = 1/n
    Choose a uniform input pattern (no distinctive features): All θi = 1/n
    xi = (B + C)*I/(A + I)*[θi -C/(B + C)] = 0 no matter how intense I is.
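    A quick check of this noise-suppression rule using the equilibrium law from Figure 2.28 (assumed A and C; B is set to (n - 1)*C as above):

      import numpy as np

      A, C, n = 1.0, 1.0, 4
      B = (n - 1) * C                              # adaptation level C/(B + C) = 1/n
      def x_eq(Inp):
          I = Inp.sum()
          theta = Inp / I
          return (B + C) * I / (A + I) * (theta - C / (B + C))

      print(np.round(x_eq(np.array([5.0, 5.0, 5.0, 5.0])), 3))          # uniform
      print(np.round(x_eq(np.array([500.0, 500.0, 500.0, 500.0])), 3))  # intense uniform
      print(np.round(x_eq(np.array([2.0, 8.0, 6.0, 4.0])), 3))          # featureful
      # Uniform patterns are zeroed no matter how intense I is; only components
      # with theta_i above the adaptation level 1/n stay positive (negative values
      # mean those cells are driven below their resting level).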
  • image p078fig02.31 How noise suppression enables matching of bottom-up and top-down input patterns.
    || Noise suppression -> pattern matching. mismatch (out of phase) suppressed, match (in phase) amplifies pattern.
  • image p079fig02.32 Matching amplifies the matched pattern due to automatic gain control. See terms I and J in the equation.
    || Substrate of resonance. Match (in phase) of BU and TD input patterns AMPLIFIES matched pattern due to automatic gain control by shunting terms. J = sum[i: Ji], I = sum[i: Ii], θi = (Ii + Ji)/(I + J)
    xi = (B + C)*(I + J)/(A + I + J)*[θi -C/(B + C)]
    Need top-down expectations to be MODULATORY.
  • image p080fig02.33 An opposite-attracts rule during the development of intracellular connections can lead to a mature network that realizes informational noise suppression.
    || How do noise suppression parameters arise? Symmetry-breaking during morphogenesis? Opposites-attract rule.
    Intracellular parameters C/B = 1/(n - 1) <-> intercellular parameters
    Predicts that intracellular excitatory and inhibitory saturation points can control the growth, during development, of intercellular excitatory and inhibitory connections.
  • image p080fig02.34 How to achieve informational noise suppression in a network with multiple parallel processing channels.
    || Symmetry-breaking: dynamics and anatomy.
    Dynamics:
    • excitatory range is amplified
    • inhibitory range is compressed
    Anatomy:
    • narrow on-center
    • broad off-surround
    Noise suppression: attenuates uniform patterns
    Contour direction: enhances pattern gradients
  • image p081fig02.35 The equilibrium activities of a shunting network with Gaussian on-center off-surround kernels are sensitive to the ratio-contrasts of the input patterns that they process. The terms in the denominator of the equilibrium activities accomplish this using the shunting on-center and off-surround terms.
    || Ratio-contrast detector. [narrow Gaussian on-center Cki, broad (flattened?) Gaussian off-surround Eki]
    d[dt: xi] = -A*xi + (B - xi)*sum[k=1 to n: Ik*Cki] - (xi + D)*sum[k=1 to n: Ik*Eki]
    Cki = C*e^(-μ*(k - i)^2), Eki = E*e^(-ν*(k - i)^2)
    At equilibrium: xi = I*sum[k=1 to n: θk*Fki] / (A + I*sum[k=1 to n: θk*Gki])
    Fki = B*Cki - D*Eki (weighted difference of Gaussians, DOG)
    Gki = Cki + Eki (sum of Gaussians, SOG)
    • Reflectance processing
    • Contrast normalization
    • Discount illuminant
  • image p081fig02.36 Informational noise suppression in networks with Gaussian on-center and off-surround kernels makes them function as contour detectors that are sensitive to ratio-contrast.
    || Noise suppression and contour detection.
    If B*sum[k=1 to n: Cki] <= D*sum[k=1 to n: Eki] then:
    • uniform patterns are suppressed
    • contrasts are selectively enhanced
    • contours are detected
    (plots: Ii vs i; xi vs i)
    Responses are selective to [REFLECTANCE, SPATIAL SCALE], eg color [feature, surface] contours.
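    A one-dimensional sketch of this contour detector (my illustration; the kernel widths, the test image, and the balance condition B*sum(C) = D*sum(E) are assumptions consistent with the inequality above):

      import numpy as np

      n = 60
      k = np.arange(n)
      Inp = np.where((k >= 20) & (k < 40), 10.0, 2.0)   # bright bar on a dim background
      d = k[:, None] - k[None, :]
      Cki = np.exp(-(d / 1.5) ** 2)                     # narrow on-center Gaussian
      Eki = np.exp(-(d / 6.0) ** 2)                     # broad off-surround Gaussian
      A, B = 1.0, 1.0
      D = B * Cki.sum() / Eki.sum()                     # enforce B*sum(C) = D*sum(E)
      x = ((B * Cki - D * Eki) @ Inp) / (A + (Cki + Eki) @ Inp)
      print(np.round(x[15:45], 3))                      # away from array borders
      # Activity is near zero inside the uniform bar and background (noise
      # suppression); oppositely signed peaks flank the bar's edges at k = 20
      # and k = 40, i.e. the network detects the contours.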
  • image p082fig02.37 My models begin with behavioral data, since brains are designed to achieve behavioral success. The text explains how models evolve in stages, through a process of successive refinements, or unlumpings. These unlumpings together carry out a kind of conceptual evolution, leading to models that can explain and predict ever larger psychological and neurobiological databases.
    || Modelling method and cycle.
    Behavioral data -(art of modeling)-> Design principles <- Neural data <-(brain predictions)- Mathematical model and analysis -(behavioral predictions)-> Behavioural data
    Operationalizes "proper level of abstraction"
    Operationalizes that you cannot "derive a brain" in one step.
  • image p085fig02.38 Our models have been used in many large-scale applications to engineering and technology. Linking brain to behavior explains how brain mechanisms give rise to psychological functions, and do so autonomously. The combination of mechanism, function, and autonomy helps to explain their value in helping to solve outstanding problems in technology.
    || Modeling method and cycle.
    Behavioral data -(art of modeling)-> Design principles <- Neural data <-(brain predictions)- Mathematical model and analysis -(behavioral predictions)-> Behavioural data
    Technology: Mathematical model and analysis <-> Technological applications
    At every stage, spin off new model designs and mechanisms to technologists who need autonomous intelligent applications.
  • image p087fig03.01 A macrocircuit of key visual processes (in green) and the cortical areas in which they primarily occur (in red), from the retina to the Prefrontal Cortex (PFC), including both the What and Where cortical streams. The [bottom-up, horizontal, and top-down] interactions help each of these processes to overcome computationally complementary processing deficiencies that they would experience without them, and also to read out top-down expectations that help to stabilize learning while they focus attention on salient objects and positions.
    || Emerging unified theory of visual intelligence. [What, Where] streams. Bottom-up and top-down interactions overcome COMPLEMENTARY processing deficiencies.
  • image p089fig03.02 What do you think lies under the two grey disks? (on a checkerboard)
    || p089c1h0.55 "... As your eye traverses the entire circular boundary (Howell: of a grey disk on a checkerboard), the contrast keeps flipping between light-to-dark and dark-to-light. Despite these contrast reversals, we perceive a single continuous boundary surrounding the gray disk. ...".
  • image p090fig03.03 Kanizsa square and reverse-contrast Kanizsa square percepts. The spatial arrangement of pac-men, lines, and relative contrasts determines the perceived brightness of the squares, and even whether they exhibit any brightness difference from their backgrounds, as in (b). These factors also determine whether pac-men will appear to be amodally completed behind the squares, and how far behind them.
    || p089c2h0.65 "...
    a) The percept of the square that abuts the pac-men is a visual illusion that is called the Kanizsa square. The enhanced brightness of the square is also an illusion.
    c) shows that these boundaries can be induced by either collinear edges or perpendicular line ends, and that both kinds of inducers cooperate to generate an even stronger boundary.
    d) if the perpendicular lines cross the positions of the illusory contours, then they can inhibit the strength of these contours. ..."
  • image p091fig03.04 A cross-section of the eye, and a top-down view of the retina, showing how the blind spot and retinal veins can occlude the registration of light signals at their positions on the retina.
    || Eye: [optic nerve, ciliary body, iris, lens, pupil, cornea, sclera, choroid, retina]. Human retina: [fovea, blind spot, optic nerve]. See also cross-section of retinal layers.
  • image p092fig03.05 A cross-section of the retinal layer. Note that light stimuli need to go through all retinal layers before they reach the photoreceptor layer at which the light signals are registered.
    || light stimuli ->
    retinal layer                cellular composition
    inner limiting membrane
    retinal nerve fibre          ganglion nerve fibres
    ganglion cell                ganglion
    inner plexiform              amacrine
    inner nuclear                horizontal
    outer plexiform
    outer limiting membrane
    photoreceptor                rod
    photoreceptor                cone
    retinal pigment epithelium
    <- signal transduction. http://brain.oxfordjournals.org/content/early/2011/01/20/brain.awq346
  • image p093fig03.06 Every line is an illusion because regions of the line that are occluded by the blind spot or retinal veins are completed at higher levels of brain processing by boundary completion and surface filling-in.
    || Every line is an illusion!
    Boundary completion   Which boundaries to connect?
    Surface filling-in    What color and brightness do we see?
  • image p094fig03.07 The processes of boundary completion and surface filling-in are computationally complementary.
    ||
    Boundary completion                    Surface filling-in
    outward                                inward
    oriented                               unoriented
    insensitive to direction-of-contrast   sensitive to direction-of-contrast
  • image p095fig03.08 Computer simulation of a Kanizsa square percept. See the text for details.
    || p094c2h0.2 "...
    b) shows the feature contours that are induced just inside the pac-man boundaries.
    c) feature contours fill-in within the square boundary
    d) create a percept of enhanced brightness throughout the square surface ..."
  • image p095fig03.09 Simulation of a reverse-contrast Kanizsa square percept. See the text for details.
    || p094c2h0.5 "...
    b) whereas bright feature contours are induced just inside the boundaries of the two black pac-men at the bottom of the figure, dark feature contours are induced inside the boundaries of the two white pac-man at the top of the figure
    c) the square boundary is recognized
    d) Because these dark and bright feature contours are approximately balanced, the filled-in surface color is indistinguishable from the filled-in surface color outside of the square, ... but [the square boundary is] not seen ..."
  • image p096fig03.10 The visual illusion of neon color spreading. Neither the square nor the blue color that are perceived within it are in the image that defines a neon color display. The display consists only of black and blue arcs.
    ||
  • image p096fig03.11 Another example of neon color spreading. The image is composed of black and blue crosses. See the text for details.
    || Howell: note the appearance of illusory red squares
  • image p100fig03.13 The Ehrenstein percept in the left panel is significantly weakened as the orientations of the lines that induce it deviate from being perpendicular to the illusory circle.
    ||
  • image p100fig03.14 Boundaries are completed with the orientations that receive the largest total amount of evidence, or support. Some can form in the locally preferred orientations that are perpendicular to the inducing lines, while others can form through orientations that are not locally preferred, thus showing that there is initially a fuzzy band of almost perpendicular initial grouping orientations at the end of each line.
    || Perpendicular induction at line ends wrt [circular, square] boundaries
    line ends | local | global
    perpendicular, crisp | preferred | preferred
    NOT perpendicular, fuzzy | unpreferred | preferred
  • image p100fig03.15 A fuzzy band of possible initial grouping orientations allows grouping to get started. Cooperative-competitive feedback via a hierarchical resolution of uncertainty chooses a sharp final grouping that has the most evidence to support it.
    || before choice: transient; after choice: equilibrium
  • image p102fig03.16 T
  • image p102fig03.17 The relative positions of the squares give rise to a percept of three regions. In the middle region, emergent diagonal groupings form, despite the fact that all the orientations in the image are verticals and horizontals.
    ||
  • image p103fig03.18 Computer simulations in [b, d, f, h] of groupings in response to different spatial arrangements in [a, c, e, g] of inducers that are composed of short vertical boundaries. Note the emergent horizontal groupings in [d, f, h] and the diagonal groupings in h, despite the fact that all its inducers have vertical orientations.
    ||
  • image p103fig03.19 As in Figure 3.18, emergent groupings can form whose orientations differ from those of the inducing stimuli.
    || That's how multiple orientations can induce boundary completion of an object. [diagonal, perpendicular, parallel]
  • image p104fig03.20 Sean Williams: how boundaries can form
    ||
  • image p104fig03.21 Four examples of how emergent boundaries can form in response to different kinds of images. These examples show how boundary webs can shape themselves to textures, as in (c), and shading, as in (d), in addition to lines, as in (a). In all these cases, the boundaries are invisible, but reveal themselves by supporting filling-in of surface brightness and color within their form-sensitive webs.
    ||
  • image p105fig03.22 Depth-selective boundary representations capture brightness and colors in surface filling-in domains. See the text for details.
    || 3D vision and figure-ground separation. multiple-scale, depth-selective boundary webs. refer to Figure 3.21(d)
    depth increasing ↓ | boundaries | surfaces
    BC input | surface capture!
    FC input
  • image p105fig03.23 The pointillist painting A Sunday on La Grande Jatte by Georges Seurat illustrates how we group together both large-scale coherence among the pixels of the painting and small groupings around the individual dabs of color.
    ||
  • image p106fig03.24 In response to the Synthetic Aperture Radar image (upper left corner), a shunting on-center off-surround network "discounts the illuminant" and thereby normalizes cell activities to compute feature contours, without causing saturation (upper right corner). Multiple-scale boundaries form in response to spatially coherent activities in the feature contours (lower left corner) and create the webs, or containers, into which the feature contours fill-in the final surface representations (lower right corner).
    || Do these ideas work on hard problems? SAR!
    input image | feature contours | boundary contours | filled-in surface
    Synthetic Aperture Radar: sees through weather; 5 orders of magnitude of power in radar return | discounting the illuminant:
    • normalizes the image: preserves RELATIVE activities without SATURATION
    • shows individual PIXELS
    boundaries complete between regions where normalized feature contrasts change | filling-in averages brightnesses within boundary compartments
  • image p107fig03.25 The Roofs of Collioure by Matisse. See the text for details
    || p107c1h0.6 "... [Matisse] showed how patches of pure color, when laid down properly on a canvas, could be grouped by the brain into emergent boundaries, without the intervention of visible outlines. ... The trick was that these emergent boundaries, being invisible, or amodal, did not darken the colors in the surface representations. In this sense, Matisse intuitively realized that "all boundaries are invisible" through the masterful way in which he arranged his colors on canvas to generate boundaries that could support compelling surface representations. ..."
  • image p107fig03.26 How "drawing directly in color" leads to colored surface representations. Amodal boundary webs control the filling-in of color within these surface representations. See the text for details.
    || color patches on canvas -> [surface color and form, Amodal boundary web]. Amodal boundary web -> surface color and form.
  • image p108fig03.27 Matisse
  • image p108fig03.28 The watercolor illusion of Baingio Pinna 1987 can be explained using spatial competition between like-oriented boundary signals. This occurs at what I have called the First Competitive Stage. This is one stage in the brain ...
  • image p109fig03.29 The 3D percepts that are generated by chiaroscuro and trompe l'oeil ...
  • image p109fig03.30 The triptych by Jo Baer, called Primary Light Group: Red, Green, and Blue (1964-1965), generates watercolor illusion percepts which, when displayed side by side in a museum, create a striking impression.
  • image p110fig03.31 Henry Hensche
  • image p110fig03.32 Claude Monet
  • image p112fig03.33 Various ways that spatial gradients in boundary webs can cause self-luminous percepts. See the text for details.
    || Boundary web gradients can cause self-luminosity. Similar to watercolor illusion. Gloss by attached highlight (Beck, Prazdny 1981), glare (Bressan 2001), Double brilliant illusion, (Grossberg, Hong 2004) simulation. p111c2h0.5 "... This effect may be explained as the result of the boundary webs that are generated in response to the luminance gradients and how they control the filling-in of lightness within themselves and abutting regions. ... Due to the mutually inhibitory interactions across the boundaries that comprise these boundary webs, more lightness can spread into the central square as the steepness of the boundary gradients increases. ...".
  • image p113fig03.35 The Highest Luminance As White (HLAW) rule of (Hans Wallach 1948) works in some cases (top row) but not others (bottom row).
  • image p113fig03.36 The Blurred Highest Luminance As White (BHLAW) rule that I developed with my PhD student, Simon Hong, works in cases where the rule of Hans Wallach fails, as can be seen by comparing the simulation in Figure 3.35 with the one in this figure.
    || Blurred Highest Luminance As White (BHLAW) rule (Grossberg, Hong 2004, 2006). Spatial integration (blurring) adds spatial context to lightness perception.
  • image p114fig03.37 How the Blurred Highest Luminance as White rule sometimes normalizes the highest luminance to white (left panel) but at other times normalizes it to be self-luminous (right panel). See the text for details.
    || perceived reflectance vs cross-section of visual field. [white level, anchored lightness, self-luminous*, BHLAW]. *self-luminous only when conditions are right.
  • image p114fig03.38 Four color-field spray paintings of Jules Olitski. The text explains why they generate surface percepts with such ambiguous depth.
    || Jules and his friends (1967), Lysander-1 (1970), Instant Loveland (1968), Comprehensive Dream (1965). p114c2h0.4 "... it is impossible to visually perceive discrete colored units within the boundary webs in Olitski's ..."
  • image p115fig03.39 Two of Gene Davis's stripe paintings.
  • image p116fig03.40 A combination of T-junctions and perspective cues can create a strong percept of depth in response to 2D images, with a famous example being Leonardo da Vinci's ...
  • image p117fig03.41 End gaps, or small breaks or weakenings of boundaries, can form where a stronger boundary abuts a weaker, like-oriented, boundary, as occurs where black boundaries touch red boundaries in the neon color spreading image of Figure 3.11.
    || Boundary contours - lower contrast boundary signals are weakened. feature contours- no inhibition, feature signals survive and spread. MP -> [BCS, FCS]. BCS -> FCS.
  • image p117fig03.42 Two paintings by Frank Stella. See the text for details.
    || Firuzabad (top row) ... and Khurasan Gate (variation) (bottom row). p117c1h0.75 "... The luminance and color structure within a painting affects how it groups and stratifies the figures within it. These processes, in turn, affect the formation of attentional shrouds that organize how spatial attention is allocated as we view them. ..." "... Stella wrote that Firuzabad is a good example of looking for stability and trying to create as much instability as possible. ..."
  • image p120fig03.43 Four paintings by Monet of the Rouen cathedral under different lighting conditions (top row) and their monochromatic versions (bottom row). See the text for details.
    || p119c2h0.25 "... Monet uses nearby colors that are nearly equiluminant, and sharp, high-contrast luminance defined edges are sparse. He hereby creates weaker boundary signals within and between the parts of many forms, and stronger boundary signals between the forms. This combination facilitates color spreading within the forms and better separation of brightness and color differences between forms. ... The grayscale versions of these paintings demonstrate the near equiluminance of the brushstrokes within forms, and places in which brightness and color differences significantly influence the groupings that differentiate between forms, including the differentiation between the cathedral and the sky. ..."
  • image p120fig03.44 The Rouen cathedral at sunset generates very different boundary webs than it does in full sunlight, as illustrated by Figure 3.45.
    || Rouen Cathedral at sunset (Monet 1892-1894).
    • Lighting almost equiluminant
    • Most boundaries are thus caused by color differences, not luminance differences
    • Fine architectural details are obscured, leading to...
    • Coarser and more uniform boundary webs, so...
    • Less depth in the painting.
  • image p121fig03.45 The Rouen cathedral in full sunlight.
    || Rouen Cathedral full sunlight (Monet 1892-1894).
    • Lighting is strongly non-uniform across most of the painting
    • Strong boundaries due to both luminance and color differences
    • Fine architectural details are much clearer, leading to...
    • Finer and more non-uniform boundary webs, so...
    • Much more detail and depth
  • image p121fig03.46 The Rouen cathedral in full sunlight contains T-junctions that are not salient in the painting of it at sunset. These are among the painting's ...
  • image p123fig04.01 A classical example of how boundaries are barriers to filling-in.
    || Combining stabilized images with filling-in (Krauskopf 1963, Yarbus 1967). Image: Stabilize these boundaries with suction cup attached to retina or electronic feedback circuit. Percept: A visible effect of an invisible cause!
  • image p124fig04.02 The vertical cusp of lesser and greater illuminance is the same in both images, but the one on the left prevents brightness from flowing around it by creating closed boundaries that tightly surround the cusp.
  • image p126fig04.03 A McCann Mondrian is an excellent display with which to illustrate how our brains discount the illuminant to compute the "real" colors of objects. See the text for details.
    || Color constancy: compute ratios. McCann Mondrian. Biological advantage: never see in bright light, eg tropical fish
    Discount the illuminant | Compute lightness
    Different colors seen from the same spectrum
    ... similar to those seen in white light
    Physical basis: reflectance RATIOS!
  • image p128fig04.04 When a gradient of light illuminates a McCann Mondrian, there is a jump in the total light that is reflected at nearby positions where the reflectances of the patches change.
    || Compute reflectance changes at contours. Fill-in illuminant-discounted surface colors.
    left | right
    illuminant: I + ε | I - ε
    reflected luminance: A·(I + ε) | B·(I - ε)
    contrast at the edge: A·(I + ε)/(B·(I - ε)) - 1 ≈ A/B - 1, independent of I
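  A quick numeric check (mine, not from the book) that the luminance ratio at a reflectance edge cancels a slowly varying illuminant; the reflectances A, B and illuminant values are arbitrary illustrative numbers:

```python
# Ratio at a reflectance edge discounts the illuminant (toy check).
A, B = 0.8, 0.4          # surface reflectances of adjacent patches (assumed)
I, eps = 100.0, 1.0      # illuminant level and its small change across the edge

left = A * (I + eps)     # luminance reflected just left of the edge
right = B * (I - eps)    # luminance reflected just right of the edge

print(left / right)      # ~2.04, close to A / B = 2.0, for any overall level I
```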
  • image p129fig04.05 Multiple-scale balanced competition chooses color contours where the reflectance of the patches change. These color contours discount the illuminant.
    || Compute reflectance changes at contours. Fill-in illuminant-discounted surface colors. Discount illuminant: compute color contours.
  • image p129fig04.06 Filling-in of color contours restores a surface percept with colors that substantially discount the illuminant.
    || Compute reflectance changes at contours. Fill-in illuminant-discounted surface colors. Fill-in surface color: hierarchical resolution of uncertainty.
  • image p130fig04.07 Simulation of brightness constancy under uniform illumination.
    || Simulation of brightness constancy (Grossberg & Todorovic 1988). Uniform illumination. [stimulus (S), feature (F), boundary (B), output]. B -> F -> S -> B: Veridical! Boundary peaks are spatially narrower than feature peaks.
  • image p131fig04.08 Simulation of brightness constancy under an illumination gradient. Note that the feature contour pattern (F) is the same in both cases, as are the boundary contour (B) pattern that is derived from it and the final filled-in surface.
    || Simulation of brightness constancy. Discount the illuminant. [stimulus (S), feature (F), boundary (B), output]. B -> F -> S -> B: not veridical, but useful! Ratio-sensitive feature contours (F).
  • image p131fig04.09 Simulation of brightness contrast
    || Simulation of brightness contrast. [stimulus (S), feature (F), boundary (B), output].
  • image p132fig04.10 Simulation of brightness assimilation. Note how the equal steps on the left and right sides of the luminance profile are transformed into different brightness levels.
    || Simulation of brightness assimilation. [stimulus (S), feature (F), boundary (B), output].
  • image p132fig04.11 Simulations of a double step (left panel) and the Craik-O'Brien-Cornsweet effect.
  • image p133fig04.12 Simulation of the 2D COCE.
    || (Todorovic, Grossberg 1988). p132c2h0.6 "... 2D Craik-O'Brien-Cornsweet effect ..."
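  The ratio-sensitive feature-contour stage in these simulations can be sketched as a shunting on-center off-surround network at equilibrium. A minimal sketch: the equilibrium form x = B(C - E)/(A + C + E) is standard for such networks, but the kernels and constants here are illustrative, not the published Grossberg-Todorovic 1988 parameters:

```python
import numpy as np

def gaussian(halfwidth, sigma):
    x = np.arange(-halfwidth, halfwidth + 1)
    g = np.exp(-x**2 / (2 * sigma**2))
    return g / g.sum()

def feature_contours(S, A=1.0, B=90.0):
    """Equilibrium of a shunting on-center off-surround network:
    x = B*(C - E)/(A + C + E). Because it is sensitive to luminance
    RATIOS, a slow illumination gradient is largely discounted."""
    C = np.convolve(S, gaussian(3, 1.0), mode="same")    # narrow on-center
    E = np.convolve(S, gaussian(15, 6.0), mode="same")   # broad off-surround
    return B * (C - E) / (A + C + E)

# A reflectance step under a linear illumination gradient (toy input).
n = 200
reflectance = np.where(np.arange(n) < n // 2, 0.4, 0.8)
illuminant = np.linspace(50.0, 150.0, n)
S = reflectance * illuminant

F = feature_contours(S)
# F is nonzero mainly near the reflectance edge and nearly flat elsewhere,
# even though the raw luminance S drifts strongly across the image.
```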
  • image p134fig04.13 Contrast constancy shows how the relative luminances in a picture that is viewed under an illumination gradient can even be reversed, restoring the correct reflectances by discounting the illuminant.
  • image p134fig04.14 The kinds of displays that Michael Paradiso and Ken Nakayama used to catch filling-in "in the act", and which Karl Arrington then simulated using the Grossberg and Todorovic 1988 model.
    || Experiments on filling-in. Catching "filling-in" in the act (Paradiso, Nakayama 1991). (Arrington 1994, Vision Research 34, 3371-3387) simulated these data using the model of Grossberg and Todorovic 1988.
  • image p138fig04.15 Simple cells are oriented contrast detectors, not edge detectors.
    || From oriented filtering to grouping and boundary completion (Hubel, Wiesel 1968). Oriented receptive fields: SIMPLE CELLS. Sensitive to: orientation, [amount, direction] of contrast, spatial scale. Oriented local contrast detectors, not edge detectors!
  • image p139fig04.16 The simplest way to realize an odd simple cell receptive field and firing threshold.
    || "Simplest" simple cell model. need more complexity for processing natural scenes. Difference-of-Gaussian or Gabor filter (J. Daugman, D. Pollen...). Output signal vs cell activity. Threshold linear signal, half-wave rectification.
  • image p140fig04.17 Complex cells pool inputs from simple cells that are sensitive to opposite contrast polarities. Complex cells hereby become contrast invariant, and can respond to contrasts of either polarity.
    || Complex cells: pool signals from like-oriented simple cells of opposite contrast polarity at the same position. They are "insensitive to contrast polarity". Half-wave rectification of inputs from simple cells.
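  A minimal sketch of this simple-to-complex pipeline, assuming a standard odd-symmetric Gabor filter as the simple cell receptive field (the notes above cite Daugman and Pollen for such filters); all parameters are illustrative, not a published implementation:

```python
import numpy as np
from scipy.signal import convolve2d

def odd_gabor(size=15, sigma=3.0, wavelength=8.0, theta=0.0):
    """Odd-symmetric Gabor: a common model of an oriented simple cell
    receptive field (one contrast polarity per sign of the kernel)."""
    ax = np.arange(size) - size // 2
    x, y = np.meshgrid(ax, ax)
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.sin(2 * np.pi * xr / wavelength)

def simple_cell(image, kernel, threshold=0.0):
    """Threshold-linear (half-wave rectified) response to one polarity."""
    r = convolve2d(image, kernel, mode="same")
    return np.maximum(r - threshold, 0.0)

def complex_cell(image, kernel):
    """Pool like-oriented simple cells of opposite contrast polarity at the
    same position, giving polarity-invariant responses."""
    return simple_cell(image, kernel) + simple_cell(image, -kernel)

# A vertical light-dark edge drives only one simple cell polarity,
# but the complex cell responds to either polarity.
img = np.zeros((32, 32)); img[:, 16:] = 1.0
resp = complex_cell(img, odd_gabor())
```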
  • image p141fig04.18 The images formed on the two retinas in response to a single object in the world are displaced by different amounts with respect to their foveas. This binocular disparity is a powerful cue for determining the depth of the object from an observer.
    || Binocular Disparity. Binocular disparities are used in the brain to reconstruct depth from 2D retinal inputs, for relatively near objects.
  • image p141fig04.19 A laminar cortical circuit for computing binocular disparities in layer 3B of V1 at binocular simple cells. These cells add positionally disparate inputs from like-polarized monocular simple cells (layer 4 of V1). Binocular simple cells at each position that are sensitive to opposite polarities then add their outputs at complex cells in layer 2/3. Chapter 10 will explain how these laminar circuits work in greater detail.
    || Laminar cortical circuit for complex cells. [left, right] eye.
    V1 layer | description
    2/3A | complex cells
    3B | binocular simple cells
    4 | monocular simple cells
  • image p142fig04.20 A Glass pattern and a reverse-contrast Glass pattern give rise to different boundary groupings because simple cells can only pool signals from like-polarity visual features. See the text for details.
  • image p143fig04.21 Oriented simple cells can respond at the ends of thick enough bar ends, but not at the ends of thin enough lines. See the text for an explanation of why this is true, and its implications for visual system design.
    || Hierarchical resolution of uncertainty. For a given field size. Different responses occur at bar ends and line ends. For a thin line no detector perpendicular to line end can respond enough to close the boundary there. Network activity.
  • image p144fig04.22 Computer simulation of how simple and complex cells respond to the end of a line (gray region) that is thin enough relative to the receptive field size (thick dashed region in the left panel). These cells cannot detect the line end, as indicated by the lack of responses there in the left panel (oriented short lines denote the cells' ...)
  • image p145fig04.23 If end gaps were not closed by end cuts, then color would flow out of every line end!
    || A perceptual disaster in the feature contour system. feature contour, line boundary. input -> [boundary, surface]. boundary -> surface. Color would flow out of every line end! as it does during neon color spreading.
  • image p145fig04.24 A brain
  • image p146fig04.25 Networks of simple, complex, and hypercomplex cells can create end cuts as an example of hierarchical resolution of uncertainty. See the text for details.
    || How are end cuts created? (Grossberg 1984) Two stages of short-range competition. 1st stage: Simple cells -> complex cells -> hypercomplex - endstopped complex. First competitive stage- across position, same orientation; Second competitive stage- same position, across orientation. -> cooperation.
  • image p148fig04.26 End cuts are formed during neon color spreading in the same way that they are formed at line ends.
    || End cut during neon color spreading.
    FIRST competitive stage | SECOND competitive stage
    within orientation | across orientation
    across position | within position
    ... to generate end cuts.
  • image p149fig04.27 Bipole cells can form boundaries that interpolate end cuts, and use their cooperative-competitive interactions to choose the boundary groupings that have the most support from them.
    || Bipole cells: boundary completion. long-range cooperation & short-range inhibition: complete winning boundary groupings and suppress weaker boundaries.
  • image p150fig04.28 Bipole cells have two branches (A and B), or poles, in their receptive fields. They help to carry out long-range boundary completion.
    || Bipole property. Boundary completion via long-range cooperation. Completing boundaries inwardly between pairs or greater numbers of inducers in an oriented way. Fuzzy "AND" gate.
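  A toy illustration of the bipole "fuzzy AND gate" property (my sketch, not Grossberg's published equations): firing needs collinear support in both receptive-field branches, or direct bottom-up input, so boundaries complete inwardly between inducers but never extend outwardly past a single inducer:

```python
def bipole(left_pole, right_pole, direct_input, theta=0.5):
    """Toy bipole firing rule. theta is an illustrative firing threshold."""
    f = lambda s: max(s, 0.0)                    # half-wave rectification
    grouping = min(f(left_pole), f(right_pole))  # AND-like: needs BOTH poles
    return max(direct_input + grouping - theta, 0.0)

print(bipole(0.8, 0.9, 0.0))   # 0.3: both poles supported -> inward completion
print(bipole(0.8, 0.0, 0.0))   # 0.0: one pole only -> no outward extension
print(bipole(0.0, 0.0, 0.9))   # 0.4: direct bottom-up contrast still fires the cell
```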
  • image p151fig04.29 Experimental evidence of bipole cells in cortical area V2 was reported by Von der Heydt, Peterhans, and Baumgarter (1984).
    || Bipoles: first neurophysiological evidence (V2) (von der Heydt, Peterhans, Baumgartner 1984, Peterhans, von der Heydt 1988). (Grossberg 1984) prediction.
    Stimulus (S), probe location (*), response of cells in V2:
    ...(S)*... | YES
    ...*...(S) | NO
    (S)...*... | NO
    (S)...*...(S) | YES
    (S)...*... (more contrast) | NO
    (S)...*.....(S) | YES
    Evidence for receptive field.
  • image p151fig04.30 Anatomical evidence for long-range horizontal connections has also been reported, as illustrated by the example above from (Bosking etal 1997).
    || Anatomy: horizontal connections (V1) (Bosking etal 1997). Tree shrew; figure axes in degrees.
  • image p152fig04.31 The predicted bipole cell receptive field (upper left corner) has been supported by both neurophysiological data and psychophysical data, and used in various forms by many modelers. See the text for details.
    || Bipoles through the ages. (Grossberg 1984; Grossberg, Mingolla 1985). (Field, Hayes, Hess 1993) "association field". (Heitger, von der Heydt 1993). (Williams, Jacobs 1997). cf. "relatability": geometric constraints on which contours get to group (Kellman & Shipley 1991). Also "tensor voting" (Ullman, Zucker, Mumford, Guy, Medioni, ...).
  • image p153fig04.32 The double filter network embodies simple, complex, and hypercomplex (or endstopped complex) cells. It feeds into a network of bipole cells that can complete boundaries when it properly interacts with the double filter.
    || Double filter and grouping network. Cells : simple -> complex -> hypercomplex (endstopping) -> bipole
    Grouping network: bipole cells
    Double filter: hypercomplex cells (endstopping) <- complex cells <- simple cells
  • image p156fig04.33 A tripartite texture (top row) and two bipartite textures (bottom row) that illustrate how emergent boundary groupings can segregate textured regions from one another.
  • image p157fig04.34 Some textures that were simulated with mixed success by the complex channels model. In particular, the model gets the wrong answer for the textures in (g) and (i). The Boundary Contour System model of Figure 4.32, which includes both a double filter and a bipole grouping network, simulates the observed results.
  • image p159fig04.35 Spatial impenetrability prevents grouping between the pac-men figures in the left figure, but not in the figure on the right.
    || p158c2h0.75 "... In the image shown in the left panel, the horizontal boundaries of the background squares interfere with vertical boundary completion by vertically-oriented bipole cells, again by spatial impenetrability. In contrast, the vertical boundaries of the background squares are collinear with the vertical pac-man inducers, thereby supporting formation of the square boundaries. Finer aspects of these percepts, such as why the square ... (right panel) appears to lie in front of four partially occluded circular discs, as regularly occurs when the Kanizsa square can form (eg Figure 3.3), can be understood using FACADE theory mechanisms that will be shown below to explain many figure-ground percepts using natural extensions to the three-dimensional world of the boundary and surface mechanisms that we have already discussed. ..."
  • image p159fig04.36 Graffiti art by Banksy exploits properties of amodal boundary completion and spatial impenetrability.
    || p159c1h0.75 perceptual psychologist Nava Rubin "... When the wall is smooth, Banksy leaves the regions previously covered by stencil unpainted, relying on observers' ..."
  • image p161fig04.37 Kanizsa squares that form either collinearly to their inducers (left panel) or perpendicular to them (right panel) confirm predictions of the BCS boundary completion model.
    || Analog-sensitive boundary completion. contour strength vs Kanizsa square image. Increases with "support ratio" (Shipley, Kellman 1992). Inverted-U (Lesher, Mingolla 1993; cf Soriano, Spillmann, Bach 1994) (shifted gratings). p370h0.6 BCS = Boundary Contour System, FCS = Feature Contour System. p161c1h0.85 "... As predicted by the BCS, they found an Inverted-U in contour strength as a function of line density. ... This effect may be explained by the action of the short-range competition that occurs before the stage of long-range cooperative grouping by bipole cells (Figure 4.32). It is thus another example of the balance between cooperative and competitive mechanisms. ..."
  • image p162fig04.38 How long-range cooperation among bipole cells and short-range competition by hypercomplex cells work together to generate the inverted-U in boundary strength that is found in the data of Figure 4.37 (right panel).
    || Cooperation and competition during grouping.
    few lines | wide spacing; inputs outside spatial range of competition; more inputs cause higher bipole activity
    more lines | narrower spacing; slightly weakens net input to bipoles from each inducer
    increasing line density | causes inhibition to reduce net total input to bipoles
  • image p163fig04.39 A schematic of the LAMINART model that explains key aspects of laminar visual cortical anatomy and dynamics. LGN -> V1 [6, 4, 2/3] -> V2 [6, 4, 2/3]
    || p163c1h0.6 "... The first article about laminar computing ... proposed how the laminar cortical model could process 2D pictures using bottom-up filtering and horizontal bipole grouping interactions (Grossberg, Mingolla, Ross 1997). In 1999, I was able to extend the model to also include top-down circuits for expectation and attention (Grossberg 1999) (right panel). Such a synthesis of laminar bottom-up, horizontal, and top-down circuits is characteristic of the cerebral cortex (left panel). I called it LAMINART because it began to show how properties of Adaptive Resonance Theory, or ART, notably the ART prediction about how top-down expectations and attention work, are realized by identified cortical cells and circuits. You can immediately see from the schematic laminar circuit diagram ... (right panel) that circuits in V2 seem to repeat circuits in V1, albeit with a larger spatial scale, despite the fact that V1 and V2 carry out different functions. How this anatomical similarity can coexist with functional diversity will be clarified in subsequent sections and chapters. It enables different kinds of biological intelligence to communicate seamlessly while carrying out their different psychological functions. ..."
  • image p164fig04.40 The Koffka-Benussi ring. See the text for details.
    || p164c2h0.25 "... [left image] The luminance of the ring is intermediate between the luminances of the two background regions. Its perceived brightness is also between the brightnesses of the two background regions, and appears to be uniform throughout. The right image differs from the left only in that a vertical line divides the two halves of the ring where it intersects the two halves in the background. Although the luminance of the ring is still uniform throughout, the two halves of the ring now have noticeably different brightnesses, with the left half of the ring looking darker than the right half. How can drawing a line have such a profound effect on the brightnesses of surface positions that are so far away from the line? ..."
  • image p165fig04.41 The Kanizsa-Minguzzi ring. See the text for details.
    || p165c1h0.6 "... (left panel), the annulus is divided by two line segments into annular sectors of unequal area. Careful viewing shows that the smaller sector looks a little brighter than the larger one. (Kanizsa, Minguzzi 1986) noted that "this unexpected effect is not easily explained. In fact, it cannot be accounted for by any simple psychological mechanism such as lateral inhibition or frequency filtering. Furthermore, it does not seem obvious to invoke organizational factors, like figural belongingness or figure-ground articulation."". p165c2h0.35 "... (Grossberg, Todorovic 1988). Our main claim is that the two radial lines play two roles, one in the formation of boundaries with which to contain the filling-in process, and the other as a source of feature contour signals that are filled-in within the annular regions to create a surface brightness percept. ..."
  • image p166fig04.42 Computer simulation of Kanizsa-Minguzzi ring percept. See the text for details.
  • image p167fig04.43 (a) How bipole cells cause end cuts. (b) The Necker cube generates a bistable percept of two 3D parallelopipeds. (c) Focusing spatial attention on one of the disks makes it look both nearer and darker, as (Tse 1995) noted and (Grossberg, Yazdanbakhsh 1995) explained.
    || T-junction sensitivity. image -> bipole cells -> boundary. (+) long-range cooperation, (-) short-range competition.
  • image p168fig04.44 Macrocircuit of the main boundary and surface formation stages that take place from the lateral geniculate nucleus, or LGN, through cortical areas [V1, V2, V4]. See the text for details.
  • image p168fig04.45 How ON and OFF feature contour (FC) activities give rise to filled-in surface regions when they are adjacent to a like-oriented boundary, but not otherwise.
  • image p170fig04.46 Surface regions can fill-in using feature contour inputs (+ and - signs) if they are adjacent to, and collinear with, boundary contour inputs (solid) line, as in (a), but not otherwise, as in (b).
  • image p170fig04.47 A double-opponent network processes output signals from opponent ON and OFF Filling-In DOmains, or FIDOs.
    || OFF FIDO -> shunting networks -> ON FIDO -> shunting networks -> opponent interaction -> FIDO outputs
  • image p171fig04.48 How closed boundaries contain filling-in of feature contour signals, whereas open boundaries allow color to spread to both sides of the boundary.
    || Before filling-in: boundary contour, illuminant-discounted feature contour; After filling-in: no gap, gap
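  The containment property can be caricatured as boundary-gated diffusion: feature activity spreads between neighbors except where a boundary blocks it. A sketch with illustrative constants, not the book's published filling-in equations:

```python
import numpy as np

def fill_in(feature, boundary, steps=5000, rate=0.2):
    """Toy 1D filling-in: activity diffuses between neighbors, but the flux is
    gated (nearly blocked) where boundary strength is high. A closed boundary
    contains the spread; a boundary gap lets activity leak across."""
    x = feature.copy()
    perm = 1.0 / (1.0 + 50.0 * boundary)          # permeability: low at boundaries
    for _ in range(steps):
        flux = perm[:-1] * perm[1:] * (x[1:] - x[:-1])  # gated neighbor flux
        x[:-1] += rate * flux
        x[1:] -= rate * flux
    return x

n = 100
feature = np.zeros(n); feature[45:55] = 1.0       # feature contour input in the middle
closed = np.zeros(n); closed[[30, 70]] = 1.0      # boundaries on both sides: contained
leaky = np.zeros(n); leaky[30] = 1.0              # boundary on one side only: leaks out

# Activity far to the right stays near zero with the closed boundary,
# but is clearly nonzero when the right-hand boundary is missing.
print(fill_in(feature, closed)[80], fill_in(feature, leaky)[80])
```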
  • image p171fig04.49 An example of DaVinci stereopsis in which the left eye sees more of the wall between A and C than the right eye does. The region between B and C is seen only by the left eye because the nearer wall between C and D occludes it from the right eye view.
  • image p173fig04.50 This figure illustrates how a closed boundary can be formed in a prescribed depth due to addition of binocular and monocular boundaries, but not at other depths.
    || How are closed 3D boundaries formed? V1 Binocular, V2 boundary, V2 surface; Prediction: monocular and horizontal boundaries are added to ALL binocular boundaries along the line of sight. Regions that are surrounded by a CLOSED boundary can depth-selectively contain filling-in of lightness and colored signals.
  • image p174fig04.51 The same feedback circuit that ensures complementary consistency between boundaries and surfaces also, automatically, initiates figure-ground separation! See the text for details.
    || before feedback: [V1 -> V2 pale stripe -> V2 thin stripe, "attention pointers" (Cavanagh etal 2010)]; after feedback: [V1 + V2 thin stripe] -> V2 pale stripe via contrast sensitive [excitation, inhibition] for depths [1, 2] -> object recognition
  • image p174fig04.52 An example of how the 3D LAMINART model can transform the two monocular images of the random dot stereogram in the top row into the three depth-separated surface representations in the bottom row.
    || Stereogram surface percepts: surface lightnesses are segregated in depth (Fang and Grossberg 2009). [left, right] inputs, [far, fixation, near] planes. Contrast with algorithms that just compute disparity matches and let computer code build the surface, eg (Marr, Poggio, etal 1974).
  • image p176fig04.53 The on-center off-surround network within position and across depth helps to explain why brighter Kanizsa squares look closer.
    || inhibition vs. depth. p176c1h0.25 "... to qualitatively understand how this example of proximity-luminance covariance works. It follows directly from the boundary pruning by surface contour feedback signals (Figure 4.51) that achieves complementary consistency and initiates figure-ground perception. ...". p176c1h0.45 "... these inhibitory signals are part of an off-surround network whose strength decreases as the depth difference increases between the surface that generates the signal and its recipient boundaries. ...". p176c1h0.8 "... Within FACADE theory, the perceived depth of a surface is controlled by the boundaries that act as its filling-in generators and barriers (Figure 3.22), since these boundaries select the depth-selective FIDOs within which filling-in can occur, and thereby achieve surface capture. These boundaries, in turn, are themselves strengthened after surface-to-boundary contour feedback eliminates redundant boundaries that cannot support successful filling-in (Figure 4.51). These surface contour feedback signals have precisely the properties that are needed to explain why brighter Kanizsa squares look closer! ..."
  • image p178fig04.54 Initial steps in figure-ground separation. See the text for details.
    ||
    top left: repeats the image in Figure 1.3
    top right: shows again the long-range cooperation and short-range competition that are controlled by the bipole grouping process (Figure 4.43a middle panel)
    bottom left: shows the end gaps that are caused by these bipole grouping mechanisms
    bottom right: shows how surface filling-in is contained within the closed horizontal rectangular boundary, but spills out of the end gaps formed in the other two rectangles
  • image p178fig04.55 Amodal completion of boundaries and surfaces in V2.
    || Separated V2 boundaries: near, far (amodal boundary completion); Separated V2 surfaces: ?horizontal, vertical? (amodal surface filling-in).
  • image p179fig04.56 Final steps in generating a visible, figure-ground separated, 3D surface representation in V4 of the unoccluded parts of opaque surfaces.
    || Visible surface perception.
    Boundary enrichment: | near | far | (asymmetry between near & far)
    V4 | horizontal rectangle | horizontal & vertical rectangles | cannot use these (overlapping?) boundaries for occluded object recognition
    V2 | horizontal rectangle | vertical rectangle | use these boundaries for occluded object recognition
    Visible surface filling-in: | filling-in of entire vertical rectangle | partial filling-in of horizontal rectangle | visible percept of unoccluded [vertical] surface
  • image p181fig04.57 Percepts of unimodal and bistable transparency (top row) as well as of a flat 2D surface (bottom row, left column) can be induced just by changing the relative contrasts in an image with a fixed geometry.
    || X junction
  • image p182fig04.58 LAMINART model processing stages that are sufficient to explain many percepts of transparency, including those summarized in Figure 4.57.
    || [left, right] eye, [LGN, V1 [6, 4, 3B, 2/3 A], V2 [4, 2/3]], [mo, bi]nocular cart [simple, complex] cells, [excita, inhibi]tory cart [connection, cell]s.
  • image p186fig05.01 Humans and other autonomous adaptive intelligent agents need to be able to learn both many-to-one and one-to-many maps.
    || Learn many-to-one (compression, naming) and one-to-many (expert knowledge) maps
  • image p186fig05.02 Learning a many-to-one map from multiple visual fonts of a letter to the letter's name.
  • image p186fig05.03 Many-to-one maps can learn a huge variety of kinds of predictive information.
    || Many-to-one map, two stage compression: IF-THEN rules: [symptom, test, treatment]s; length of stay in hospital
  • image p189fig05.04 The hippocampus is one of several brain regions that are important in learning and remembering about objects and events that we experience throughout life. The book will describe several hippocampal processes that contribute to this achievement in different ways.
    || hypothalamic nuclei, amygdala, hippocampus, cingulate gyrus, corpus callosum, thalamus
  • image p192fig05.05 ON and OFF cells in the LGN respond differently to the sides and ends of lines.
    || [ON, OFF]-center, [OFF, ON]-surround (respectively). OFF-center cells maximum response at line end (interior), ON-center cells maximum response along sides (exterior)
  • image p192fig05.06 Bottom-up and top-down circuits between the LGN and cortical area V1. The top-down circuits obey the ART Matching Rule for matching with bottom-up input patterns and focussing attention on expected critical features.
    || Model V1-LGN circuits, version [1, 2]. retina -> LGN relay cells -> interneurons -> cortex [simple, endstopped] cells -> cortex complex cells
  • image p193fig05.07 A more detailed description of the connections between retinal ganglion cells, the LGN, and V1.
    ||
  • image p193fig05.08 The patterns of LGN activation and inhibition on the sides and ends of a line without the top-down feedback (A) and with it (C). The top-down distribution of excitation (+) and inhibition (-) are shown in (B).
    ||
  • image p194fig05.09 A computer simulation of the percept (D) that is generated by feature contours (B) and boundary contours (C) in response to an Ehrenstein disk stimulus (A).
    ||
  • image p198fig05.10 A competitive learning circuit learns to transform distributed feature patterns into selective responses of recognition categories.
    || Competitive learning and Self-Organizing Maps (SOMs). input patterns -> feature level (F1) -> adaptive filter (T = ZS) -> category level (F2)
  • image p199fig05.11 Instar learning enables a bottom-up adaptive filter to become selectively tuned to particular feature patterns. Such pattern learning needs adaptive weights that can either increase or decrease to match the featural activations that they filter.
    || Instar learning STM->LTM: need both increases and decreases in strength for the LTM pattern to learn the STM pattern
  • image p200fig05.12 The duality of the outstar and instar networks is evident when they are drawn as above.
    ||
  • image p200fig05.13 Instar and outstar learning are often used to learn the adaptive weights in the bottom-up filters and top-down expectations that occur in ART. The ART Matching Rule for object attention enables top-down expectations to select, amplify, and synchronize expected patterns of critical features, while suppressing unexpected features.
    || Expectations focus attention: feature pattern (STM), Bottom-Up adaptive filter (LTM), Category (STM), competition, Top-Down expectation (LTM); ART Matching Rule: STM before top-down matching, STM after top-down matching (attention!)
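  A minimal sketch of the ART Matching Rule's effect on a feature pattern, using the fuzzy-AND (component-wise minimum) convention from Fuzzy ART; the patterns below are made up for illustration:

```python
import numpy as np

def art_match(bottom_up, top_down=None):
    """With no top-down expectation, the bottom-up pattern passes as-is;
    when an expectation is read out, unexpected features are suppressed
    while expected (critical) features stay active."""
    if top_down is None:
        return bottom_up
    return np.minimum(bottom_up, top_down)   # select expected critical features

I = np.array([1.0, 1.0, 0.0, 1.0])   # bottom-up feature pattern
E = np.array([1.0, 0.0, 1.0, 1.0])   # learned top-down expectation
print(art_match(I, E))                # [1. 0. 0. 1.]: attended critical features
```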
  • image p200fig05.14 Outstar learning enables individual sampling cells to learn distributed spatial patterns of activation at the network of cells that they sample. Again, both increases and decreases in LTM traces must be possible to enable them to match the activity pattern at the sampled cells.
    || Outstar learning: need both increases and decreases in LTM strength for the LTM pattern to learn the sampled STM pattern
  • image p201fig05.15 An outstar can learn an arbitrary spatial pattern of activation at its sampled nodes, or cells. The net pattern that is learned is a time average of all the patterns that are active at the sampled nodes when the sampling node is active.
    || Spatial learning pattern, outstar learning.
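  One common reading of the outstar rule is learning gated by the sampling cell's activity: when the sampling cell fires, its weights track a time-average of the spatial pattern at the sampled cells, increasing or decreasing as needed. A sketch with illustrative constants:

```python
import numpy as np

def outstar_step(z, x, y, lr=0.1):
    """One outstar step: weights z move toward the sampled pattern x only
    while the sampling cell is active (y > 0); gated steepest descent."""
    return z + lr * y * (x - z)

z = np.zeros(4)                              # outstar weights to 4 sampled cells
for _ in range(50):
    z = outstar_step(z, np.array([0.9, 0.1, 0.5, 0.0]), y=1.0)
print(z)                                     # converges toward the sampled pattern
```

  The instar rule of Figure 5.11 is the dual: there, the weights abutting an active postsynaptic category track the presynaptic feature pattern instead.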
  • image p202fig05.16 In the simplest example of category learning, the category that receives the largest total input from the feature level is chosen, and drives learning in the adaptive weights that abut it. Learning in this "classifying vector", denoted by zi, makes this vector more parallel to the input vector from the feature level that is driving the learning (dashed red arrow).
    || Geometry of choice and learning
  • image p202fig05.17 This figure summarizes the simplest equations whereby the adaptive weights of a winning category learn the input pattern that drove it to win, or more generally a time-average of all the input patterns that succeeded in doing so.
    || Geometry of choice and learning, learning trains the closest LTM vector
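  The choice-and-learning geometry of Figures 5.16-5.17 in one loop: winner-take-all choice of the category with the largest total input, then an instar update that rotates the winning classifying vector toward the input. A sketch, not the book's published equations:

```python
import numpy as np

def competitive_learning_step(S, Z, lr=0.2):
    """Choose the category whose classifying vector z_j receives the largest
    total input T_j = z_j . S, then move only the winner's weights toward S,
    making z_j more parallel to the input it codes."""
    T = Z @ S                       # total inputs to the category level
    j = int(np.argmax(T))           # winner-take-all category choice
    Z[j] += lr * (S - Z[j])         # instar update: track the input pattern
    return j, Z

rng = np.random.default_rng(0)
Z = rng.uniform(0.0, 1.0, size=(3, 4))       # 3 categories, 4 features
S = np.array([1.0, 0.0, 1.0, 0.0])
for _ in range(30):
    j, Z = competitive_learning_step(S, Z)
cos = Z[j] @ S / (np.linalg.norm(Z[j]) * np.linalg.norm(S))
print(j, cos)                                # cosine -> 1: z_j aligns with S
```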
  • image p205fig05.18 How catastrophic forgetting can occur in a competitive learning or self-organizing map model due to basic properties of competition and associative learning.
    || Learning from pattern sequences, practicing a sequence of spatial patterns can recode all of them! When is learning stable? Input patterns cannot be too dense relative to the number of categories; either not too many distributed inputs relative to the number of categories, or not too many input clusters
  • image p207fig05.19 The ART hypothesis testing and learning cycle. See the text for details about how the attentional system and orienting system interact in order to incorporate learning of novel categories into the corpus of already learned categories without causing catastrophic forgetting.
    ||
  • image p211fig05.20 The PN and N200 event-related potentials are computationally complementary events that are computed within the attentional and orienting systems.
    || PN and N200 are complementary waves. PN [top-down, conditionable, specific] match; N200 [bottom-up, unconditionable, nonspecific] mismatch
  • image p211fig05.21 Sequences of P120, N200, and P300 event-related potentials occur during oddball learning EEG experiments under conditions that ART predicted should occur during sequences of mismatch, arousal, and STM reset events, respectively.
    || ERP support for mismatch-mediated reset: event-related potentials: human scalp potentials. ART predicted correlated sequences of P120-N200-P300 Event Related Potentials during oddball learning. P120 mismatch; N200 arousal/novelty; P300 STM reset. Confirmed in (Banquet and Grossberg 1987)
  • image p213fig05.22 Suppose that a category is activated by a very different exemplar than the one that originally learned to activate it.
    || By prior learning, X1 at F1 is coded at F2, Suppose that X2 incorrectly activates the same F2 code. How to correct the error? The problem occurs no matter how you define an "error"
  • image p213fig05.23 A category, symbol, or other highly compressed representation cannot determine whether an error has occurred.
    || Compression vs error correction. past vs present. Where is the knowledge that an error was made? Not at F2! The compressed code cannot tell the difference! X2 is at F1 when (green right triangle GRT) is at F2 defines the error. There is a mismatch between X1 and X2 at F1. How does the system know this?
  • image p214fig05.24 Learning of a top-down expectation must occur during bottom-up learning in the adaptive filter in order to be able to match the previously associated feature pattern with the one that is currently active.
    || Learning top-down expectations. When the code (green right triangle GRT) for X1 was learned at F2, GRT learned to read-out X1 at F1. [Bottom-Up, Top-Down] learning
  • image p214fig05.25 The sequence of events whereby a novel input pattern can activate a category which, in turn, reads out its learned top-down expectation to be matched against the input pattern. Error correction thus requires the use of a Match Detector that has properties of the Processing Negativity ERP.
    || How is an error corrected. During bottom-up learning, top-down learning must also occur so that the pattern that is read out top-down can be compared with the pattern that is activated by bottom-up inputs. Match detector: Processing Negativity ERP. 1. top-down, 2. conditionable, 3. specific, 4. match
  • image p214fig05.26 When a big enough mismatch occurs, the orienting system is activated and sends a burst of nonspecific arousal to the category level. This Mismatch Detector has properties of the N200 ERP.
    || Mismatch triggers nonspecific arousal. Mismatch at F1 elicits a nonspecific event at F2. Call this event nonspecific arousal. N200 ERP Naatanen etal: 1. bottom-up, 2. unconditionable, 3. nonspecific, 4. mismatch
  • image p215fig05.27 Every event activates both the attentional system and the orienting system. This text explains why.
    || Attentional and Orienting systems. Every event has a cue (specific) and an arousal (nonspecific) function
  • image p215fig05.28 How a mismatch between bottom-up and top-down input patterns can trigger activation of the orienting system A and, with it, a burst of nonspecific arousal to the category level.
    || Mismatch -> inhibition -> arousal -> reset. BU input orienting arousal, BU+TD mismatch arousal and reset. ART Matching Rule: TD mismatch can suppress a part of F1 STM pattern, F2 is reset if degree of match < vigilance
  • image p220fig05.29 Vigilance is a gain parameter on inputs to the orienting system that regulates whether net excitation from bottom-up inputs or inhibition from activated categories will dominate the orienting system. If excitation wins, then a memory search for a better matching will occur. If inhibition wins, then the orienting system will remain quiet, thereby enabling resonance and learning to occur.
    || Vigilance control [resonate and learn, reset and search]. ρ is a sensitivity or gain parameter
  • image p221fig05.30 When a predictive disconfirmation occurs, vigilance increases enough to drive a search for a more predictive category. If vigilance increases just enough to exceed the analog match between features that survive top-down matching and the entire bottom-up input pattern, then minimax learning occurs. In this case, the minimum amount of category generalization is given up to correct the predictive error.
    || Match tracking realizes minimax learning principle. Given a predictive error, vigilance increases just enough to trigger search and thus sacrifices the minimum generalization to correct the error ... and enables expert knowledge to be incrementally learned. predictive error -> vigilance increases just enough -> minimax learning
  • image p221fig05.31 A system like Fuzzy ARTMAP can learn to associate learned categories in one ART network with learned categories in a second ART network. Because both bottom-up and top-down interactions occur in both networks, a bottom-up input pattern to the first ART network can learn to generate a top-down output pattern from the second ART network.
    || Fuzzy ARTMAP. Match tracking realizes minimax learning principle: vigilance increases to just above the match ratio of prototype / exemplar, thereby triggering search
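  A sketch of vigilance control and match tracking using Fuzzy-ART conventions (analog match = |I ∧ z| / |I|, with ∧ the component-wise minimum); the epsilon and patterns are illustrative, not from the book:

```python
import numpy as np

def match_ratio(I, z):
    """Analog match between input I and top-down prototype z: |I ^ z| / |I|."""
    return np.minimum(I, z).sum() / I.sum()

def vigilance_test(I, z, rho):
    """Orienting-system check: resonate if match >= vigilance rho, else reset."""
    return match_ratio(I, z) >= rho

I = np.array([1.0, 1.0, 1.0, 0.0])
z = np.array([1.0, 0.0, 1.0, 1.0])
rho = 0.5
print(vigilance_test(I, z, rho))     # True: resonance, learning proceeds

# Match tracking: after a predictive error, raise vigilance just above the
# current match ratio, forcing a search with minimal loss of generalization.
rho = match_ratio(I, z) + 0.01
print(vigilance_test(I, z, rho))     # False: reset, search for a better category
```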
  • image p224fig05.32 Learning the alphabet with two different levels of vigilance. The vigilance in column (b) is higher than in column (a), leading to more concrete categories with less abstract prototypes. See the text for details.
    ||
  • image p225fig05.33 Some early ARTMAP benchmark studies. These successes led to the use of ARTMAP, and many variants that we and other groups have developed, in many large-scale applications in engineering and technology that has not abated even today.
    || see Early ARTMAP benchmark studies
  • image p225fig05.34 ARTMAP was successfully used to learn maps of natural terrains with many advantages over those of mapping projects that used AI expert systems. The advantages are so great that many mapping projects started to use this technology.
    || AI expert system - 1 year: field identification of natural regions; derivation of ad hoc rules for each region by expert geographers; correct 80,000 of 250,000 site labels; 230m (site-level) scale. ARTMAP system - 1 day: rapid, automatic, no natural regions or rules; confidence map; 30m (pixel-level) scale can see roads; equal accuracy at test sites
  • image p226fig05.35 I had shown in 1976 how a competitive learning or self-organizing map model could undergo catastrophic forgetting if the input environment was sufficiently dense and nonstationary, as illustrated by Figure 5.18. Later work with Gail Carpenter showed how, if the ART Matching Rule was shut off, repeating just four input patterns in the correct order could also cause catastrophic forgetting by causing superset recoding, as illustrated in Figure 5.36.
    || Code instability input sequences. D ⊂ C ⊂ A; B ⊂ A; B ∩ C = ∅; |D| < |B| < |C|; where |E| is the number of features in the set E. Any set of input vectors that satisfies the above conditions will lead to unstable coding if the vectors are periodically presented in the order ABCAD and the top-down ART Matching Rule is shut off.
  • image p226fig05.36 Column (a) shows catastrophic forgetting when the ART Matching Rule is not operative. It is due to superset recoding. Column (b) shows how category learning quickly stabilizes when the ART Matching Rule is restored.
    || Stable and unstable learning, superset recoding
  • image p228fig05.37 A macrocircuit of the neurotrophic Spectrally Timed ART, or nSTART, model. I developed nSTART with my PhD student Daniel Franklin. It proposes how adaptively timed learning in the hippocampus, bolstered by Brain Derived Neurotrophic Factor, or BDNF, helps to ensure normal memory consolidation.
    || habituative gates, CS, US, Thalamus (sensory cortex, category learning, conditioned reinforcer learning, adaptively timed learning and BDNF), Amygdala (incentive motivation learning), Hippocampus (BDNF), Prefrontal Cortex (attention), Pontine nuclei, Cerebellum (adaptively timed motor learning)
  • image p230fig05.38 The Synchronous Matching ART, or SMART, model includes spiking neurons in a laminar cortical hierarchy. I developed SMART with my PhD student Massimiliano Versace. By unlumping LAMINART to include spiking neurons, finer details of neurodynamics, such as the existence of faster gamma oscillations during good enough matches, and slower beta oscillations during bad enough mismatches, could be shown as emergent properties of network interactions.
    || Second order thalamus -> specific thalamic nucleus -> Thalamic reticular nucleus -> neocortical laminar circuit [6ll, 6l, 5, 2/3, 1] -> Higher order cortex. Similar for First order thalamus -> First order cortex, with interconnection to Second order, nonspecific thalamic nucleus
  • image p231fig05.39 The SMART hypothesis testing and learning cycle predicts that vigilance increases when a mismatch in subcortical regions like the nonspecific thalamus activates the nucleus basalis of Meynert which, in turn, broadcasts a burst of the neurotransmitter acetylcholine, or ACh, to deeper cortical layers. Due to the way in which LAMINART proposes that cortical matching and mismatching occurs, this ACh burst can increase vigilance and thereby trigger a memory search. See the text for details.
    || [BU input, [, non]specific thalamic nucleus, thalamic reticular nucleus, neocortical laminar circuit] cart [Arousal, Reset, Search, Vigilance]
  • image p232fig05.40 Computer simulation of how the SMART model generates (a) gamma oscillations if a good enough match occurs, or (c) beta oscillations if a bad enough match occurs. See the text for details.
    || Brain oscillations during match/mismatch, data, simulation. (a) TD corticothalamic feedback increases synchrony (Sillito etal 1994) (b) Match increases γ oscillations (c) Mismatch increases θ,β oscillations
  • image p232fig05.41 (a)-(c). The sequence of interlaminar events that SMART predicts during a mismatch reset. (d) Some of the compatible neurophysiological data.
    || Mismatch causes layer 5 dendritic spikes that trigger reset. (a) Arousal causes an increase in nonspecific thalamic nuclei firing rate and layer 5 dendritic and later somatic spikes (Larkum and Zhu 2002, Williams and Stuart 1999) (b) Layer 5 spikes reach layer 4 via layer 6i and inhibitory neurons (Lund and Boothe 1975, Gilbert and Wiesel 1979) (c) Habituative neurotransmitters in layer 6i shift the balance of active cells in layer 4 (Grossberg 1972, 1976) (d) Stimulation of the apical dendrites of layer 5 cells by nonspecific thalamus fires layer 5 (Larkum and Zhu 2002)
  • image p233fig05.42 Mismatch-induced beta oscillations have been reported in at least three parts of the brain: V1, V4, and hippocampus. Although there may be other reasons for beta oscillations in the brain, those that are caused by a mismatch should be studied in concert with the gamma oscillations that occur during a good enough match. See the text for details.
    || Is there evidence for the [gamma, beta] prediction? Yes, in at least three parts of the brain, (Buffalo EA, Fries P, Landman R, Buschman TJ, Desimone R 2011, PNAS 108, 11262-11267) Does this difference in average oscillation frequencies in the superficial and deep layers reflect layer 4 reset? Superficial recording γ (gamma), Deep recording β (beta) (Berke etal 2008, hippocampus; Buschman and Miller 2009, FEF)
  • image p236fig05.43 The activation of the nucleus basalis of Meynert, and its subsequent release of ACh into deeper layers of neocortex, notably layer 5, is assumed to increase vigilance by reducing afterhyperpolarization (AHP) currents.
    || Vigilance control: mismatch-mediated acetylcholine release (Grossberg and Versace 2008). Acetylcholine (ACh) regulation by nonspecific thalamic nuclei via nucleus basalis of Meynert reduces AHP in layer 5 and causes a mismatch/reset thereby increasing vigilance. HIGH vigilance ~ sharp code, LOW vigilance ~ coarse code
  • image p240fig05.44 When an algebraic exemplar model is realized using only local computations, it starts looking like an ART prototype model.
    || How does the model know which exemplars are in category A? BU-TD learning. How does a NOVEL test item access category A?
  • image p241fig05.45 The 5-4 category structure is one example of how an ART network learns the same kinds of categories as human learners. See the text for details.
    || 5-4 Category structure. A1-A5: closer to the (1 1 1 1) prototype; B1-B4: closer to the (0 0 0 0) prototype
  • image p242fig05.46 Computer simulations of how two variants of Distributed ARTMAP incrementally learn the 5-4 category structure. See the text for details.
    || Distributed ARTMAP with [self-supervised learning, post-training LTM noise]
  • image p245fig05.47 How long-range excitatory connections and short-range disynaptic inhibitory connections realize the bipole grouping law.
    || stimulus -> boundary representation -> layer 2/3
  • image p246fig05.48 Microcircuits of the LAMINART model that I developed with Rajeev Raizada. See the text for details of how they integrate bottom-up adaptive filtering, horizontal bipole grouping, and top-down attentional matching that satisfies the ART Matching Rule.
    ||
  • image p248fig05.49 This circuit of the LAMINART model helps to explain properties of Up and Down states during slow wave sleep, and how disturbances in ACh dynamics can disrupt them.
    ||
  • image p252fig06.01 A surface-shroud resonance begins to form when the surface representations of objects bid for spatial attention. In addition to these topographic excitatory inputs, there is long-range inhibition of the spatial attention cells that determines which inputs will attract spatial attention.
    || Bottom-up spatial attention competition. [more, less] luminous perceptual surfaces -> competition -> spatial attention
  • image p253fig06.02 After bottom-up surface inputs activate spatial attentional cells, they send top-down topographic excitatory signals back to the surface representations. This recurrent shunting on-center off-surround network contrast enhances larger attentional activities while approximately normalizing the total spatial attentional activity. A surface-shroud resonance hereby forms that selects an attentional shroud, enhances the perceived contrast of the attended surface (light blue region), and maintains spatial attention on it.
    || Surface-shroud resonance. perceptual surfaces -> competition -> spatial attention. (Carrasco, Penpeci-Talgar, and Eckstein 2000, Reynolds and Desimone 2003)
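  The contrast-enhance-while-normalizing dynamics just described can be caricatured in discrete time: shunting normalization keeps total activity bounded while a faster-than-linear signal f(x) = x^2 amplifies whichever surface input is largest until it wins. A caricature only, not the published surface-shroud equations:

```python
import numpy as np

def choose_shroud(I, steps=10):
    """Discrete-time caricature of a recurrent shunting competitive field
    with faster-than-linear feedback: normalize total activity, then let
    squaring contrast-enhance the largest input into a winning shroud."""
    x = I / I.sum()                  # shunting normalization of total activity
    for _ in range(steps):
        f = x**2                     # faster-than-linear recurrent signal
        x = f / f.sum()              # re-normalize: relative max grows each step
    return x

surfaces = np.array([0.50, 0.45, 0.30])   # competing surface inputs (toy values)
print(choose_shroud(surfaces))            # -> approx [1, 0, 0]: one shroud wins
```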
  • image p254fig06.03 These interactions of the ARTSCAN Search model enable it to learn to recognize and name invariant object categories. Interactions between spatial attention in the Where cortical stream, via surface-shroud resonances, and object attention in the What cortical stream, that obeys the ART Matching Rule, coordinate these learning, recognition, and naming processes.
    || Retinal image -> On & OFF cell contrast normalization (retina/LGN) -> polarity [, in]sensitive contrast enhancement (V1) -> object [boundary (V2), surface (V2/V4)] -> surface contour (V2); What stream categories: volition control (BG). object boundary (V2) <-> view (ITp) <-> view integrator (ITp) <-> object (ITa) <-> [object-value (ORB), value (Amyg)] <-> name (PFC)
  • image p255fig06.04 The ARTSCAN Search model can also search for a desired target object in a scene, thereby clarifying how our brains solve the Where's Waldo problem.
  • image p257fig06.05 A curve tracing task with monkeys was used by Roelfsema, Lamme, and Spekreijse in 1998 to demonstrate how spatial attention can flow along object boundaries. See the text for details.
    || Attention flows along curves: Roelfsema etal 1998: Macaque V1. fixation (300ms) -> stimulus (600ms RF - target curve, distractor) -> saccade. Crossed-curve condition: attention flows across junction between smoothly connected curve segments, Gestalt good continuation
  • image p258fig06.06 Neurophysiological data and simulation of how attention can flow along a curve. See the text for details.
    || Simulation of Roelfsema etal 1998, data & simulation. Attention directed only to far end of curve. Propagates along active layer 2/3 grouping to distal neurons.
  • image p258fig06.07 A top-down spotlight of attention can also be converted into a shroud. This process begins when the spotlight triggers surface filling-in within a region. Figure 6.8 shows how it is completed.
    || Reconciling spotlights and shrouds: top-down attentional spotlight becomes a shroud. spotlight of attention, surface filling-in
  • image p259fig06.08 The distributed ARTSCAN, or dARTSCAN, model includes spatial attention in both PPC and PFC, and both fast-acting attention, triggered by transient cells in Where cortical areas such as MT, and slower-acting surface-shroud resonances in What cortical areas such as V4 and PPC. See the text for details.
    || dARTSCAN spatial attention hierarchy, Fast (Where stream) Slow (What stream) (Foley, Grossberg, and Mingolla 2012). [transient cells (MT) ->, object surfaces (V4) <->] [object shrouds (PPC) <-> spatial shrouds (PPC/PFC)]
  • image p260fig06.09 Crowding in the periphery of the eye can be avoided by expanding the size and spacing of the letters to match the cortical magnification factor.
    || Crowding: visible objects and confused recognition. Accurate target recognition requires increased flanker spacing at higher eccentricity
  • image p260fig06.10 The cortical magnification factor transforms (A) Cartesian coordinates in the retina into (B) log polar coordinates in visual cortical area V1.
    ||
  • image p261fig06.11 If the sizes and distances between the letters stay the same as they are received by more peripheral parts of the retina, then all three letters may be covered by a single shroud, thereby preventing their individual perception and recognition.
    || Crowding: visible objects and confused recognition. log compression and center-surround processing cause... input same eccentricity, surface, object shroud, crowding threshold. object shrouds merge!
  • image p261fig06.12 Pop-out of the L among Ts.
  • image p265fig06.13 The basal ganglia gate perceptual, cognitive, emotional, and more processes through parallel loops.
    || [motor, oculomotor, dorsolateral, ventral-orbital, anterior cingulate] vs. [Thalamus, pallidum-subs, nigra, Striatum, Cortex]
  • image p267fig06.14 Feedback from object surfaces to object boundaries uses surface contours. This feedback assures complementary consistency and enables figure-ground separation. A corollary discharge of the surface contours can be used to compute salient object feature positions.
    || Perceptual consistency and figure-ground separation.
  • image p268fig06.15 The largest salient feature signal is chosen to determine the next target position of a saccadic eye movement. This target position signal self-inhibits to enable the next most salient position to be foveated. In this way, multiple feature combinations of the object can be foveated and categorized. This process clarifies how the eyes can explore even novel objects before moving to other objects. These eye movements enable invariant categories to be learned. Each newly chosen target position is, moreover, an "attention pointer" whereby attention shifts to the newly foveated object position.
    || How are saccades within an object determined? Figure-ground outputs control eye movements via V3A! Support for prediction (Theeuwes, Mathot, and Kingstone 2010), More support: "attention pointers" (Cavanagh etal 2010), Even more support (Backus etal 2001, Caplovitz and Tse 2006, Galletti and Battaglini 1989, Nakamura and Colby 2000)
  • image p270fig06.16 The same target position signal that can command the next saccade also updates a gain field that predictively maintains the attentional shroud in head-centered coordinates, even before the eye movement is complete. This process keeps the shroud invariant under eye movements, so that it can continue to inhibit reset of an emerging invariant category as it is associated with multiple object views, even while the conscious surface representation shifts with each eye movement in retinotopic coordinates. This updating process is often called predictive remapping.
    || Predictive remapping of eye movements! From V3A to LIP. [spatial attention, object attention, figure-ground separation, eye movement remapping, visual search]. (Beauvillain etal 2005, Carlson-Radvansky 1999, Cavanaugh etal 2001, Fecteau & Munoz 2003, Henderson & Hollingworth 2003, Irwin 1991)
  • image p271fig06.17 Persistent activity in IT cells is just what is needed to enable view-invariant object category learning by ARTSCAN to be generalized to [view, position, size]-invariant category learning by positional ARTSCAN, or pARTSCAN. See the text for details.
    || Persistent activity in IT. Physiological data show that persistent activity exists in IT (Fuster and Jervey 1981, Miyashita and Chang 1988, Tomita etal 1999). Adapted from (Tomita etal 1999 Nature)
  • image p272fig06.18 The pARTSCAN model can learn [view, position, size]-invariant categories by adding view category integrator cells that have the properties of persistent neurons in IT. These integrator cells get reset with the invariant object category, not the view category.
    || pARTSCAN: positionally-invariant object learning. (Cao, Grossberg, Markowitz 2011). IT cells with persistent activities are modeled by view category integrators in ITp. View-specific category cells are RESET as the eyes move within the object. View category integrator cells are NOT RESET when the view-specific category is reset. They are RESET along with invariant object category cells when a spatial attention shift occurs.
  • image p272fig06.19 The various parts of this figure explain why persistent activity is needed in order to learn positionally-invariant object categories, and how this fails when persistent activity is not available. See the text for details.
    ||
  • image p273fig06.20 pARTSCAN can simulate the IT cell recoding that Li and DiCarlo reported in their swapping experiments because the swapping procedure happens without causing a parietal reset burst to occur. Thus the originally activated invariant category remains activated and can get associated with the swapped object features.
    || Simulation of Li and DiCarlo swapping data. data (Li and DiCarlo 2008), model (Cao, Grossberg, Markowitz 2011). normalized response vs. exposure (swaps and/or hours)
  • image p274fig06.21 pARTSCAN can also simulate the trade-off in IT cell responses between position invariance and selectivity that was reported by Zoccolan etal 2007. This trade-off limits the amount of position invariance that can be learned by a cortical area like V1 that is constrained by the cortical magnification factor.
    || Trade-off in IT cell response properties. Inferotemporal cortex cells with greater position invariance respond less selectively to natural objects. invariance-tolerance, selectivity-sparseness. data (Zoccolan etal 2007) model (Grossberg, Markowitz, Cao 2011). position tolerance (PT, degrees) vs sparseness (S)
  • image p274fig06.22 pARTSCAN can simulate how IT cortex processes image morphs, when it learns with high vigilance. See the text for details.
    || Akrami etal simulation: a case of high vigilance. tested on morphs between image pairs
  • image p275fig06.23 Data from (Akrami etal 2009) and our simulation of it. See the text for details.
    || IT responses to image morphs. data vs model
  • image p275fig06.24 Left and right eye stereogram inputs are constructed to generate percepts of objects in depth. These percepts include the features of the objects, not only their relative depths, a property that is not realized in some other models of stereopsis. See the text for details.
    || Stereogram surface percepts: surface lightnesses are segregated in depth (Fang, Grossberg 2009). Contrast with algorithms that just compute disparity matches and let computer code build the surface, eg (Marr, Poggio 1974)
  • image p276fig06.25 In addition to the gain field that predictively maintains a shroud in head-centered coordinates during saccades, there are gain fields that predictively maintain binocular boundaries in head-centered coordinates so that they can maintain binocular fusion during saccades and control the filling-in of surfaces in retinotopic coordinates.
    || Surface-shroud resonance.
  • image p277fig06.26 Gain fields also enable predictive remapping that maintains binocular boundary fusion as the eyes move between objects. See the text for details.
    || Predictive remapping maintains binocular boundary fusion even as eyes move between objects. retinotopic boundary -> invariant boundary (binocular)
  • image p278fig06.27 A surface-shroud resonance through the Where stream enables us to consciously see an object while a feature-category resonance into the What stream enables us to recognize it. Both kinds of resonances can synchronize via visual cortex so that we can know what an object is when we see it.
    || What kinds of resonances support knowing vs seeing? What stream [knowing, feature-prototype resonance], Where stream [seeing, surface-shroud resonance]
  • image p278fig06.28 If the feature-category resonances cannot form, say due to a lesion in IT, then a surface-shroud resonance can still support conscious seeing of an attended object, and looking at or reaching for it, even if the individual doing so knows nothing about the object, as occurs during visual agnosia. The surface-shroud resonance supports both spatial attention and releases commands that embody the intention to move towards the attended object.
    || What kinds of resonances support knowing vs seeing? visual agnosia: reaching without knowing Patient DF (Goodale etal 1991). Attention and intention both parietal cortical functions (Andersen, Essick, Siegel 1985; Gnadt, Andersen 1988; Snyder, Batista, Andersen 1997, 1998)
  • image p283fig07.01 The usual boundary processing stages of [simple, complex, hypercomplex, bipole] cells enable our brains to correct uncontrolled persistence of previously excited cells just by adding habituative transmitter gates, or MTM traces, at appropriate places in the network.
    || Boundary processing with habituative gates. spatial competition with habituative gates, orientational competition: gated dipole, bipole grouping
  • image p284fig07.02 Psychophysical data (top row) and simulation (bottom row) of how persistence decreases with flash illuminance and duration.
    || Persistence data and simulations. (Francis, Grossberg, Mingolla 1994 Vision Research, 34, 1089-1104). Persistence decreases with flash illuminance and duration (Bowen, Pola, Matin 1974; Breitmeyer 1984; Coltheart 1980). Higher luminance or longer duration habituates the gated dipole ON channel more. Causes larger and faster rebound in the OFF channel to shut persisting ON activity off.
  • image p285fig07.03 Persistence decreases with flash illuminance and duration due to the way in which habituative transmitters regulate the strength of the rebound in response to offset of a stimulating input, and how this rebound inhibits previously activated bipole cells.
    || Persistence data and simulations (Francis, Grossberg, Mingolla 1994 Vision Research, 34, 1089-1104). Persistence decreases with flash illuminance and duration. Horizontal input excites a horizontal bipole cell, which supports persistence. Offset of the horizontal input causes a rebound of activity in the vertical pathway, which inhibits the horizontal bipole cell, thereby terminating persistence.
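A minimal Python sketch (parameter values assumed) of the habituative gated dipole behind these persistence data: each channel's signal S is gated by a transmitter z obeying dz/dt = A(1-z) - B*S*z, so a stronger or longer flash depletes more ON-channel transmitter, and at offset the equally aroused OFF channel transiently wins. That antagonistic rebound is what resets the persisting boundary.

```python
# Gated dipole rebound sketch (assumed parameter values).
dt, A, B, tonic = 0.001, 0.5, 2.0, 0.2

def transmitters_after_flash(J, steps):
    """Habituate ON (tonic + flash J) and OFF (tonic only) transmitters."""
    z_on, z_off = 1.0, 1.0
    for _ in range(steps):
        z_on  += dt * (A*(1 - z_on)  - B*(tonic + J)*z_on)
        z_off += dt * (A*(1 - z_off) - B*tonic*z_off)
    return z_on, z_off

for J in (0.5, 1.0):                       # weaker vs stronger flash
    z_on, z_off = transmitters_after_flash(J, 5000)
    rebound = tonic*z_off - tonic*z_on     # OFF minus ON signal just after offset
    print(f"flash J={J}: rebound = {rebound:.4f}")
# stronger (or longer) flash -> deeper ON depletion -> bigger OFF rebound
# -> faster boundary reset -> less persistence
```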
  • image p286fig07.04 Illusory contours persist longer than real contours because real contours have more inducers whose rebound at contour offset can cause faster boundary reset. Illusory contours also take longer to form than real contours, which explains the increasing portion of the curve.
    || Persistence data and simulations (Meyer, Ming 1988; Reynolds 1981). Increasing portion of curve is due to formation time of the illusory contour. Longer persistence is due to fewer bottom-up inducers of an illusory contour that has the same length as a real contour: only illuminance-derived edges generate reset signals. When bottom-up inducers are inhibited by OFF cell rebounds, their offset gradually propagates to the center of the illusory contour.
  • image p286fig07.05 This figure shows the propagation through time of illusory contour offset from the rebounded cells that got direct inputs to the center of the contour.
    || Persistence data and simulations. Illusory contours persist longer than real contours (Meyer, Ming 1988; Reynolds 1981). When bottom-up inducers are inhibited by OFF cell rebounds, their offset gradually propagates to the center of the illusory contour.
  • image p287fig07.06 The relative durations of persistence that occur due to an adaptation stimulus of the same or orthogonal orientation follow from the properties of the habituative gated dipoles that are embedded in the boundary completion system.
    || Persistence data and simulations. Change in persistence depends on whether adaptation stimulus has same or orthogonal orientation as test grating (Meyer, Lawson, Cohen 1975). If adaptation stimulus and test stimulus have the same orientation, they cause cumulative habituation, which causes a stronger reset signal, hence less persistence. When they are orthogonal, the competition on the ON channel is less, hence more persistence.
  • image p287fig07.07 Persistence increases with distance between a target and a masking stimulus due to weakening of the spatial competition in the first competitive stage of hypercomplex cells.
    || Persistence data and simulations. Persistence increases with distance between a target and a masking stimulus (Farrell, Pavel, Sperling 1990). There is less spatial competition from the masker to the target when they are more distant, hence the target is more persistent.
  • image p290fig08.01 Motion in a given direction pools all possible contrast-sensitive sources of information that are moving in that direction.
    ||
  • image p291fig08.02 Complex cells can respond to motion in opposite directions and from features with opposite contrast polarities.
    ||
  • image p292fig08.03 The MacKay and waterfall illusion aftereffects dramatically illustrate the different symmetries that occur in the orientational form stream and the directional motion stream.
    || Form and motion aftereffects. different inhibitory symmetries govern orientation and direction. illusions: [Form- MacKay 90°, Motion- waterfall 180°]. stimulus, aftereffect percept
  • image p293fig08.04 Most local motion signals on a moving object (red arrows) may not point in the direction of the object's motion.
  • image p295fig08.05 The perceived direction of an object is derived either from a small subset of feature tracking signals, or by voting among ambiguous signals when feature tracking signals are not available.
    || Aperture problem. Barberpole illusion (Wallach). How do sparse feature tracking signals capture so many ambiguous motion signals to determine the perceived motion direction?
  • image p296fig08.06 In the simplest example of apparent motion, two dots turning on and off out of phase in time generate a compelling percept of continuous motion between them.
    || Simplest long-range motion paradigm. ISI- interstimulus interval, SOA- stimulus onset asynchrony
  • image p296fig08.07 When two flashes turn on and off out of phase with the correct range of interstimulus intervals, and not too far from one another, then either beta motion or phi motion is perceived.
    || Beta and Phi motion percepts. Beta motion: percepts of continuous motion of a well-defined object across empty intervening space. Phi motion: sense of "pure" motion without a concurrent percept of moving object. (Exner 1875) http://www.yorku.ca/eye/balls.htm
  • image p297fig08.08 When a second flash is more intense than the first flash, then apparent motion may occur from the second to the first flash.
    || Delta motion: motions from the second to the first flash. Data: (Kolers 1972; Korte 1915). Simulation: (Grossberg, Rudd 1992). This occurs when the luminance or contrast of the second flash is large compared to that of the first flash. Sustained and transient cells obey shunting dynamics whose averaging rates speed up with output intensity. The first flash to wane is the one that will be the source of the G-wave.
  • image p297fig08.09 Simulation of motion in opposite directions that is perceived when two later flashes occur on either side of the first flash.
    || Split motion. Data: (H.R. Silva 1926), Simulation: (Grossberg, Rudd 1992)
  • image p298fig08.10 Simulation of the motion speed-up that is perceived when flash duration decreases.
    || "The less you see it, the faster it moves". Data: (Giaschi, Anstis 1989), Simulation: (Grossberg, Rudd 1992). ISI = 0, flash duration decreases; SOA = constant, flash duration decreases
  • image p298fig08.11 This formotion percept is a double illusion due to boundary completion in the form stream followed by long-range apparent motion using the completed boundaries in the motion stream.
    || Form-motion interactions. Apparent motion of illusory contours (Ramachandran 1985). Double illusion! Illusory contour is created in form stream V1-V2. Apparent motion of illusory contours occurs in motion stream due to a V2-MT interaction.
  • image p300fig08.12 A single flash activates a Gaussian receptive field across space whose maximum is chosen by a winner-take-all recurrent on-center off-surround network.
    || Gaussian receptive fields are sufficient! (Grossberg, Rudd 1992). Single flash. Suppose that a single flash causes a narrow peak of activity at the position where it occurs. It generates output signals through a Gaussian filter that produces a Gaussian activity profile at the next processing stage. A recurrent on-center off-surround network chooses the maximum activity and suppresses smaller activities. Winner-take-all
  • image p300fig08.13 As a flash waxes and wanes through time, so too do the activities of the cells in its Gaussian receptive field. Because the maximum of each Gaussian occurs at the same position, nothing is perceived to move.
    || Temporal profile of a single flash. Suppose that a single flash quickly turns on to maximum activity, stays there for a short time, and then shuts off. It causes an increase in activity, followed by an exponential decay of activity. The corresponding Gaussian profile waxes and wanes through time. Since the peak position of the Gaussian does not change through time, nothing moves.
  • image p300fig08.14 Visual inertia depicts how the effects of a flash decay after the flash shuts off.
    || Inertia (%) vs ISI (msec)
  • image p301fig08.15 If two flashes occur in succession, then the cell activation that is caused by the first one can be waning while the activation due to the second one is waxing.
    || Temporal profile of two flashes. If two flashes occur in succession, the waning of the activity due to the first flash may overlap with the waxing of the activity due to the second flash.
  • image p301fig08.16 The sum of the waning Gaussian activity profile due to the first flash and the waxing Gaussian activity profile due to the second flash has a maximum that moves like a travelling wave from the first to the second flash.
    || Travelling wave (G-wave): long-range motion. If the Gaussian activity profiles of two flashes overlap sufficiently in space and time, then the sum of the Gaussian produced by the waning of the first flash and the Gaussian produced by the waxing of the second flash can produce a single-peaked travelling wave from the position of the first flash to that of the second flash. The wave is then processed through a WTA choice network (Winner Take All). The resulting continuous motion percept is both long-range and sharp.
  • image p302fig08.17 An important constraint on whether long-range apparent motion occurs is whether the Gaussian kernel is broad enough to span the distance between successive flashes.
    || Motion speed-up with increasing distance: For a fixed ISI, how does perceived velocity increase with distance between the flashes? Gaussian filter : Gp = exp{ -(j-i)^2 / (2*K^2) }. The largest separation, L_crit, for which sufficient spatial overlap between two Gaussians centered at locations i and j will exist to support a travelling wave of summed peak activity is : L_crit = 2*K
  • image p302fig08.18 This theorem shows how far away (L), given a fixed Gaussian width, two flashes can be to generate a wave of apparent motion between them.
    || G-wave properties (Grossberg 1977). Let flashes occur at positions i=0 and i=L. Suppose that d[dt: x0] = -A*x0 + J0; d[dt: xL] = -A*xL + JL; Define G(w,t) ...; Theorem 1 max_w G(w,t) moves continuously through time from w=0 to w=L if and only if L <= 2*K.
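Both theorems can be checked numerically. A small Python sketch (A, K, and L are assumed values): the first flash's activity decays as exp(-A*t) after its offset, the second's grows as 1-exp(-A*t) after its onset, each feeds a Gaussian filter of width K, and the argmax of the summed profile sweeps continuously from w=0 to w=L exactly when L <= 2*K.

```python
import numpy as np

A, K, L = 1.0, 1.0, 1.8                 # try L = 2.5 (> 2K) to see the wave break up
w = np.linspace(-1.0, L + 1.0, 2001)    # sampled positions
G = lambda c: np.exp(-(w - c)**2 / (2*K**2))   # Gaussian filter centered at c

for t in np.linspace(0.1, 3.0, 7):
    x0 = np.exp(-A*t)                   # first flash waning after its offset at t=0
    xL = 1.0 - np.exp(-A*t)             # second flash waxing from its onset at t=0
    peak = w[np.argmax(x0*G(0.0) + xL*G(L))]
    print(f"t={t:.2f}  peak at w = {peak:+.3f}")
# The peak moves monotonically from 0 to L when L <= 2K.  It crosses w = L/2
# when x0 = xL, i.e. at t = ln(2)/A, independent of both L and K: Theorem 2's
# equal half-time property.
```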
  • image p303fig08.19 The dashed red line divides combinations of flash distance L and Gaussian width K into two regions of no apparent motion (above the line) and apparent motion (below the line).
    || No motion vs motion at multiple scales.
  • image p303fig08.20 The G-wave speeds up with the distance between flashes at a fixed delay, and has a consistent motion across multiple spatial scales.
    || G-wave properties (Grossberg 1977). Theorem 2 (Equal half-time property) concerns the time at which the motion signal reaches position w=L/2. Apparent motion speed-up with distance: this half-time is independent of the distance L between the two flashes. Consistent motion across scales: the half-time is independent of the scale size K. Method of proof: elementary algebra and calculus (Grossberg, Rudd 1989 appendix)
  • image p304fig08.21 A computer simulation of the equal half-time property whereby the apparent motions within different scales that respond to the same flashes all reach the half-way point in the motion trajectory at the same time.
    || Equal half-time property: how multiple scales cooperate to generate motion percept. Travelling waves from Gaussian filters of different sizes bridge the same distance in comparable time. The time needed to bridge half the distance between flashes is the same.
  • image p304fig08.22 Data (top image) and simulation (bottom image) of Korte's laws.
  • image p305fig08.23 Despite its simplicity, the Ternus display can induce one of four possible percepts, depending on the ISI.
    || Ternus motion. ISI [small- stationary, intermediate- element, larger- group] motion http://en.wikipedia.org/wiki/Ternus_illusion
  • image p305fig08.24 When each stimulus has an opposite contrast relative to the background, element motion is eliminated and replaced by group motion at intermediate values of the ISI.
    || Reverse-contrast Ternus motion. ISI [small- stationary, intermediate- group (not element!), larger- group] motion.
  • image p306fig08.25 The Motion BCS model can explain and simulate all the long-range apparent motion percepts that this chapter describes.
    || Motion BCS model (Grossberg, Rudd 1989, 1992) Level 1: discount illuminant; Level 2: short-range filter, pool sustained simple cell inputs with like-oriented receptive fields aligned in a given direction. Sensitive to direction-of-contrast; Level 3: Transient cells with unoriented receptive fields. Sensitive to direction-of-change
  • image p306fig08.26 The 3D FORMOTION model combines mechanisms for determining the relative depth of a visual form with mechanisms for both short-range and long-range motion filtering and grouping. A formotion interaction from V2 to MT is predicted to enable the motion stream to track objects moving in depth.
    || 3D Formotion model (Chey etal 1997; Grossberg etal 2001; Berzhanskaya etal 2007). Form [LGN contours -> simple cells orientation selectivity -> complex cells (contrast pooling, orientation selectivity, V1) -> hypercomplex cells (end-stopping, spatial sharpening) <-> bipole cells (grouping, cross-orientation competition) -> depth-separated boundaries (V2)], Motion: [LGN contours -> transient cells (directional stability, V1) -> short-range motion filter -> spatial competition -> long-range motion filter and boundary selection in depth (MT) <-> directional grouping, attentional priming (MST)]
  • image p307fig08.27 The distribution of transients through time at onsets and offsets of Ternus display flashes helps to determine whether element motion or group motion will be perceived.
    || Ternus motion. Element motion: zero or weak transients at positions 2 and 3; Group motion: strong transients at positions 2 and 3. Conditions that favor visual persistence and thus perceived stationarity of element (2,3) favor element motion (Braddick, Adlard 1978; Breitmeyer, Ritter 1986; Pantle, Petersik 1980)
  • image p308fig08.28 The Gaussian distributions of activity that arise from the three simultaneous flashes in a Ternus display add to generate a maximum value at their midpoint. The motion of this group gives rise to group motion.
    || Ternus group motion simulation. If L < 2*K, Gaussian filter of three flashes forms one global maximum.
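A quick numerical check of this claim (K and L are assumed values): with three flashes at 0, L, and 2L, the Gaussian-filtered sum has a single global maximum at the midpoint whenever L < 2K, so the group, not the individual elements, is what moves.

```python
import numpy as np

K, L = 1.0, 1.5                                   # spacing L < 2K
w = np.linspace(-2.0, 2*L + 2.0, 4001)
total = sum(np.exp(-(w - c)**2 / (2*K**2)) for c in (0.0, L, 2*L))
print(f"single global max at w = {w[np.argmax(total)]:.2f} (midpoint = {L})")
```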
  • image p310fig08.29 When the individual component motions in (A) and (B) combine into a plaid motion (C), both their perceived direction and speed changes.
    ||
  • image p311fig08.30 The data of (Castet etal 1993) in the left image was simulated in the right image by the 3D FORMOTION model that I developed with my PhD student Jonathan Chey. These data provide insight into how feature tracking signals propagate from the ends of a line to its interior, where they capture consistent motion directional signals and inhibit inconsistent ones.
    || Solving the aperture problem. A key design problem: How do amplified feature tracking signals propagate within depth to select the correct motion directions at ambiguous positions? This propagation from feature tracking signals to the line interior determines perceived speed in Castet etal data, which is why speed depends on line tilt and length. Data: (Castet etal 1993), Simulation: (Chey etal 1997)
  • image p311fig08.31 Processing stages of the Motion BCS convert locally ambiguous motion signals from transient cells into a globally coherent percept of object motion, thereby solving the aperture problem.
    || Why are so many motion processing stages needed? change sensitive receptors -> directional transient cells -> directional short-range filter -> spatial and directional competition -> directional long-range filter (MT) <-> Directional grouping network
  • image p312fig08.32 Schematic of motion filtering circuits.
    || Level 1: Change sensitive units -> Level 2: transient cells -> Level 3: short-range spatial filters -> Level 4: intra-scale competition -> Level 5: inter-scale competition
  • image p312fig08.33 Processing motion signals by a population of speed-tuned neurons.
    ||
  • image p314fig08.34 The VISTARS model for visually-based spatial navigation. It uses the Motion BCS as a front end and feeds its output signals into two computationally complementary cortical processing streams for computing optic flow and target tracking information.
    || VISTARS navigation model (Browning, Grossberg, Mingolla 2009). Use FORMOTION model as front end for higher level navigational circuits: input natural image sequences -> estimate heading (MT+)-MSTd -> additive processing -> estimate object position (MT-)-MSTv direction and speed subtractive processing -> Complementary Computing. [optic flow navigation, object tracking]
  • image p315fig08.35 The output signals from the directional grouping network obey the ART Matching Rule. They thereby select consistent motion directional signals while suppressing inconsistent ones, and do not distort the speed estimates that the spared cells code. The aperture problem is hereby solved by the same mechanism that dynamically stabilizes the learning of directional grouping cells.
    || How to select correct direction and preserve speed estimates? Prediction: Feedback from MSTv to MT- obeys ART Matching Rule; Top-down, modulatory on-center, off-surround network (Grossberg 1976, 1980; Carpenter, Grossberg 1987, 1991); Explains how directional grouping network can stably develop and how top-down directional attention can work. (Cavanagh 1992; Goner etal 1986; Sekuler, Ball 1977; Stelmach etal 1994). Directional grouping network (MSTv) <-> Directional long-range filter (MT). Modulatory on-center selects chosen direction and preserves speed. Off-surround inhibits incompatible directions.
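A minimal algebraic sketch (the gains g and h are assumed, not from the book) of the ART Matching Rule circuit this prediction invokes: the top-down expectation E supplies a modulatory on-center, which can enhance but never create activity, plus a driving off-surround that suppresses features absent from the expectation.

```python
import numpy as np

def art_match(x, E, g=2.0, h=0.5):
    """Bottom-up pattern x gated by top-down expectation E.
    On-center is modulatory: output is 0 wherever x is 0, even if E > 0.
    Divisive off-surround suppresses features the expectation omits."""
    return x * (1.0 + g*E) / (1.0 + h*(E.sum() - E))

x = np.array([0.8, 0.6, 0.0, 0.4])   # bottom-up feature (or direction) pattern
E = np.array([1.0, 1.0, 1.0, 0.0])   # top-down prototype / directional prime
print(np.round(art_match(x, E), 3))
# -> [1.2, 0.9, 0.0, 0.16]: matched features enhanced; feature 3 is primed
# but does not fire (no bottom-up support); mismatched feature 4 suppressed.
```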
  • image p316fig08.36 How the directional grouping network, notably properties of the ART Matching Rule, enables a small set of amplified feature tracking signals at the ends of a line to select consistent directions in the line interior, while suppressing inconsistent directions.
    || Motion capture by directional grouping feedback. Directional grouping network (MSTv) <-> Directional long-range filter (MT). It takes longer to capture ambiguous motion signals in the line interior as the length of the line increases cf (Castet etal 1993)
  • image p317fig08.37 Processing stages that transform the transient cell inputs in response to a tilted moving line into a global percept of the object's motion direction.
  • image p319fig08.38 The neurophysiological data from MT (left image) confirms the prediction embodied in the simulation of MT (right image) concerning the fact that it takes a long time for MT to compute an object's real direction of motion.
  • image p320fig08.39 Simulation of the barberpole illusion direction field at two times. Note that the initial multiple directions due to the feature tracking signals at the contiguous vertical and horizontal sides of the barberpole (upper image) get supplanted by the horizontal direction of the two horizontal sides (lower image).
    || Barberpole illusion (one line) simulation
  • image p321fig08.40 Visible occluders capture the boundaries that they share with moving edges. Invisible occluders do not. Consequently, the two types of motions are influenced by different combinations of feature tracking signals.
    || Motion grouping across occluders (J. Lorenceau, D. Alais 2001). Rotating contours observed through apertures. Determine direction of a circular motion. [, in]visible occluders http://persci.mit.edu/demos/square/square.html
  • image p322fig08.41 A percept of motion transparency can be achieved by using motion grouping feedback that embodies the "asymmetry between near and far" along with the usual opponent competition between opposite motion directions.
    || Motion transparency. near: big scale; far: small scale MSTv, "Asymmetry between near and far" Inhibition from near (large scales) to far (small scales) at each position
  • image p323fig08.42 The chopsticks illusion not only depends upon how feature tracking signals are altered by visible and invisible occluders, but also upon how the form system disambiguates the ambiguous region where the two chopsticks intersect and uses figure-ground mechanisms to separate them in depth.
    || Chopsticks: motion separation in depth (Anstis 1990). [, in]visible occluders [display, percept]
  • image p324fig08.43 Attention can flow along the boundaries of one chopstick and enable it to win the orientation competition where the two chopsticks cross, thereby enabling bipole grouping and figure-ground mechanisms to separate them in depth within the form cortical stream.
    || The ambiguous X-junction. motion system. Attention propagates along chopstick and enhances cell activations in one branch of a chopstick. MT-MST directional motion grouping helps to bridge the ambiguous position.
  • image p325fig08.44 Attentional feedback from MST-to-MT-to-V2 can strengthen one branch of a chopstick (left image). Then bipole cell activations that are strengthened by this feedback can complete that chopstick's boundary.
  • image p325fig08.45 The feedback loop between MT/MST-to-V1-to-V2-to-MT/MST enables a percept of two chopsticks sliding one in front of the other while moving in opposite directions.
    || Closing formotion feedback loop. [formotion interaction, motion grouping] V1 -> V2 -> (MT <-> MST) -> V1
  • image p326fig08.46 How do we determine the relative motion direction of a part of a scene when it moves with a larger part that determines an object reference frame?
    || How do we perceive relative motion of object parts?
  • image p327fig08.47 Two classical examples of part motion in a moving reference frame illustrate the general situation where complex objects move while their multiplie parts may move in different directions relative to the direction of the reference frame.
    || Two kinds of percepts and variations (Johansson 1950). Symmetrically moving inducers: each dot moves along a straight path, each part contributes equally to common motion; Duncker wheel (Duncker 1929): one dot moves on a cycloid, the other dot (the "center") moves straight, unequal contribution from parts; If the dot is presented alone: seen as cycloid; if with center: seen as if it were on the rim of a wheel.
  • image p328fig08.48 How vector subtraction from the reference frame motion direction computes the part directions.
    || How vector decomposition can explain them. Common motion subtracted from retinal motion gives part motion: [retinal, common, part] motion
  • image p328fig08.49 A directional peak shift in a directional hypercolumn determines the part directions relative to a moving reference frame.
    || What is the mechanism of vector decomposition? (Grossberg, Leveille, Versace 2011). Prediction: directional peak shift! ...specifically, a peak shift due to Gaussian lateral inhibition. [retinal, part, common, relative] motion. shunting dynamics, self-normalization, contrast gain control
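A Python sketch of the vector decomposition itself (radius and angular speed are assumed values): subtracting the frame's common rightward motion from the rim dot's cycloid velocity leaves pure rotation about the hub, which is the computation the predicted directional peak shift carries out neurally.

```python
import numpy as np

# Duncker wheel decomposition (assumed R, omega).  Rim dot position is the
# cycloid (R*(w*t - sin(w*t)), R*(1 - cos(w*t))); the hub moves straight right.
R, omega = 1.0, 1.0
t = np.linspace(0.1, 2*np.pi, 8)
retinal = np.stack([R*omega*(1 - np.cos(omega*t)),     # d/dt of cycloid x
                    R*omega*np.sin(omega*t)], axis=1)  # d/dt of cycloid y
common  = np.stack([np.full_like(t, R*omega),           # hub (frame) velocity
                    np.zeros_like(t)], axis=1)
part = retinal - common                                  # part motion in the frame
print(np.round(np.hypot(*part.T), 3))
# constant speed R*omega at every time step: circular motion around the rim
```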
  • image p329fig08.50 The common motion direction of the two dots builds upon illusory contours that connect the dots as they move through time. The common motion direction signal can flow along these boundaries.
    || How is common motion direction computed? retinal motion. Bipole grouping in the form stream creates illusory contours between the dots. V2-MT formotion interaction injects the completed boundaries into the motion stream where they capture consistent motion signals. Motion of illusory contours is computed in the motion stream: cf. Ramachandran
  • image p329fig08.51 Large and small scale boundaries differentially form illusory contours between the dots and boundaries that surround each of them respectively. These boundaries capture the motion signals that they will support via V2-to-MT formotion interaction. The MST-to-MT directional peak shift has not yet occurred.
    || Large scale: near. Can bridge gap between dots to form illusory contours. Spatial competition inhibits inner dot boundaries.; Small scale: far. Forms boundaries around dots.
  • image p330fig08.52 Direction fields of the object frame (left column) and of the two dot "parts" (right column) show the correct motion directions after the peak shift top-down expectation acts.
    || Simulation of motion vector decomposition. [Larger scale (nearer depth), Small scale (farther depth)] vs [Down, Up]
  • image p330fig08.53 Simulation of the various directional signals of the left dot through time. Note the amplification of the downward directional signal due to the combined action of the short-range and long-range directional signals.
    ||
  • image p331fig08.54 The simulated part directions of the rotating dot through time after the translational motion of the frame does its work via the top-down peak shift mechanism.
    || Cycloid. Motion directions of a single dot moving slowly along a cycloid curve through time.
  • image p331fig08.55 The rightward motion of the dot that determines the frame propagates along the illusory contour between the dots and thereby dominates the motion directions along the rim as well, thereby setting the stage for the peak shift mechanism.
    || Duncker Wheel: large scale. [cycloid, center] velocity -> rightward common velocity. Stable rightward motion at the center captures motion at the rim.
  • image p332fig08.56 Simulation of the Duncker Wheel motion through time. See the text for details.
    || Duncker Wheel: small scale. Temporal procession of activity in eight directions. Wheel motion as seen when directions are collapsed.
  • image p332fig08.57 The MODE model uses the Motion BCS as its front end, followed by a saccadic target selection circuit in the model LIP region that converts motion directions into movement directions. These movement choices are also under basal ganglia (BG) control. More will be explained about the BG in Chapters 13 and 15.
    || MODE (MOtion DEcision) model (Grossberg, Pilly 2008, Vision Research). Change sensitive receptors -> directional transient cells -> directional short-range filter -> spatial and directional competition -> directional long-range filter (MT) <-> directional grouping network (MSTv) -> saccadic target selection <-> gating mechanism (BG). Representation of problem that solves the aperture problem (change sensitive receptors (CSR) -> directional grouping network (DGN, MSTv)). Gated movement choice (saccadic target selection & gating mechanism)
  • image p333fig08.58 Neurophysiological data (left image) and simulation (right image) of LIP data during correct trials on the RT task. See the text for details.
    || LIP responses during RT task correct trials (Roitman, Shadlen 2002). More coherence in favored direction causes faster cell activation. More coherence in opposite direction causes faster cell inhibition. Coherence stops playing a role in the final stages of LIP firing.
  • image p334fig08.59 Neurophysiological data (left column) and simulations (right column) of LIP responses for the FD task during both [correct, error] trials. See the text for details.
    || LIP responses for the FD task during both [correct, error] trials (Shadlen, Newsome 2001). LIP encodes the perceptual decision regardless of the true direction of the dots. Predictiveness of LIP responses on error trials decreases with increasing coherence.
  • image p334fig08.60 Behavioral data (left image) and simulation (right image) about accuracy in both the RT and FD tasks. See text for details
    || Behavioral data: % correct vs % coherence (Mazurek etal 2003; Roitman, Shadlen 2002). More coherence in the motion causes more accurate decisions. RT task accuracy at weaker coherence levels is slightly better than FD task accuracy.
  • image p335fig08.61 Behavioral data (left image) and simulation (right image) about speed in correct and error trials of the RT task. See text for details.
    || Behavioral data: speed, correct and error trials (RT task) (Roitman, Shadlen 2002). More coherence in the motion causes faster reaction time.
  • image p335fig08.62 More remarkable simulation fits (right column) to LIP neurophysiology data (left column) about where and when to move the eyes.
    || LIP encodes not only where, but also when, to move the eyes. ...No Bayes (Roitman, Shadlen 2002). Firing rate (sp/s) vs time (ms). Slope of firing rate (sp/s^2) vs % correct.
  • image p338fig09.01 The brain regions that help to use visual information for navigating in the world and tracking objects are highlighted in yellow.
    || How does a moving observer use optic flow to navigate while tracking a moving object? [What ventral, Where dorsal] retina -> many locations -> PFC
  • image p338fig09.02 Heading, or the direction of self-motion (green dot), can be derived from the optic flow (red arrows) as an object, in this case an airplane landing, moves forward.
    || Heading and optic flow (Gibson 1950). Optic flow: scene motion generates a velocity field. Heading: direction of travel- self-motion direction. Heading from optic flow, focus of expansion (Gibson 1950). Humans determine heading accurately to within 1-2 degrees.
  • image p339fig09.03 When an observer moves forward, an expanding optic flow is caused. Eye rotations cause a translating flow. When these flows are combined, a spiral flow is caused. How do our brains compensate for eye rotations to compute the heading of the expanding optic flow?
    || Optic flow during navigation (adapted from Warren, Hannon 1990) [observer, retinal flow]: [linear movement, expansion], [eye rotation, translation], [combined motion, spiral]
  • image p339fig09.04 This figure emphasizes that the sum of the expansion and translation optic flows is a spiral optic flow. It thereby raises the question: How can the translation flow be subtracted from the spiral flow to recover the expansion flow?
    || Eye rotations add a uniform translation to a flow field. Resulting retinal patterns are spirals. Expansion + translation = spiral
  • image p340fig09.05 An outflow movement command, also called efference copy or corollary discharge, is the source of the signals whereby the commanded eye movement position is subtracted from spiral flow to recover expansion flow and, with it, heading.
    || Subtracting efference copy. Many experiments suggest that the brain internally subtracts the translational component due to eye movements. Efference copy subtracts the translational component using pathways that branch from outflow movement commands to the eye muscles.
  • image p340fig09.06 Corollary discharges are computed using a branch of the outflow movement commands that move their target muscles.
    ||
  • image p340fig09.07 Log polar remapping from the retina to cortical area V1 and beyond converts expansion, translation, and spiral flows on the retina into parallel flows, with different orientations, on the cortical map.
    || Log polar remapping of optic flow. retina -> cortex. Any combination of expansion and circular motion centered on the fovea maps to cortex as a single direction. Retinal Cartesian coordinates (x,y) map to cortical polar coordinates (r,theta). This makes it easy to compute directional receptive fields in the cortex!
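A small numerical illustration (sample points are assumed) of why the log polar map makes this easy: an expansion flow produces equal steps along the log r axis and no steps along theta, i.e., a single-direction parallel flow in cortical coordinates.

```python
import numpy as np

def to_cortex(x, y):
    """Retinal Cartesian (x, y) -> cortical (log r, theta)."""
    return np.log(np.hypot(x, y)), np.arctan2(y, x)

x = np.array([0.5, 1.0, 2.0])
y = np.array([0.5, -1.0, 0.0])
u, v = 0.1*x, 0.1*y                       # expansion flow: radial from the fovea
r0, th0 = to_cortex(x, y)
r1, th1 = to_cortex(x + u, y + v)
print(np.round(r1 - r0, 4), np.round(th1 - th0, 4))
# -> equal log-r steps (log 1.1 for every point) and zero theta steps:
# the expansion flow is parallel in cortex, whatever the retinal position
```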
  • image p341fig09.08 How the various optic flows on the retina are mapped through V1, MT, and MSTd to then compute heading in parietal cortex was modeled by (Grossberg, Mingolla, Pack 1999), using the crucial transformation via V1 log polar mapping into parallel cortical flow fields.
    || MSTd model (Grossberg, Mingolla, Pack 1999). Retinal motion -> V1 log polar mapping -> Each MT Gaussian RF sums motion in preferred direction -> Each MSTd cell sums MT cell inputs with same log polar direction -> Efference copy subtracts rotational flow from MSTd cells.
  • image p341fig09.09 Responses of MSTd cells that are used to compute heading. See the text for details.
    || Cortical area MSTd (adapted from Graziano, Andersen, Snowden 1994). MSTd cells are sensitive to spiral motion as combinations of rotation and expansion.
  • image p342fig09.10 Model simulations of how the peak of MSTd cell activation varies with changes of heading.
    || Heading in log polar space: Retina -> log polar -> MSTd cell. Log polar motion direction correlates with heading eccentricity.
  • image p342fig09.11 Psychophysical data (left panel) and computer simulation (right panel) of the importance of efference copy in real movements. See the text for details.
    || Heading: move to wall and fixate stationary object (adapted from Warren, Hannon 1990). Inaccurate for simulated eye rotation, accurate for real eye rotation, need confirmation by efference copy!
  • image p343fig09.12 Transforming two retinal views of the Simpsons into log polar coordinates dramatizes the problem that our brains need to solve in order to separate, and recognize, overlapping figures.
    || View 1 cortical magnification. View 2 How do we know if we are still fixating on the same object?!
  • image p343fig09.13 When one scans the three different types of pears in the left image, as illustrated by the jagged blue curve with red movement end positions, and transforms the resulting retinal images via the cortical magnification factor, or log polar mapping, the result is the series of images in the right column. How do our brains figure out from such confusing data which views belong to which pear?
    || View-invariant object learning and recognition Three pears: Anjou, Bartlett, Comice. Which is the Bartlett pear? During unsupervised scanning and learning about the world, no one tells the brain what views belong to which objects while it learns view-invariant object categories. Cortical magnification in V1.
  • image p344fig09.14 (top row, left column) By fitting MT tuning curves with Gaussian receptive fields, a tuning width of 38° is estimated, and leads to the observed standard spiral tuning of 61° in MSTd. (bottom row, left column) The spiral tuning estimate in Figure 9.16 maximizes the position invariance of MSTd receptive fields. (top row, right column) Heading sensitivity is not impaired by these parameter choices.
    || [Spiral tuning (deg), position invariance (deg^(-1)), heading sensitivity] versus log polar direction tuning σ (deg)
  • image p345fig09.15 Double opponent directional receptive fields in MT are capable of detecting the motion of objects relative to each other and their backgrounds.
    || Motion opponency in MT (Born, Tootell 1992). Motion opponent (Grossberg etal), Differential motion (Royden etal), Subtractive motion cells (Neumann etal). ON center directionally selective: [excit, inhibit]ed by motion in [one, opponent] direction. OFF surround directionally selective: [excit, inhibit]ed by motion in [opponent, center] direction.
  • image p346fig09.16 A macrocircuit of some of the main brain regions that are used to move the eyes. Black boxes denote areas belonging to the saccadic eye movement system (SAC), white boxes the smooth pursuit eye movement system (SPEM), and gray boxes, both systems. The abbreviations for the different brain regions are: LIP - Lateral Intra-Parietal area; FPA - Frontal Pursuit Area; MST - Middle Superior Temporal area; MT - Middle Temporal area; FEF - Frontal Eye Fields; NRPT - Nucleus Reticularis Tegmenti Pontis; DLPN - Dorso-Lateral Pontine Nuclei; SC - Superior Colliculus; CBM - CereBelluM; MVN/rLVN - Medial and Rostro-Lateral Vestibular Nuclei; PPRF - a Peri-Pontine Reticular Formation; TN - Tonic Neurons
    ||
  • image p347fig09.17 The leftward eye movement control channel in the model that I developed with Christopher Pack. See the text for details.
    || retinal image -> MT -> MST[v,d] -> pursuit
  • image p347fig09.18 These circuits between MSTv and MSTd enable predictive target tracking to be achieved by the pursuit system, notably when the eyes are successfully foveating a moving target. Solid arrows depict excitatory connections, dashed arrows depict inhibitory connections.
    ||
  • image p348fig09.19 How a constant pursuit speed that is commanded by MSTv cells starts by using target speed on the retina and ends by using background speed on the retina in the reverse direction during successful predictive pursuit.
    || target speed on retina, background speed on retina, pursuit speed command by MSTv cells
  • image p349fig09.20 Using virtual reality displays (left image), (Fajen, Warren 2003) collected data (right two images) about how observers avoid obstacles (open circular disks) as a function of their distance and angular position as they navigate towards a fixed goal (x). These data illustrate how goals act as attractors while obstacles act as repellers.
    || Steering from optic flow (Fajen, Warren 2003). goals are attractors, obstacles are repellers. Damped spring model explains human steering data.
  • image p349fig09.21 How attractor-repeller dynamics with Gaussians change the net steering gradient as the goal is approached.
    || Steering dynamics: goal approach. body-centered coordinates [obstacle, goal, heading] -> steering
  • image p350fig09.22 How the negative Gaussian of an obstacle causes a peak shift to avoid the obstacle without losing sight of how to reach the goal.
    || Steering dynamics: obstacle avoidance. body-centered coordinates [obstacle, goal, heading] -> steering
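A hedged sketch in the spirit of the Fajen-Warren damped-spring steering model (all constants are assumed, and the scene is frozen rather than updated with self-motion): the goal term attracts heading toward the goal angle, while the obstacle's repulsion decays with both its angular offset and its distance, producing the peak shift shown in the figure.

```python
import numpy as np

# Assumed constants; psi_g/d_g are goal angle/distance, psi_o/d_o obstacle's.
b, kg, ko = 3.0, 8.0, 200.0
c1, c2, c3, c4 = 0.4, 0.4, 6.0, 1.0
psi_g, d_g = 0.0, 8.0                    # goal straight ahead, 8 m away
psi_o, d_o = 0.1, 3.0                    # obstacle slightly to the right, 3 m away
phi, dphi, dt = 0.0, 0.0, 0.01

for _ in range(400):                     # damped spring dynamics on heading phi
    acc = (-b*dphi
           - kg*(phi - psi_g)*(np.exp(-c1*d_g) + c2)                 # goal attractor
           + ko*(phi - psi_o)*np.exp(-c3*abs(phi - psi_o))*np.exp(-c4*d_o))  # repeller
    dphi += dt*acc
    phi  += dt*dphi
print(f"heading phi = {phi:+.3f} rad")   # settles left of 0: steers around the obstacle
```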
  • image p350fig09.23 Unidirectional transient cells respond to changes in all image contours as an auto navigates an urban scene while taking a video of it.
    || Unidirectional transient cells (Baloch, Grossberg 1997; Berzhanskaya, Grossberg, Mingolla 2007). Transient cells respond to leading and trailing boundaries. Transient cell responses, driving video
  • image p351fig09.24 Directional transient cells respond most to motion in their preferred directions.
    || Directional transient cells. 8 directions, 3 speeds
  • image p351fig09.25 By the time MT+ is reached, directional transient cells and directional filters have begun to extract more global directional information from the image.
    || MT+ computes a global motion estimate. Estimate global motion from noisy local motion estimates.
  • image p352fig09.26 The final stage of the model computes a beautiful expansion optic flow that permits an easy estimate of the heading direction, with an accuracy that matches that of human navigators.
    || The model generates accurate heading (Warren, Hannon 1990; Royden, Crowell, Banks 1994). Maximally active MSTd cell = heading estimate. Accuracy matches human data. Random dots [mean +-1.5°, worst +-3.8°], Random dots with rotation [accurate with rotations <1°/s, rotation increases, error decreases], OpenGL & Yosemite benchmark +-1.5°, Driving video +-3°.
  • image p354fig10.01 The laminar cortical circuit that realizes how we pay attention to an object sends signals from layer 6 of a higher cortical level to layer 6 of a lower cortical level and then back up to layer 4. This "folded feedback" circuit is a top-down, modulatory on-center, off-surround network that realizes the ART Matching Rule.
    || Top-down attention and folded feedback. Attentional signals also feed back into 6-to-4 on-center off-surround. 1-to-5-to-6 feedback path: Macaque (Lund, Booth 1975) cat (Gilbert, Wiesel 1979). V2-to-V1 feedback is on-center off-surround and affects layer 6 of V1 the most (Bullier etal 1996; Sandell, Schiller 1982). Attended stimuli enhanced, ignored stimuli suppressed. This circuit supports the predicted ART Matching Rule! [LGN, V[1,2][6->1]]
  • image p355fig10.02 Distinguishing processes of seeing vs knowing has been difficult because they interact so strongly.
    || Seeing vs. Knowing. Seeing and knowing [operate at different levels of the brain, use specialized circuits], but they [interact via feedback, use similar cortical designs, feedback is needed for conscious perception]. Cerebral Cortex: Seeing [V1-V4, MT-MST], Knowing [IT, PFC].
  • image p356fig10.03 Laminar computing achieves at least three basic properties of visual processing that have analogs in all biologically intelligent behaviors. These properties may be found in all cortical circuits in specialized form.
    || What does Laminar Computing achieve? 1. Self-stabilizing development and learning; 2. Seamless fusion of a) pre-attentive automatic bottom-up processing, b) attentive task-selective top-down processing; 3. Analog coherence: Solution of Binding Problem for perceptual grouping without loss of analog sensitivity. Even the earliest visual cortical stages carry out active adaptive information processing: [learn, group, attention]ing
  • image p357fig10.04 Laminar Computing achieves its properties by computing in a new way that synthesizes the best properties of feedforward and feedback interactions, analog and digital computations, and preattentive and attentive learning. The property of analog coherence enables coherent groupings and decisions to form without losing sensitivity to the amount of evidence that supports them.
    || Laminar Computing: a new way to compute. 1. Feedforward and feedback: a) Fast feedforward processing when data are unambiguous (eg Thorpe etal), b) slower feedback chooses among ambiguous alternatives [self-normalizing property, real-time probability theory], c) A self-organizing system that trades certainty against speed: Goes beyond Bayesian models! 2. Analog and Digital: Analog Coherence combines the stability of digital with the sensitivity of analog. 3. Preattentive and Attentive Learning: Reconciles the differences of (eg) Helmholtz and Kanizsa, "A preattentive grouping is its own attentional prime!"
  • image p359fig10.05 Activation of V1 is initiated, in part, by direct excitatory signals from the LGN to layer 4 of V1.
    || How are layer 2/3 bipole cells activated? Direct bottom-up activation of layer 4. LGN -> V1 layer 4. Strong bottom-up LGN input to layer 4 (Stratford etal 1996; Chung, Ferster 1998). Many details omitted.
  • image p359fig10.06 Another, albeit indirect, pathway from LGN exists that can also excite layer 4 of V1. Why are these two pathways not redundant? The answer, ultimately, has to do with how the cortex learns, as well as with how it pays attention. See the text for details.
    || Another bottom-up input to layer 4: Why?? Layer 6-to-4 on-center off-surround (Grieve, Sillito 1991, 1995; Ahmed etal 1994, 1997). LGN projects to layers 6 and 4. Layer 6 excites spiny stellates in column above it. Medium range connections onto inhibitory neurons. 6-to-4 path acts as an on-center off-surround.
  • image p359fig10.07 The two bottom-up pathways from LGN to layer 4 of V1 can together activate layer 4 and contrast-normalize layer 4 responses.
    || Bottom-up contrast normalization (Grossberg 1968, 1973; Sperling, Sondhi 1968; Heeger 1992; Douglas etal 1995; Shapley etal 2004). Together, direct LGN-to-4 path and 6-to-4 on-center off-surround provide contrast normalization if cells obey shunting or membrane equation dynamics.
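A worked numerical check (A and B are assumed values) of the contrast normalization claim: at equilibrium of the shunting equation dx_i/dt = -A*x_i + (B - x_i)*I_i - x_i*sum_{k!=i} I_k, each activity is x_i = B*I_i / (A + sum_k I_k), so cells code input ratios and total activity stays bounded no matter how intense the image.

```python
import numpy as np

A, B = 1.0, 1.0
for scale in (1.0, 10.0, 100.0):          # same pattern at increasing intensity
    I = scale * np.array([0.1, 0.3, 0.6])
    x = B * I / (A + I.sum())             # shunting equilibrium activities
    print(f"scale {scale:5.1f}: x = {np.round(x, 3)}")
# the activity pattern converges to B*[0.1, 0.3, 0.6] as intensity grows:
# ratio coding with a built-in ceiling (contrast normalization)
```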
  • image p360fig10.08 The bottom-up on-center off-surround from LGN-to-6-to-4 has a modulatory on-center because of its role in realizing the ART Matching Rule and, with it, the ability of the cortex to dynamically stabilize its learned memories.
    || Modulation of priming by 6-to-4 on-center (Stratford etal 1996; Callaway 1998). On-center 6-to-4 excitation is inhibited down to being modulatory (priming, subthreshold). On-center 6-to-4 excitation cannot activate layer 4 on its own. Clarifies need for direct path. Prediction: plays key role in stable grouping, development and learning. ART Matching Rule!
  • image p360fig10.09 Perceptual grouping is carried out in layer 2/3 by long-range horizontal excitatory recurrent connections, supplemented by short-range disynaptic inhibitory connections that together realize the bipole grouping properties that are diagrammed in Figure 10.10.
    || Grouping starts in layer 2/3. LGN-> 6-> 4-> 2/3: 1. Long-range horizontal excitation links collinear, coaxial receptive fields (Gilbert, Wiesel 1989; Bosking etal 1997; Schmidt etal 1997) 2. Short-range disynaptic inhibition of target pyramidal cells via pool of interneurons (Hirsch, Gilbert 1991) 3. Unambiguous groupings can form and generate feedforward outputs quickly (Thorpe etal 1996).
  • image p361fig10.10 Bipole grouping is achieved by long-range horizontal recurrent connections that also give rise to short-range inhibitory interneurons which inhibit nearby bipole cells as well as each other.
    || Bipole property controls perceptual grouping. Collinear input on both sides. Excitatory inputs summate. Inhibitory inputs normalize, Shunting inhibition! Two-against-one. Cell is excited.
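A minimal sketch (gains and threshold assumed) of the bipole "two-against-one" decision: each horizontal branch excites the cell but also drives self-normalizing inhibitory interneurons, so one branch alone is vetoed (no outward completion past line ends), while two collinear branches, or direct bottom-up input, fire the cell.

```python
def bipole(left, right, bottom_up=0.0, thresh=0.5):
    """Bipole grouping decision: fire on bottom-up input or on BOTH branches."""
    excit = left + right + 2.0*bottom_up
    inhib = (left + right) / (1.0 + left + right)  # shunting-normalized interneurons
    return max(excit - 2.0*inhib, 0.0) > thresh

for L, R, bu in [(1, 0, 0), (1, 1, 0), (0, 0, 1)]:
    print(f"left={L} right={R} bottom-up={bu}: fires={bipole(L, R, bu)}")
# one branch alone: 1 - 2*(1/2) = 0      -> no firing (one-against-one veto)
# both branches:    2 - 2*(2/3) = 0.67   -> fires (two-against-one, inward grouping)
# bottom-up alone:  2 - 0       = 2      -> fires
```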
  • image p362fig10.11 Feedback between layer 2/3 to the layer 6-to-4-to-2/3 feedback loop chooses the strongest grouping in cases where there is more than one. If only one grouping exists, then the circuit can function very quickly in a feedforward manner. When multiple groupings exist, the cortex "runs as fast as it can" to select the one with the most evidence to support it using the self-normalizing inhibition in the layer 6-to-4 off-surround.
    || How is the final grouping selected? Folded feedback LGN-> 6-> 4-> 2/3. 1. Layer 2/3 groupings feed back into 6-to-4 on-center off-surround: a) direct layer 2/3 -to-6 path; b) can also go via layer 5 (Blasdel etal 1985; Kisvarday etal 1989). 2. Strongest grouping enhanced by its on-center. 3. Inputs to weaker groupings suppressed by off-surround. 4. Interlaminar feedback creates functional columns. Activities of conflicting groupings are reduced by self-normalizing inhibition, slowing processing; intracortical feedback selects and contrast-enhances the winning grouping, speeding processing.
  • image p363fig10.12 The same laminar circuit design repeats in V1 and V2, albeit with specializations that include longer horizontal grouping axons and figure-ground separation interactions.
    || V2 repeats V1 circuitry at larger spatial scale, LGN-> V1[6,4,2/3]-> V2[6,4,2/3]. V2 layer 2/3 horizontal axons longer-range than in V1 (Amir etal 1993). Therefore, longer-range groupings can form in V2 (Von der Heydt etal 1984)
  • image p364fig10.13 The bottom-up adaptive filter, intracortical grouping circuit, and intercortical top-down attentional circuit all use the same competitive decision circuit between layers 6 and 4, called the attention-preattention interface, with which to select the featural patterns that will be processed.
    || Bottom-up filters and intracortical grouping feedback use the same 6-to-4 decision circuit, LGN-> Vx[6,4,2/3]. competitive decision circuit, modulatory on-center off-surround network. Top-down intercortical attention also uses the same 6-to-4 decision circuit!
  • image p364fig10.14 This figure emphasizes how preattentive intracortical groupings and top-down intercortical attention share the same modulatory on-center, off-surround layer 6-to-4 decision circuit.
    || Explanation: grouping and attention share the same modulatory decision circuit. Layer 6-to-4-to-2/3 pathway shown; there is also a layer 6-to-1-to-2/3 path. Intercortical attention and intracortical feedback from groupings both act via a modulatory on-center off-surround decision circuit.
  • image p367fig10.15 Data (left column) and simulation (right column) of how attention prevents a masking stimulus from inhibiting the response to the on-center of the cell from which the recording was made.
    || Attention protects target from masking stimulus (Reynolds etal 1999; Grossberg, Raizada 2000).
  • image p367fig10.16 Neurophysiological data (left image) and simulation (right image) of how a low-contrast target can be facilitated if it is surrounded by a pair of collinear flankers, and suppressed by them if it has high contrast.
    || Flankers can enhance or suppress targets (Polat etal 1998; Grossberg, Raizada 2000). target alone, target + flankers, flankers alone.
  • image p368fig10.17 Neurophysiological data (left image) and simulation (right image) showing that attention has a greater effect on low contrast than high contrast targets.
    || Attention has greater effect on low contrast targets (DeWeerd etal 1999; Raizada, Grossberg 2001). Threshold increase (deg) vs Grating contrast (%), [no, with] attention
  • image p368fig10.18 Neurophysiological data (left image) and simulation (right image) of relative on-cell activities when the input to that cell may also be surrounded by iso-orientation or perpendicular textures.
    || Texture reduces response to a bar: iso-orientation suppression (Knierim, van Essen 1992), perpendicular suppression (Raizada, Grossberg 2001)
  • image p369fig10.19 Data from (Watanabe etal 2001) showing perceptual learning of the coherent motion direction, despite the lack of extra-foveal attention and awareness of the moving stimuli.
    || Unconscious perceptual learning of motion direction, % correct for two tests, compared to chance level results.
  • image p371fig11.01 FACADE theory explains how the 3D boundaries and surfaces are formed with which we see the world in depth.
    || 3D Vision and figure-ground perception (Grossberg 1987, 1994, 1997). How are 3D boundaries and 3D surfaces formed? How the world looks without assuming naive realism. Form And Color And DEpth theory (FACADE). Prediction: Visible figure-ground-separated Form-And-Color-And-DEpth are represented in cortical area V4.
  • image p372fig11.02 FACADE theory explains how multiple depth-selective boundary representations can capture the surface lightnesses and colors at the correct depths. The fact that both surface qualia and depth are determined by a single process implies that, for example, a change in brightness can cause a change in depth.
    || 3D surface filling-in. From filling-in of surface lightness and color to filling-in of surface depth. Prediction: Depth-selective boundary-gated filling-in defines the 3D surfaces that we see. Prediction: A single process fills-in lightness, color, and depth. Can a change in brightness cause a change in depth? YES! eg proximity-luminance covariance (Egusa 1983, Schwartz, Sperling 1983). Why is depth not more unstable when lighting changes? Prediction: Discounting the illuminant limits variability.
  • image p373fig11.03 Both contrast-specific binocular fusion and contrast-invariant boundary perception are needed to properly see the world in depth.
    || How to unify contrast-specific binocular fusion with contrast-invariant boundary perception? Contrast-specific binocular fusion: [Left, right] eye view [, no] binocular fusion. Contrast-invariant boundary perception: contrast polarity along the gray square edge reverses; opposite polarities are pooled to form object boundary.
  • image p374fig11.04 The three processing stages of monocular simple cells, binocular simple cells, and complex cells together accomplish both contrast-specific binocular fusion and contrast-invariant boundary perception.
    || Model unifies contrast-specific binocular fusion and contrast-invariant boundary perception (Ohzawa etal 1990; Grossberg, McLoughlin 1997). [Left, right] eye V1-4 simple cells-> V1-3B simple cells-> V1-2/3A complex cells. Contrast-specific stereoscopic fusion by disparity-selective simple cells. Contrast-invariant boundaries by pooling opposite polarity binocular simple cells at complex cells layer 2/3A.
  • image p374fig11.05 The brain uses a contrast constraint on binocular fusion to help ensure that only contrasts which are derived from the same objects in space are binocularly matched.
    || Contrast constraint on binocular fusion. Left and right inputs from the same object have similar contrasts; the percept changes when one contrast is different. Fusion only occurs between bars of similar contrast (McKee etal 1994)
  • image p375fig11.06 The contrast constraint on binocular fusion is realized by obligate cells in layer 3B of cortical area V1.
    || Model implements contrast constraint on binocular fusion (cf. "obligate" cells Poggio 1991). An ecological constraint on cortical development. [left, right] eye cart V1-[4 monocular simple, 3B binocular simple, complex2/3A] cells. Inhibitory cells (red) ensure that fusion occurs when contrasts in left and right eye are approximately equal.
  • image p375fig11.07 The 3D LAMINART model uses both monocular and binocular simple cells to binocularly fuse like image contrasts. The remainder of the model generates 3D boundary and surface representations of multiple kinds of experiments as well as of natural scenes.
    || 3D LAMINART model (Cao, Grossberg 2005). [left, right] eye cart [LGN, V1 blob [4->3B->2/3A], V2 thin stripe [4->2/3A], V4]. V1 blob [V1-4 monocular, V1 interior binocular] simple cells. [complex, simple, inhibitory] cells, on-center off-surround
  • image p376fig11.08 The contrast constraint on binocular fusion is not sufficient to prevent many of the false binocular matches that satisfy this constraint.
    || How to solve the correspondence problem? How does the brain inhibit false matches? Contrast constraint is not enough. [stimulus, multiple possible binocular matches] - Which squares in the two retinal images must be fused to form the correct percept?
  • image p376fig11.09 The disparity filter in V2 helps to solve the correspondence problem by eliminating spurious contrasts using line-of-sight inhibition.
    || Model V2 disparity filter solves the correspondence problem. An ecological constraint on cortical development. [left, right] eye view: False matches (black) suppressed by line-of-sight inhibition (green lines). "Cells that fire together wire together".
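    A hedged Python sketch of line-of-sight inhibition in a disparity filter; the features, contrasts, and greedy winner-take-all scoring are assumptions for illustration. Every similar-contrast left-right pairing is a candidate match, and matches that share a line of sight (the same left-eye or right-eye feature) compete until the strongest survives:

      left = {0: 1.0, 5: 0.9}     # left-eye feature position -> contrast
      right = {2: 1.0, 7: 0.85}   # right-eye feature position -> contrast

      candidates = [(xl, xr, 1.0 - abs(cl - cr))   # similar contrast -> high score
                    for xl, cl in left.items()
                    for xr, cr in right.items()]

      survivors = []
      for m in sorted(candidates, key=lambda m: -m[2]):
          # keep a match only if no stronger survivor shares a line of sight
          if all(m[0] != s[0] and m[1] != s[1] for s in survivors):
              survivors.append(m)
      print(survivors)   # false matches along shared lines of sight are suppressed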
  • image p376fig11.10 The 3D LAMINART model shows how the disparity filter can be integrated into the circuit that completes 3D boundary representations using bipole grouping cells. It also explains how surface contours can strengthen boundaries that succeed in generating closed filling-in domains.
    || 3D LAMINART model (Cao, Grossberg 2005). [left, right] eye cart [LGN, V1 blob [4->3B->2/3A] surface contour, V2 thin stripe (monocular surface) [4->2/3A], V2 interior [disynaptic inhibitory interneurons, bipole grouping cells, disparity filter, V4 binocular surface]. [complex, simple, inhibitory] cells, on-center off-surround
  • image p377fig11.11 DaVinci stereopsis phenomena occur when only one eye can receive visual inputs from part of a 3D scene due to occlusion by a nearer surface.
    || How does monocular information contribute to depth perception? DaVinci stereopsis (Gillam etal 1999). Only by utilizing monocular information can the visual system create the correct depth percept. [left, right] eye view
  • image p378fig11.12 How monocular and binocular information are combined in V1 and V2 in the 3D LAMINART model.
    || Model utilizes monocular information. [left, right] eye cart V1-[4 monocular simple, 3B binocular simple, complex2/3A [mo,bi]nocular] cells, V2-4 binocular complex cells. black = monocular cells, blue = binocular cells. In V2, monocular inputs add to binocular inputs along the line of sight and contribute to depth perception.
  • image p379fig11.13 How the 3D LAMINART model explains DaVinci stereopsis. All the stages of boundary and surface formation are color coded to clarify their explanation. Although each mechanism is very simple, when all of them act together, the correct depthful surface representation is generated. See the text for details.
    || DaVinci stereopsis (Nakayama, Shimojo 1990). An emergent property of the previous simple mechanisms working together. [V1 binocular boundaries -> V2 initial boundaries -> V2 final boundaries -> V4 surface] pair [Binocular match: boundaries of thick bar -> Add monocular boundaries along lines-of-sight -> Line-of-sight inhibition kills weaker vertical boundaries -> 3D surface percept not just a disparity match!] pair [Binocular match: right edge of thin and thick bars -> Strongest boundaries: binocular and monocular boundaries add -> Vertical boundaries from monocular left edge of thin bar survive -> Filling-in contained by connected boundaries]. cart [very near, near, fixation plane, far, very far]
  • image p380fig11.14 The model explanation of DaVinci stereopsis when the input stimuli have opposite contrast polarities.
    || Polarity-reversed Da Vinci stereopsis (Nakayama, Shimojo 1990). Same explanation! (... as Figure 11.13 ...) [V1 binocular boundaries -> V2 initial boundaries -> V2 final boundaries -> V4 surface] cart [very near, near, fixation plane, far, very far]
  • image p381fig11.15 The same model mechanisms explain the surface percept that is generated by the variant of DaVinci stereopsis that Gillam, Blackburn, and Nakayama studied in 1999.
    || DaVinci stereopsis (Gillam, Blackburn, Nakayama 1999). same model mechanisms. [V1 binocular boundaries -> V2 initial boundaries -> V2 final boundaries -> V4 surface] cart [very near, near, fixation plane, far, very far]
  • image p382fig11.16 The version of DaVinci stereopsis wherein three narrow rectangles are binocularly matched with one thick rectangle can also be explained in a similar way.
    || DaVinci stereopsis of [3 narrow, one thick] rectangles (Gillam, Blackburn, Nakayama 1999). [V1 binocular boundaries -> V2 initial boundaries -> V2 final boundaries -> V4 surface] cart [very near, near, fixation plane, far, very far]
  • image p383fig11.17 The bars in the left and right images that are in the same positions are marked in red to simplify tracking how they are processed at subsequent stages.
    || The Venetian blind effect (Howard, Rogers 1995). Every second bar on L in same position as every third bar on R. These bars are marked in red; see them match in Fixation Plane. [V1 binocular boundaries -> V2 initial boundaries -> V2 final boundaries -> V4 surface] cart [very near, near, fixation plane, far, very far]
  • image p384fig11.18 Surface and surface-to-boundary surface contour signals that are generated by the Venetian blind image.
    || Venetian blind effect (Howard, Rogers 1995). Every second bar on L in same position as every third bar on R. PERCEPT: 3-bar ramps sloping up from L to R with step returns. [V1 binocular boundaries -> V2 initial boundaries -> V2 final boundaries -> V4 surface] cart [very near, near, fixation plane, far, very far]
  • image p385fig11.19 Dichoptic masking occurs when the bars in the left and right images have sufficiently different contrasts.
    || Dichoptic masking (McKee, Bravo, Smallman, Legge 1994). [V1 binocular boundaries -> V2 initial boundaries -> V2 final boundaries -> V4 surface] cart [very near, near, fixation plane, far, very far]
  • image p387fig11.22 Simulation of the boundaries that are generated by the Julesz stereogram in Figure 4.59 (top row) without (second row) and with (third row) surface contour feedback.
    || Boundary cart [V2-2, V2, V1] cart [near, fixation, far]
  • image p388fig11.23 Simulation of the surface percept that is seen in response to a sparse stereogram. The challenge is to assign large regions of ambiguous white to the correct surface in depth.
    || [left, right] retinal input. Surface [near, fixation, far] V4
  • image p388fig11.24 Boundary groupings capture the ambiguous depth-ambiguous feature contour signals and lift them to the correct surface in depth.
    || [surface, boundary] cart [near, fixation, far] V2.
  • image p389fig11.25 Boundaries are not just edge detectors. If they were, a shaded ellipse would look flat, and uniformly gray.
    || 3D vision and figure-ground separation. Multiple-scale, depth-selective boundary webs. [dark-light, light-dark] boundaries -> complex cells! If boundaries were just edge detectors, there would be just a bounding edge of the ellipse. After filling-in, it would look flat and uniformly gray.
  • image p390fig11.26 Although larger scales sometimes look closer (left image), that is not always true, as the right image of (Brown, Weisstein 1988) illustrates. The latter percept is, moreover, bistable. These images show the importance of interactions between groupings and multiple scales to determine perceived surface depths.
    || Multiple-scale depth-selective groupings determine perceived depth (Brown, Weisstein 1988). As an object approaches, it gets bigger on the retina. Does a big scale (RF) always signal NEAR? NO! The same scale can signal either near or far. Some scales fuse more than one disparity.
  • image p391fig11.27 (left image) Each scale can binocularly fuse a subset of disparities, with larger scales fusing more, and closer, disparities than small scales. (right image) Cortical hypercolumns enable binocular fusion to occur in a larger scale even as rivalry occurs in a smaller scale.
    || Multiple-scale grouping and size-disparity correlation. Depth-selective cooperation and competition among multiple scales determines perceived depth: a) Larger scales fuse more depths; b) Simultaneous fusion and rivalry. Boundary pruning using surface contours: Surface-to-boundary feedback from the nearest surface that is surrounded by a connected boundary eliminates redundant boundaries at the same position and further depths.
  • image p391fig11.28 (left image) Ocular dominance columns respond selectively to inputs from one eye or the other. (right image) Inputs from the two eyes are mapped into layer 4C of V1, among other layers.
    || Cortex V1[1, 2/3, 4A, 4B, 4C, 5, 6], LGN
  • image p392fig11.29 Boundary webs of the smallest scales are closer to the boundary edge of the ellipse, and progressively larger scale webs penetrate ever deeper into the ellipse image, due to the amount of evidence that they need to fire. Taken together, they generate a multiple-scale boundary web with depth-selective properties that can capture depth-selective surface filling-in.
    || 3D vision and figure-ground separation. Multiple-scale, depth-selective boundary webs. Instead, different size detectors generate dense boundary webs at different positions and depths along the shading gradient. Small-far, Larger-nearer, Largest-nearest. Each boundary web captures the gray shading in small compartments at its position and depths. A shaded percept in depth results.
  • image p392fig11.30 Multiple scales interact with bipole cells that represent multiple depths, and conversely. See the text for details.
    || How multiple scales vote for multiple depths. Scale-to-depth and depth-to-scale maps. Smallest scale projects to, and receives feedback from, boundary groupings that represent the furthest depths. Largest scale connects to boundary groupings that represent all depths. multiple-[depth, scale] dot [grouping, filter] cells. [small <-> large] vs [far <-> near]
  • image p393fig11.31 (Todd, Akerstrom 1987) created a series of 2D images from discrete black patches on a white disk and showed how the perceived depth varies with the factors summarized in the figure. The LIGHTSHAFT model quantitatively simulated their data.
    || Factors determining depth-from-texture percept. Perceived depth varies with texture element width, but only when elements are elongated and sufficiently aligned with one another to form long-range groupings. Data of (Todd, Akerstrom 1987) simulated by the LIGHTSHAFT model of (Grossberg, Kuhlmann 2007). [HP, LP, CCE, CCS, RO]
  • image p393fig11.32 Kulikowski stereograms involve binocular matching of out-of-phase (a) Gaussians or (b) rectangles. The latter can generate a percept of simultaneous fusion and rivalry. See the text for why.
    ||
  • image p394fig11.33 The Kaufman stereogram also creates a percept of simultaneous fusion and rivalry. The square in depth remains fused and the perpendicular lines in the two images are perceived as rivalrous.
    || 3D groupings determine perceived depth, stereogram (Kaufman 1974). Vertical illusory contours are at different disparities than those of bounding squares. Illusory square is seen in depth. Vertical illusory contours are binocularly fused and determine the perceived depth of the square. Thin, oblique lines, being perpendicular, are rivalrous: simultaneous fusion and rivalry.
  • image p395fig11.34 A comparison of the properties of other rivalry models with those of the 3D LAMINART model (surrounded by red border). Significantly, only 3D LAMINART explains both stable vision and rivalry (green border).
    || Comparison of rivalry models
  • image p396fig11.35 Three properties of bipole boundary grouping in V2 can explain how boundaries oscillate in response to rivalry-inducing stimuli. Because all boundaries are invisible, however, these properties are not sufficient to generate a conscious percept of rivalrous surfaces.
    || 3 V2 boundary properties cause binocular rivalry. 1. Bipole grouping, 2. Orientational competition, 3. Activity-dependent habituation
  • image p397fig11.36 Simulation of the temporal dynamics of rivalrous, but coherent, boundary switching.
    || Simulation of 2D rivalry dynamics. [Inputs, Temporal dynamics of V2 layer 2/3 boundary cells] cart [left, right]
  • image p398fig11.37 Simulation of the no swap baseline condition of (Logothetis, Leopold, Sheinberg 1996).
    || [Binocular, [left, right] eye] activity
  • image p399fig11.38 Simulation of the swap condition of (Logothetis, Leopold, Sheinberg 1996).
    || [Binocular, [left, right] eye] activity
  • image p399fig11.39 Simulation of the eye rivalry data of (Lee, Blake 1999).
    || [Binocular, [left, right] eye] activity
  • image p400fig11.40 When planar 2D parallelograms are juxtaposed, the resultant forms generate 3D percepts that are sensitive to the configuration of angles and edges in the figure. See the text for why.
    || 3D representation of 2D images, Monocular cues (eg angles) can interact together to yield 3D interpretation. Monocular cues by themselves are often ambiguous. Same angles and shapes, different surface slants. How do these ambiguous 2D shapes contextually define a 3D object form?
  • image p401fig11.41 The 3D LAMINART model proposes how angle cells and disparity-gradient cells interact through learning to generate 3D representations of slanted objects.
    || 3D LAMINART model. [LGN, V1, V2, V4] Four key additions: 1. Angle cells - tuned to various angles; 2. Disparity-gradient cells - tuned to disparity gradients in the image; 3. Weights from angle cells to disparity-gradient cells - learned while viewing 3D images; 4. Collinear grouping between disparity-gradient cells - disambiguates ambiguous groupings.
  • image p401fig11.42 A hypothetical cortical hypercolumn structure proposes how angle cells and disparity-gradient cells, including bipole cells that stay within a given depth, may self-organize during development.
    || Hypercolumn representation of angles. [left, right] cart [far-to-near, zero, near-to-far]
  • image p402fig11.43 A pair of disparate images of a scene from the University of Tsukuba Multiview Image Database.
    || input [left, right]
  • image p402fig11.44 3D scenic reconstruction of the image in Figure 11.43 by the 3D LAMINART model.
    || Disparity [5, 6, 8, 10, 11, 14]: images of objects in common depth planes
  • image p403fig11.45 The multiple boundary and surface scales that were used to simulate a reconstruction of the SAR image in Figure 3.24.
    || SAR processing by multiple scales. [boundaries before completion, boundaries after completion, surface filling-in] versus scale [small, medium, large]. large scale bipole
  • image p405fig12.01 A What ventral cortical stream and Where/How dorsal cortical stream have been described for audition, no less than for vision.
    || Parietal lobe: where; Temporal lobe: what. V1-> [[what: IT], [where: PPC-> DLPFC]]. A1-> [[what: [ST-> VLPFC], VLPFC], [where: [PPC-> DLPFC], DLPFC]].
  • image p407fig12.03 Neurophysiological data showing how motor cortical cells code different vectors that are sensitive to both the direction of the commanded movement and its length.
    || (a) Single primary motor cortex neuron, onset of movement -> on..., radial architecture... (b) Motor cortex neuronal population, radial architecture...
  • image p409fig12.04 (top half) Neurophysiological data of vector cell responses in motor cortex. (bottom half) VITE model simulations of a simple movement in which the model reproduces these vector cell properties.
  • image p410fig12.05 VITE simulation of velocity profile invariance if the same GO signal gates shorter (a) or longer (b) movements. Note the higher velocities in (b).
    || [[short, long] cart [G, dP/dt]] vs time. G = GO signal, dP/dt = velocity profile.
  • image p411fig12.07 The left column simulation by VITE shows the velocity profile when the GO signal (G) starts with the movement. The right column shows that the peak velocity is much greater if a second movement begins when the GO signal is already positive.
    || Higher peak velocity due to target switching. VITE simulation of higher peak speed if second target rides on first GO signal. [[first, second] target cart [G, dP/dt]] vs time. Second target GO is much higher. G = GO signal, dP/dt = velocity profile.
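    A minimal VITE sketch consistent with these two simulations; the ramping GO signal and constants are illustrative. The present position vector P integrates the GO-gated difference vector, dP/dt = G(t)*(T - P):

      import numpy as np

      def vite(target, dt=0.01, steps=800):
          P, velocity = 0.0, []
          for k in range(steps):
              G = min(k * dt, 1.0)    # volitional GO signal ramps up, then saturates
              v = G * (target - P)    # GO-gated difference vector
              P += v * dt
              velocity.append(v)
          return np.array(velocity)

      short, longer = vite(1.0), vite(2.0)
      print(short.argmax() == longer.argmax())  # True: same duration for both reaches
      print(longer.max() / short.max())         # ~2.0: farther target, faster peak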
  • image p411fig12.08 Agonist-antagonist opponent organization of difference vector (DV) and present position vector (PPV) processing stages and how GO signals gate them.
    ||
  • image p412fig12.09 How a Vector Associative Map, or VAM, model uses mismatch learning during its development to calibrate inputs from a target position vector (T) and a present position vector (P) via mismatch learning of adaptive weights at the difference vector (D). See the text for details.
    || Vector Associative Map model (VAM). During the critical period, the Endogenous Random Generator (ERG+) turns on, activates P, and causes random movements that sample the workspace. When ERG+ shuts off, posture occurs. ERG- then turns on (rebound) and opens the Now Print (NP) gate, which dumps P into T. Mismatch learning enables adaptive weights between T and D to change until D (the mismatch) approaches 0. Then T and P are both correctly calibrated to represent the same positions.
  • image p413fig12.10 Processing stages in cortical areas 4 and 5 whereby the VITE model combines outflow VITE trajectory formation signals with inflow signals from the spinal cord and cerebellum that enable it to carry out movements with variable loads and in the presence of obstacles. See the text for details.
    || area 4 (rostral) <-> area 5 (caudal).
  • image p414fig12.11 Neurophysiological data from cortical areas 4 and 5 (every other column) and simulations thereof (other columns) during a reach.
    || activation vs time. (a) area 4 phasic RT (IFV) (b) area 4 tonic (OPV) (c) area 4 phasic-tonic (OFPV) (d) area 4 phasic MT (DVV) (e) area 5 phasic (DV) (f) area 5 tonic (PPV)
  • image p415fig12.12 The combined VITE, FLETE, cerebellar, and multi-joint opponent muscle model for trajectory formation in the presence of variable forces and obstacles.
    ||
  • image p416fig12.13 The DIRECT model learns, using a circular reaction that is energized by an Endogenous Random Generator, or ERG, to make motor-equivalent volitionally-activated reaches. This circular reaction learns a spatial representation of a target in space. It can hereby make accurate reaches with clamped joints and on its first try using a tool under visual guidance; see Figure 12.16.
    || DIRECT model (Bullock, Grossberg, Guenther 1993). Learns by a circular reaction. Learns a spatial representation to mediate between vision and action. Motor-equivalent reaching. Can reach target with clamped joints. Can reach target with a TOOL on the first try under visual guidance. How did tool use arise?!
  • image p416fig12.14 Computer simulations of DIRECT reaches with (b) a tool, (c) a clamped elbow, and (d) with a blindfold, among other constraints.
    || Computer simulations of DIRECT reaches [unconstrained, with TOOL, elbow clamped at 140°, blindfolded]
  • image p417fig12.15 The DIRECT and DIVA models have homologous circuits to learn and control motor-equivalent reaching and speaking, with tool use and coarticulation as resulting properties. See the text for why.
    || From Seeing and Reaching to Hearing and Speaking, Circular reactions (Piaget 1945, 1951, 1952). Homologous circuits for development and learning of motor-equivalent REACHING and SPEAKING. DIRECT TOOL use (Bullock, Grossberg, Guenther 1993), DIVA Coarticulation (Guenther 1995)
  • image p418fig12.16 Anatomical interpretations of the DIVA model processing stages.
    || [Feedforward control subsystem (FF), Feedback control subsystem (FB)]. Speech sound map (Left Ventral Premotor Cortex (LVPC)), Cerebellum, Articulatory velocity and position maps (Motor Cortex (MC)), Somatosensory Error Map (Inferior Parietal Cortex (IPC)), Auditory Error Map (Superior Temporal Cortex (STC)), Auditory State Map (Superior Temporal Cortex), Somatosensory State Map (Inferior Parietal Cortex), articulatory musculature via subcortical nuclei, auditory feedback via subcortical nuclei
  • image p419fig12.17 The auditory continuity illusion illustrates the ART Matching Rule at the level of auditory streaming. Its "backwards in time" effect of future context on past conscious perception is a signature of resonance.
    || Auditory continuity illusion. input, percept. Backwards in time - How does a future sound let a past sound continue through noise? Resonance! - It takes a while to kick in. After it starts, a future tone can maintain it much more quickly. Why does this not happen if there is no noise? - ART Matching Rule! TD harmonic filter is modulatory without BU input. It cannot create something out of nothing.
  • image p420fig12.18 The ARTSTREAM model explains and simulates the auditory continuity illusion as an example of a spectral-pitch resonance. Interactions of ART Matching Rule and asymmetric competition mechanisms in cortical strip maps explain how the tone selects the consistent frequency from the noise in its own stream while separating the rest of the noise into another stream.
    || ARTSTREAM model (Grossberg 1999; Grossberg, Govindarajan, Wyse, Cohen 2004). SPINET. Frequency and pitch strips. Bottom Up (BU) harmonic sieve. Top Down (TD) harmonic ART matching. Exclusive allocation. Learn pitch categories based on early harmonic processing. A stream is a Spectral-Pitch Resonance!
  • image p422fig12.19 The ARTSTREAM model includes mechanisms for deriving streams both from pitch and from source direction. See the text for details.
    || [left, right] cart Peripheral processing = [input signal-> outer & middle ear preemphasis-> basilar membrane gammatone filterbank-> energy measure]. Spectral stream layer-> spectral summation layer-> delays-> [f, tau] plane-> pitch stream layer-> pitch summation layer.
  • image p423fig12.20 The Spatial Pitch Network, or SPINET, model shows how a log polar spatial representation of the sound frequency spectrum can be derived from auditory signals occurring in time. The spatial representation allows the ARTSTREAM model to compute spatially distinct auditory streams.
    || SPINET model (Spatial Pitch Network) (Cohen, Grossberg, Wyse 1995). 1. Input sound 2. Gamma-tone filter bank 3. Short-term average energy spectrum 4. MAP transfer function 5. On-center off-surround and rectification 6. Harmonic weighting 7. Harmonic summation and competition -> PITCH
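    A hedged sketch of SPINET's last two stages (harmonic weighting and summation); the gammatone front end is replaced by a hand-built energy spectrum, and the decaying harmonic weights are an illustrative assumption. The summation recovers a missing fundamental, a classic pitch phenomenon:

      import numpy as np

      freqs = np.arange(50, 2001, 50)      # spectral channels (Hz)
      spectrum = np.zeros(len(freqs))
      for h in (400, 600, 800):            # harmonics 2-4 of 200 Hz;
          spectrum[freqs == h] = 1.0       # the fundamental itself is absent

      def pitch_strength(f0, decay=0.9):
          # weighted sum of spectral energy at harmonics of candidate pitch f0
          return sum(spectrum[freqs == f0 * k][0] * decay ** k
                     for k in range(1, 11) if f0 * k <= freqs[-1])

      candidates = [100, 150, 200, 250, 400]
      print(max(candidates, key=pitch_strength))   # 200: missing fundamental wins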
  • image p424fig12.21 One of the many types of data about pitch processing that are simulated by the SPINET model. See the text for details.
    || Pitch shifts with component shifts (Patterson, Wightman 1976; Schouten 1962). Pitch vs lowest harmonic number.
  • image p424fig12.22 Decomposition of a sound (bottom row) in terms of three of its harmonics (top three rows).
    ||
  • image p425fig12.23 ARTSTREAM simulations of the auditory continuity illusion and other streaming properties. (left column, top row) When two tones are separated by silence (Input), a percept of silence also separates them in a spectral-pitch resonance. (left column, bottom row) When two tones are separated by broadband noise, the percept of the tone continues through the noise in one stream (stream 1), while the remainder of the noise occurs in a different stream (stream 2). (right column) Some of the other streaming properties that have been simulated by the ARTSTREAM model.
    || Auditory continuity does not occur without noise. Auditory continuity in noise. Other simulated streaming data.
  • image p426fig12.24 Spectrograms of /ba/ and /pa/ show the transient and sustained parts of their spectrograms.
    ||
  • image p428fig12.25 (left architecture) Auditory-articulatory feedback loop whereby babbled sounds active learning in an imitative map that is later used to learn to reproduce the sounds of other speakers. An articulatory-to-auditory expectation renders learning possible by making the auditory and motor data dimensionally consistent, as in the motor theory of speech. (right architecture) Parallel streams in the ARTSPEECH model for learning speaker-independent speech and language meaning, including a mechanism for speaker normalization (right cortical stream) and for learning speaker-dependent vocalic qualities (left cortical stream).
    || left: Speaker-dependent vocalic qualities; right: Speaker-independent speech and language meaning
  • image p430fig12.26 The NormNet model shows how speaker normalization can be achieved using specializations of the same mechanisms that create auditory streams. See the text for how.
    || [Anchor vs Stream] log frequency map-> diagonals-> Speaker-independent acoustic item information-> [BU adaptive filter, TD learned expectation]-> learned item recognition categories
  • image p431fig12.27 The strip maps that occur in ARTSTREAM and NormNet are variants of a cortical design that also creates ocular dominance columns in the visual cortex.
    || Adult organization of V1 (Grinvald etal http://www.weizmann.ac.il/brain/images/cubes.html). (1) Ocular dominance columns (ODCs): Alternating strips of cortex respond preferentially to visual inputs of each eye (R/L corresponds to Right and Left eye inputs in the figure); (2) Orientation columns: A smooth pattern of changing orientation preference within each ODC, organized in a pinwheel-like fashion.
  • image p432fig12.28 (left image) The SpaN model simulates how spatial representations of numerical quantities are generated in the parietal cortex. (right image) Behavior numerosity data and SpaN model simulations of it.
    || (Left) preprocessor-> spatial number map-> Comparison wave. (Right) data axis: number of lever presses; model axis: node position in the spatial number axis
  • image p433fig12.29 Learning of place-value number maps associates language categories in the What cortical stream with numerical strip maps in the Where cortical stream. See the text for details.
    || (1) spoken word "seven"-> (2) What processing stream- learned number category <-> (3) What-Where learned associations <- (4) Where processing stream- spatial number map <- (5) visual cues of seven objects
  • image p436fig12.30 The conscious ARTWORD, or cARTWORD, laminar cortical speech model simulates how future context can disambiguate noisy past speech sounds in such a way that the completed percept is consciously heard to proceed from past to future as a feature-item-list resonant wave propagates through time.
    || cARTWORD: Laminar cortical model macrocircuit (Grossberg, Kazerounian 2011) Simulates PHONEMIC RESTORATION: Cognitive Working Memory (processed item sequences) - [Excitatory-> inhibitory-> habituative-> adaptive filter-> adaptive filter-> adaptive filter with depletable synapse-> Acoustic [item, feature]
  • image p436fig12.31 Working memories do not store longer sequences of events in the correct temporal order. Instead, items at the beginning and end of the list are often recalled first, and with the highest probability.
    || Working memory. How to design a working memory to code "Temporal Order Information" in STM before it is stored in LTM. Speech, language, sensory-motor control, cognitive planning. eg repeat a telephone number unless you are distracted first. Temporal order STM is often imperfect, eg Free Recall. [probability, order] of recall vs list position. WHY?
  • image p437fig12.32 Data from a free recall experiment illustrate the bowed serial position curve.
    || Serial position function for free recall Data: (Murdock 1962 JEP 64, 482-488). % correct vs position of word on a 40-word list. Primacy gradient can be a mixture of STM and LTM read-out.
  • image p437fig12.33 Item and Order working memory models explain free recall data, as well as many other psychological and neurobiological data, by simulating how temporal series of events are stored as evolving spatial patterns of activity at content-addressable item categories. The categories with the largest activities are rehearsed first, and self-inhibit their activity as they do so in order to prevent them from being rehearsed perseveratively. The laws whereby the items are stored in working memory obey basic design principles which ensure that list categories, or chunks, of sequences of stored items can be stably learned and remembered.
    || Working memory models: item and order, or competitive queuing (Grossberg 1978; Houghton 1990; Page, Norris 1998). Event sequence in time stored as an evolving spatial pattern of activity. Primacy gradient of working memory activation stores correct temporal order at content-addressable cells. The maximally activated cell population is performed next when a rehearsal wave is turned on. Output signal from chosen cell population inhibits its own activity to prevent perseveration: inhibition of return. Iterate until entire sequence is performed.
  • image p438fig12.34 The LTM Invariance Principle insists that words being stored in working memory for the first time (eg MYSELF) do not cause catastrophic forgetting of the categories that have already been learned for their subwords (eg MY, SELF, and ELF) or other subset linguistic groups.
    || LTM invariance principle. unfamiliar STM -> LTM familiar. How does STM storage of SELF influence STM storage of MY? It should not recode LTM of either MY or SELF!
  • image p439fig12.35 The Normalization Rule insists that the total activity of stored items in working memory has an upper bound that is approximately independent of the number of items that are stored.
    || Normalization Rule (Grossberg 1978). Total STM activity has a finite bound independent of the number of items (limited capacity of STM). Activity vs Items for [slow, quick] asymptotic energy growth.
  • image p439fig12.36 (1) Inputs to Item and Order working memories are stored by content-addressable item categories. (2) The relative activities of the item categories code the temporal order of performance. (3) In addition to excitatory recurrent signals from each working memory cell (population) to itself, there are also inhibitory recurrent signals to other working memory cells, in order to solve the noise-saturation dilemma. (4) A nonspecific rehearsal wave allows the most active cell to be rehearsed first. (5) As an item is being rehearsed, it inhibits its own activity using a feedback inhibitory interneuron. Perseverative performance is hereby prevented.
    || Item and order working memories. (1) Content-addressable item codes (2) Temporal order stored as relative sizes of item activities (3) Competition between working memory cells: Competition balances the positive feedback that enables the cells to remain active. Without it, cell activities may all saturate at their maximal values-> Noise saturation dilemma again! (4) Read-out by nonspecific rehearsal wave- Largest activity is the first out (5) STM reset self-inhibition prevents perseveration: [input/self-excitatory, rehearsal wave]-> [output, self-inhibition]
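    A minimal competitive-queuing readout sketch of properties (2), (4), and (5); the stored primacy gradient is illustrative. The rehearsal wave reads out the largest stored activity, and self-inhibition then removes that item from competition, preventing perseveration:

      stored = {"A": 0.4, "B": 0.3, "C": 0.2, "D": 0.1}   # primacy gradient

      acts, recalled = dict(stored), []
      while any(a > 0 for a in acts.values()):
          item = max(acts, key=acts.get)   # rehearsal wave: largest activity wins
          recalled.append(item)
          acts[item] = 0.0                 # STM reset: self-inhibition after output
      print(recalled)                      # ['A', 'B', 'C', 'D']: correct order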
  • image p440fig12.37 Simulation of a primacy gradient for a short list (left image) being transformed into a bowed gradient for a longer list (right image). Activities of cells that store the longer list are smaller due to the Normalization Rule, which follows from the shunting inhibition in the working memory network.
    || Primacy bow as more items stored. [activities, final y] (Left) Primacy gradient 6 items (Right) Bowed gradient 20 items
  • image p441fig12.38 The LTM Invariance Principle is realized if the relative sizes of the inputs to the list chunk level stay the same as more items are stored in working memory. This property, in turn, follows from shunting previously stored working memory activities when a new item occurs.
    || LTM Invariance Principle. Choose STM activities so that newly stored STM activities may alter the size of old STM activities without recoding their LTM patterns. In particular: New events do not change the relative activities of past event sequences, but may reduce their absolute activities. Why? Bottom-up adaptive filtering uses dot products: T(j) = sum[i=1 to n] x(i)*z(i,j) = total input to v(j). The relative sizes of inputs to coding nodes v(j) are preserved: x(i) -> w*x(i), 0 < w <= 1, leaves all past ratios T(j)/T(k) unchanged.
  • image p442fig12.39 (left column, top row) How a shunt plus normalization can lead to a bow in the stored working memory spatial pattern. Time increases in each row as every item is stored with activity 1 before it is shunted by w due to each successive item.
  • image p442fig12.40 Given the hypothesis in Figure 12.39 (right column, bottom row) and a generalized concept of steady, albeit possibly decreasing, attention to each item as it is stored in working memory, only a primacy gradient, a recency gradient, or a unimodal bow of activity across the working memory items can be stored.
    || LTM Invariance + Normalization. (... given conditions ...) Then the x(i) can ONLY form: [primacy gradient, recency gradient, unimodal bow]
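    A hedged storage sketch consistent with both constraints; the shunt factor w and attention constant c are illustrative assumptions. Each new item multiplies all stored activities by the same factor w (preserving their ratios, per the LTM Invariance Principle) and is itself stored with a share of the remaining normalized capacity (per the Normalization Rule). Short lists then yield a primacy gradient; longer lists bow:

      def store_list(n, w=0.9, c=0.5):
          acts = []
          for _ in range(n):
              acts = [a * w for a in acts]         # shunt previously stored items
              acts.append(c * (1.0 - sum(acts)))   # new item: remaining capacity
          return acts

      print([round(a, 3) for a in store_list(4)])   # monotone: primacy gradient
      print([round(a, 3) for a in store_list(12)])  # U-shaped: bowed gradient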
  • image p443fig12.41 Neurophysiological data from the Averbeck etal sequential copying experiments show the predicted primacy gradient in working memory and the self-inhibition of activity as an item is stored. When only the last item remains stored, it has the highest activity because it has been freed from inhibition by earlier items.
    || Neurophysiology of sequential copying
  • image p444fig12.42 The LIST PARSE laminar cortical model of working memory and list chunking that I published with Lance Pearson in 2008 simulated the Averbeck etal data in Figure 12.41, as in the left column of the figure. It also simulated cognitive data about working memory storage by human subjects. See the text for details.
    || LIST PARSE: Laminar cortical model of working memory and list chunking (Grossberg, Pearson 2008). Simulates data about: [immediate, delayed, continuous] distractor free recall; immediate serial recall; and variable-speed sequential performance of motor acts. [velocity, acceleration] vs time (ms) from recall cue.
  • image p445fig12.43 The LIST PARSE laminar cortical Cognitive Working Memory circuit, that is proposed to occur in ventrolateral prefrontal cortex, is homologous to the LAMINART circuit that models aspects of how visual cortex sees. The Motor Working Memory, VITE Trajectory Generator, and Variable-Rate Volitional Control circuits model how other brain regions, including dorsolateral prefrontal cortex, motor cortex, cerebellum, and basal ganglia, interact with the Cognitive Working Memory to control working memory storage and variable-rate performance of item sequences.
    || List parse circuit diagram. Connectivity convention. sequence chunks [<- BU filter, TD expectation ->] working memory. Working memory and sequence chunking circuit is homologous to visual LAMINART circuit!
  • image p446fig12.44 (left column, top row) LIST PARSE can model linguistic data from human subjects. In this figure, model parameters are fixed to enable a close fit to data about error-type distributions in immediate free recall experiments, notably transposition errors. (right column, top row) Simulation and data showing bowing of the serial position curve, including an extended primacy gradient. (left column, bottom row) The simulation curve overlays data about list length effects, notably the increasing recall difficulty of longer lists during immediate serial recall (ISR). (right column, bottom row) Simulation (bottom image) and data (top image) of the limited temporal extent for recall.
    || (1. TL) Error-type distributions in immediate serial recall (Hanson etal 1996). % occurrence vs serial position. Graph convention: Data- dashed lines; Simulations- solid lines. Six-letter visual ISR. Order errors- transpositions of neighboring items are the most common. Model explanation: Noisy activation levels change relative order in primacy gradient. Similar activation of neighboring items most susceptible to noise. Model parameters fitted on these data. (2. TR) Bowing of serial position curve (Cowan etal 1999). % correct vs serial position. Auditory ISR with various list lengths (graphs shifted rightward): For [, sub-]span lists- extended primacy, with one (or two) item recency; Auditory presentation- enhanced performance for last items. LIST PARSE: End effects- first and last items have half as many members; Echoic memory- last presented item retained in separate store. (3. BL) List length effects, circles (Crannell, Parrish 1968), squares (Baddeley, Hitch 1975), solid line- simulation. % list correct vs list length. Variable list length ISR: longer lists are more difficult to recall. LIST PARSE: More items- closer activation levels and lower absolute activity level with enough inputs; Noise is more likely to produce order errors; Activity levels more likely to drop below threshold. (4. BR) Limited temporal extent for recall (Murdock 1961). % recalled vs retention interval (s). ISR task with distractor-filled retention intervals (to prevent rehearsal): Increasing retention interval- decreases probability of recalling list correctly; Load dependence- longer lists more affected by delays; Performance plateau- subjects reach apparent asymptote. LIST PARSE: Increased convergence of activities with time; loss of order information.
  • image p447fig12.45 (left column) LIST PARSE simulations of the proportion of order errors as a function of serial position for 6 item lists with (a) an extended pause of 7 time units between the third and fourth items, and (b) pauses of 5 time units (solid curve) and 10 time units (dashed curve) between all items. (right column) Simulations (solid curves) and data (dashed curves) illustrating close model fits in various immediate free recall tasks.
    || (Left) Temporal grouping and presentation variability. Temporal grouping: Inserting an extended pause leads to inter-group bowing; Significantly different times of integration and activity levels across pause, fewer interchanges. (Right) Immediate free recall, and [delayed, continuous] distractor-free recall. Overt rehearsal IFR task with super-span (ie 20 item) lists: Extended recency- even more extended with shorter ISIs; Increased probability of recall with diminished time from last rehearsal; Early items in list rehearsed most. LIST PARSE (unique) for long lists: Incoming items form a recency gradient; Rehearsal (re-presentation) based upon level of activity.
  • image p448fig12.46 A Masking Field working memory is a multiple-scale, self-similar, recurrent shunting on-center off-surround network. It can learn list chunks that respond selectively to lists of item chunks of variable length that are stored in an item working memory at the previous processing stage. Chunks that code for longer lists (eg MYSELF vs MY) are larger, and give rise to stronger recurrent inhibitory neurons (red arrows).
    || How to code variable length lists? MASKING FIELDS code list chunks of variable length (Cohen, Grossberg 1986, 1987; Grossberg, Kazerounian 2011, 2016; Grossberg, Meyers 2000; Grossberg, Pearson 2008). Multiple-scale self-similar WM: masking field, adaptive filter. Variable-length coding- Masking fields select list chunks that are sensitive to WM sequences of variable length; Selectivity- Larger cells selectively code longer lists; Asymmetric competition- Larger cells can inhibit smaller cells more than conversely: Magic Number 7! Temporal order- Different list chunks respond to the same items in different orders, eg LEFT vs FELT.
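    An illustrative masking-field selection sketch; the working-memory activities, the 0.5 inhibition gain, and the size-scaled inhibition rule are assumptions. Chunk activation grows with how completely the stored pattern matches its list, and larger chunks inhibit smaller ones more than conversely, so MYSELF masks MY and SELF once all of its items are stored:

      wm = {"M": 0.4, "Y": 0.3, "S": 0.2, "E": 0.15, "L": 0.12, "F": 0.1}
      chunks = ["MY", "SELF", "MYSELF"]

      init = {c: sum(wm[ch] for ch in c) / len(c) for c in chunks}
      act = {a: init[a] - sum(0.5 * init[b] * len(b) / len(a)   # asymmetric inhibition
                              for b in chunks if len(b) > len(a))
             for a in chunks}
      print(max(act, key=act.get))   # MYSELF wins despite MY's larger initial input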
  • image p449fig12.47 This figure illustrates the self-similarity in a Masking Field of both its recurrent inhibitory connections (red arrows) and its top-down excitatory priming signals (green arrows) to the item chunk working memory.
    || Both recurrent inhibition and top-down excitatory priming are self-similar in a masking field. MYSELF <-> [MY, MYSELF]
  • image p452fig12.48 (left column) In experiments of (Repp etal 1978), the silence duration between the words GRAY and SHIP was varied, as was the duration of the fricative noise in S, with surprising results. (right column) The red arrow directs our attention to surprising perceptual changes as silence and noise durations increase. See the text for details.
    || Perceptual integration of acoustic cues, data (Repp etal 1978). GRAY-> silence duration-> SHIP (noise duration from start of word). Noise duration vs silence duration: GRAY SHIP <-> [GREAT SHIP <-> GRAY CHIP] <-> GREAT CHIP.
  • image p453fig12.49 The ARTWORD model that I published in 2000 with my PhD student Christopher Myers simulates data such as the (Repp etal 1978) data in Figure 12.48. See the text for details.
    || ARTWORD model (Grossberg, Myers 2000). Input phonetic features-> Phonemic item working memory-> Masking Field unitized lists-> Automatic gain control-> Phonemic item working memory. [habituative gate, adaptive filter]s.
  • image p453fig12.50 The ARTWORD perception cycle shows how sequences of items activate possible list chunks, which compete among each other and begin to send their top-down expectations back to the item working memory. An item-list resonance develops through time as a result.
    || ARTWORD perception cycle. (a) bottom-up activation (b) list chunk competition (c) item-list resonance (d) chunk reset due to habituative collapse.
  • image p454fig12.51 (left column) Even as a resonance with the list chunk GRAY begins to develop, if the delay between "gray" and "chip" is increased, greater habituation of this resonance may allow the GREAT chunk to begin to win, thereby smoothly transferring the item-list resonance from GRAY to GREAT through time. (right column) Simulation of a resonant transfer from GRAY to GREAT, and back again, as the silence interval between the words "gray" and "chip" increases. The red region between the GRAY and GREAT curves calls attention to when GREAT wins. See the text for details.
    || Resonant transfer, as silence interval increases. (left) Delay GRAY resonance weakens. A delayed additional item can facilitate perception of a longer list. (right) GRAY-> GREAT-> GRAY.
  • image p455fig12.52 Simulation of cARTWORD dynamics in response to the complete list /1/-/2/-/3/. The relevant responses are surrounded by a red box.
    || Presentation of a normal sequence: input /1/-/2/-/3/. |c(i,1)-5| vs time (msec). List chunks select most predictive code. Order stored in WM layers. Resonant activity of /1/-/2/-/3/ in item and feature layers corresponds to conscious speech percept.
  • image p456fig12.53 Simulation of cARTWORD dynamics in response to the partial list /1/-silence-/3/ with /2/ replaced by silence. Only the representations of these items can be seen in the red box.
    || Presentation with silence duration: input /1/-silence-/3/. |c(i,1)-5| vs time (msec). List chunks select most predictive code. Order stored in WM layers. Gap in resonant activity of /1/-silence-/3/ in item and feature layers corresponds to perceived silence.
  • image p456fig12.54 Item /2/ is restored in the correct list position in response to the list /1/-noise-/3/.
    || Presentation with noise: input /1/-noise-/3/. |c(i,1)-5| vs time (msec). List chunks select the most predictive code. Order restored in WM layers. Resonant activity of /1/-/2/-/3/ in item and feature layers corresponds to restoration of item /2/ replaced by noise in input.
  • image p457fig12.55 Item /4/ is restored in the correct list position in response to the list /1/-noise-/5/. This and the previous figure show how future context can disambiguate past noisy sequences that are otherwise identical.
    || Presentation with noise: input /1/-noise-/5/. |c(i,1)-5| vs time (msec). List chunks select the most predictive code. Order restored in WM layers. Resonant activity of /1/-/4/-/5/ in item and feature layers corresponds to restoration of item /4/ replaced by noise in input.
  • image p459fig12.56 (Grossberg, Pearson 2008) proposed that the ability of working memories to store repeated items in a sequence represents rank information about the position of an item in a list using numerical hypercolumns in the prefrontal cortex (circles with numbered sectors: 1,2,3,4). These numerical hypercolumns are conjointly activated by inputs from item categories and from the analog spatial representation of numerosity in the parietal cortex. These parietal representations (overlapping Gaussian activity profiles that obey a Weber Law) had earlier been modeled by (Grossberg, Repin 2003). See the text for details.
    || Item-order-rank working memory, rank information from parietal numerosity circuit (Grossberg, Pearson 2008; Grossberg, Repin 2003). [Sensory working memory-> adaptive filter-> list chunk-> attentive prime-> Motor working memory]-> [large, small] numbers-> transfer functions with variable thresholds and slopes-> uniform input-> integrator amplitude-> number of transient sensory signals.
  • image p460fig12.57 The lisTELOS architecture explains and simulates how sequences of saccadic eye movement commands can be stored in a spatial working memory and recalled. Multiple brain regions are needed to coordinate these processes, notably three different basal ganglia loops to control saccade storage, choice, and performance, and the supplementary eye fields (SEF) to choose the next saccadic command from a stored sequence. Because all working memories use a similar network design, this model can be used as a prototype for storing and recalling many other kinds of cognitive, spatial, and motor information. See the text for details.
    || lisTELOS model- Spatial working memory (Silver, Grossberg, Bullock, Histed, Miller 2011). Simulates how [PPC, PFC, SEF, FEF, SC] interact with 3 BG loops to learn and perform sequences of saccadic eye movements.
  • image p461fig12.58 The lisTELOS model built upon key processes that were earlier modeled by the TELOS model. See the text for details.
    || TELOS model (Brown, Bullock, Grossberg 1999, 2004). shows [BG nigro-[thalamic, collicular], FEF, ITa, PFC, PNR-THAL, PPC, SEF, SC, V1, V4/ITp, Visual Cortex input] and [GABA].
  • image p462fig12.59 The TELOS model clarifies how reactive vs. planned eye movements may be properly balanced against one another, notably how a fast reactive movement is prevented from occurring in response to onset of a cue that requires a different, and more contextually appropriate, response, even if the latter response takes longer to be chosen and performed. The circuit explains how "the brain knows it before it knows" what this latter response should be, by changing the balance of excitation to inhibition in the basal ganglia (BG) so that the reactive gate stays shut until the correct target position can be chosen by a frontal-parietal resonance.
    || Balancing reactive vs. planned movements (Brown, Bullock, Grossberg 2004). (a) shows [FEF, PPC]-> [BG, SC], and BG-> SC. (b) FTE vs time (msec) for [fixation, saccade, overlap, gap, delayed saccade] tasks.
  • image p463fig12.60 Rank-related activity in prefrontal cortex and supplementary eye fields from two different experiments. See the text for details.
    || Rank-related activity in PFC and SEF. Prefrontal cortex (Averbeck etal 2003) [square, inverted triangle]. Supplementary eye field (Isoda, Tanji 2002).
  • image p464fig12.61 (left column) A microstimulating electrode causes a spatial gradient of habituation. (right column) The spatial gradient of habituation that is caused by microstimulation alters the order of saccadic performance of a stored sequence, but not which saccades are performed, using interactions between the prefrontal cortex (PFC) working memory and the supplementary eye field (SEF) saccadic choice.
    || (left) Microstimulation causes habituation (Grossberg 1968). Stimulation caused habituation. Cells close to the stimulation site habituate most strongly. (right) Stimulation biases selection PFC-> SEF. PFC: activity gradient in working memory. SEF: microstimulation causes habituation; during selection, habituated nodes are less likely to win the competition.
  • image p464fig12.62 The most habituated positions have their neuronal activities most reduced, other things being equal, as illustrated by the gradient from deep habituation (red) to less habituation (pink). The saccadic performance orders (black arrows) consequently tend to end in the most habituated positions that have been stored.
    || The most habituated position is foveated last. For each pair of cues, the cue closest to the stimulation site is most habituated -- and least likely to be selected. Because stimulation spreads in all directions, saccade trajectories tend to converge.
  • image p465fig12.63 Neurophysiological data (left image) and lisTELOS simulation (right image) showing how microstimulation biases saccadic performance order but not the positions to which the saccades will be directed. See the text for details.
    || Saccade trajectories converge to a single location in space. Microstimulation biased selection so saccade trajectories converged toward a single location in space. [Data, model] contra <-> Ipsi (msec)
  • image p467fig12.64 Some of the auditory cortical regions that respond to sustained or transient sounds. See text for details.
    || Some auditory cortical regions. Core <-> belt <-> parabelt. [Belt, Core, ls, PAi, Parabelt, PGa, TAs, TE, TP, TPO, sts].
  • image p468fig12.65 Linguistic properties of the PHONET model and some of the data that it simulates. The upper left image summarizes the asymmetric transient-to-sustained gain control that helps to create invariant intraword ratios during variable-rate speech. The lower left image summarizes the rate-dependent gain control of the ARTPHONE model that creates rate-invariant working memory representations in response to sequences of variable-rate speech. The right image summarizes the kind of paradoxical VC-CV category boundary data of (Repp 1980) that ARTPHONE simulates. See the text for details.
    || (left upper) [transient, sustained] [working memory, filter, category]. (left lower) phone inputs-> [input rate estimate, features], Features w <- habituative transmitter gates -> categories-> rate invariant phonetic output, input rate estimate-> gain control-> [features, categories] rate-dependent integration of categories and features. (right) % 2-stop vs VC-CV silent interval (msec): [ib-ga, ib-ba, iga, iba].
  • image p469fig12.66 (left column) A schematic of how preserving relative duration, as in the first and third images, of consonant and vowel pairs can preserve a percept, in this case of /ba/, but not doing so, as in the first and second images, can cause a change in percept, as from /ba/ to /wa/, as in the data of (Miller, Liberman 1979) that PHONET simulates. (right column) Changing frequency extent can also cause a /ba/ - /wa/ transition, as shown in data of (Schwab, Sawusch, Nusbaum 1981) that PHONET also simulates.
    || (left image) Maintaining relative duration as speech speeds up preserves percept (Miller, Liberman 1979). frequency vs time- [/ba/, /wa/, /ba/] (right image) Changing frequency extent causes /ba/-/wa/ transition (Schwab, Sawusch, Nusbaum 1981). frequency vs time- [/ba/, /wa/] Δt extent.
  • image p469fig12.67 PHONET contains transient and sustained cells that respond to different kinds of sounds, notably the transients of certain consonants and the sustained sounds of certain vowels. It then uses the transient working memory to gain-control the integration rate of the sustained working memory to which these different detectors input.
    || Phonetic model summary. (left) Acoustic tokens [consonant, vowel]. (middle) Acoustic detectors [transient (sensitive to rate), Sustained (sensitive to duration)]. (right) Working memory, Spatially stored transient pattern (extent) + gain control-> spatially stored sustained pattern.
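    A minimal sketch of the gain-control idea in Figures 12.65-12.67, not the published PHONET equations: a working memory integrates feature inputs at a rate set by a transient rate estimate, so the stored pattern is approximately invariant under changes of speech rate. All names and parameter values below are illustrative.

        import numpy as np

        def store_pattern(features, speech_rate, dt=0.001):
            # A faster utterance presents the same features for less time, but the
            # transient channel's rate estimate proportionally speeds up integration.
            duration = 0.5 / speech_rate   # input interval shrinks as speech speeds up
            gain = speech_rate             # gain control from the transient stream
            x = np.zeros(len(features))
            for _ in range(int(duration / dt)):
                x += dt * gain * (-x + features)
            return x

        features = np.array([0.9, 0.3, 0.6])
        slow = store_pattern(features, speech_rate=1.0)
        fast = store_pattern(features, speech_rate=2.0)   # same utterance, twice as fast
        print(np.allclose(slow, fast, atol=1e-3))         # True: rate-invariant code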
  • image p471fig12.68 A mismatch reset of /b/ in response to the /g/ in [ib]-[ga] can rapidly shut off the [ib] percept, leading to the percept of [ga] after an interval of silence. In contrast, resonant fusion of the two occurrences of /b/ in [ib]-[ba] can cause a continuous percept of sound [iba] to occur during times at which silence is heard in response to [ib]-[ga].
    || Mismatch vs resonant fusion
  • image p473fig12.69 Error rate and mean reaction time (RT) data from the lexical decision experiments of (Schvaneveldt, McDonald 1981). ART Matching Rule properties explain these data in (Grossberg, Stone 1986).
    || (left) Error rate vs type of prime [R, N, U], [non,] word. (right) Mean RT (msec) vs type of prime [R, N, U], [non,] word.
  • image p474fig12.70 The kind of model macrocircuit that was used in (Grossberg, Stone 1986) to explain lexical decision task data.
    || inputs-> A1 <-> A2 iconic sensory features <-> A3 item and order in sensory STM <-> A4 list parsing in STM (masking field) <-> A5 semantic network (self-feedback). [A4, A5] <-> V* visual object recognition system. M1-> [outputs, A1]. M1 <-> M2 iconic motor features <-> M3 item and order in motor STM. A2-> M2. A3-> M3.
  • image p476fig12.71 Word frequency data of (Underwood, Freund 1970) that were explained in (Grossberg, Stone 1986).
    || percent errors vs frequency of old words [L-H to H-H, L-L to H-L].
  • image p481fig13.01 Macrocircuit of the functional stages and anatomical interpretations of the Cognitive-Emotional-Motor, or CogEM, model.
    || Drive-> hypothalamus value categories <-> amygdala incentive motivational learning-> Orbitofrontal cortex- object-value categories <-> sensory cortex- invariant object categories- conditioned reinforcer learning-> amygdala-> hypothalamus.
  • image p483fig13.02 The object-value categories in the orbitofrontal cortex require converging specific inputs from the sensory cortex and nonspecific incentive motivational inputs from the amygdala in order to fire. When the orbitofrontal cortex fires, it can deliver top-down ART Matching Rule priming signals to the sensory cortical area by which it was activated, thereby helping to choose the active recognition categories there that have the most emotional support, while suppressing others, leading to attentional blocking of irrelevant cues.
    || Cognitive-Emotional-Motor (CogEM) model. Drive-> amygdala incentive motivational learning-> orbitofrontal cortex- needs converging cue and incentive inputs to fire <-> sensory cortex- conditioned reinforcer learning-> amygdala. CS-> sensory cortex. Motivated attention closes the cognitive-emotional feedback loop, focuses on relevant cues, and causes blocking of irrelevant cues.
  • image p483fig13.03 The predicted processing stages of CogEM have been supported by anatomical studies of connections between sensory cortices, amygdala, and orbitofrontal cortex.
    || Adapted from (Barbas 1995). sensory cortices = [visual, somatosensory, auditory, gustatory, olfactory]. sensory cortices-> amygdala-> orbital prefrontal cortex. sensory cortices-> orbital prefrontal cortex. [visual cortex, amygdala]-> lateral prefrontal cortex.
  • image p484fig13.04 The top-down feedback from the orbitofrontal cortex closes a feedback loop that supports a cognitive-emotional resonance. If this resonance can be sustained long enough, it enables us to have feelings at the same time that we experience the categories that caused them.
    || Cognitive-Emotional resonance. Basis of "core consciousness" and "the feeling of what happens". (Damasio 1999) derives heuristic version of CogEM model from his clinical data. Drive-> amygdala-> prefrontal cortex-> sensory cortex, resonance around the latter 3. How is this resonance maintained long enough to become conscious?
  • image p484fig13.05 Classical conditioning is perhaps the simplest kind of associative learning.
    || Classical conditioning (nonstationary prediction). Bell (CS)-> (CR), Shock (US)-> Fear (UR), associative learning.
  • image p485fig13.06 (left column) An inverted-U occurs in conditioned reinforcer strength as a function of the ISI between the CS and the US. Why is learning attenuated at 0 ISI? (right column) Some classical conditioning data that illustrate the inverted-U in conditioning as a function of the ISI.
    || InterStimulus Interval (ISI) effect. Data from (Smith etal 1969; Schneiderman, Gormezano 1964).
  • image p485fig13.07 The paradigm of secondary conditioning. See the text for details.
    || Secondary conditioning (Advertising!). [CS1, CS2] become conditioned reinforcers.
  • image p486fig13.08 The blocking paradigm illustrates how cues that do not predict different consequences may fail to be attended.
    || Blocking- minimal adaptive prediction. Phase [I, II] - CS2 is irrelevant.
  • image p486fig13.09 Equally salient cues can be conditioned in parallel to an emotional consequence.
    || Parallel processing of equally salient cues vs overshadowing (Pavlov).
  • image p486fig13.10 Blocking follows if both secondary conditioning and attenuation of conditioning at a zero ISI occur.
    || Blocking = ISI + secondary conditioning.
  • image p487fig13.11 The three main properties of CogEM that help to explain how attentional blocking occurs.
    || CogEM explanation of attentional blocking. Internal drive input <-> Conditioned reinforcer learning (self-recurrent) <-> Competition for STM <- Motor learning. 1. Sensory representations compete for limited capacity STM. 2. Previously reinforced cues amplify their STM via positive feedback. 3. Other cues lose STM via competition.
  • image p488fig13.12 (left column) How incentive motivational feedback amplifies activity of a sensory cortical cell population. (right column) A sensory cortical cell population whose activity is amplified by incentive motivational feedback can suppress the activities of less activated populations via self-normalizing recurrent competitive interactions.
    || Motivational feedback and blocking. (left) sensory input CS, STM activity without motivational feedback, STM activity with motivational feedback. (right) STM suppressed by competition, STM amplified by (+) feedback.
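    The blocking properties in Figures 13.11-13.12 can be illustrated with a small simulation of a recurrent shunting on-center off-surround network in which one cue's sensory representation also receives incentive motivational feedback M. This is a minimal sketch with illustrative parameters and a squaring signal function, not the fitted CogEM equations.

        import numpy as np

        def f(w):                      # faster-than-linear signal favors stronger activities
            return w ** 2

        def run(I, M, steps=4000, dt=0.01, A=1.0, B=1.0):
            x = np.zeros_like(I, dtype=float)
            for _ in range(steps):
                s = f(x)
                exc = I + M + s        # on-center: input, motivational feedback, self-excitation
                inh = s.sum() - s      # off-surround: recurrent competition from other cues
                x += dt * (-A * x + (B - x) * exc - x * inh)
            return x

        I = np.array([0.5, 0.5])                 # two equally salient cues
        print(run(I, M=np.array([0.0, 0.0])))    # no feedback: equal STM activities
        print(run(I, M=np.array([0.4, 0.0])))    # feedback amplifies cue 0's STM and
                                                 # suppresses cue 1's STM via competition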
  • image p489fig13.13 (top row) If a positive ISI separates onset of a CS and US, then the CS can sample the consequences of the US during the time interval before it is inhibited by it. (bottom row) A CogEM simulation of the inverted-U in conditioning as a function of the ISI between CS and US.
    || Positive ISI and conditioning.
  • image p490fig13.14 In order for conditioning to work properly, the sensory representation needs to have at least two successive processing stages. See the text for why.
    || Model of Cognitive-Emotional circuit. Drive-> Drive representation-> ??? <-> Sensory STM <-CS
  • image p490fig13.15 The CogEM circuit is an ancient design that is found even in mollusks like Aplysia. See the text for details.
    || Aplysia (Buonomano, Baxter, Byrne, Neural Networks 1990; Grossberg, Behavioral and Brain Sciences 1983). Facilitator neuron ~ drive representation.
  • image p492fig13.16 (left column) In order to satisfy all four postulates, there needs to be UCS-activated arousal of a polyvalent CS-activated sampling neuron. (right column) The arousal needs to be nonspecific in order to activate any of the CSs that could be paired with the UCS.
    || Polyvalent CS sampling and US-activated nonspecific arousal.
  • image p493fig13.17 (top row) Overcoming the ostensible contradiction that seems to occur when attempting to simultaneously realize hypotheses (3) and (4). (bottom row) The problem is overcome by assuming the existence of US-activated drive representation to which CSs can be associated, and that activate nonspecific incentive motivational feedback to sensory representations.
    || Learning nonspecific arousal and CR read-out. (top) Learning to control nonspecific arousal, Learning to read-out the CR (bottom) Drive representation, Incentive motivation.
  • image p494fig13.18 Realizing the above constraints favors one particular circuit. Circuits (a) and (b) are impossible. Circuit (d) allows previously occurring sensory cues to be stored in STM. Circuit (e) in addition enables a CS to be stored in STM without initiating conditioning in the absence of a US.
    || Learning to control nonspecific arousal and read-out of the CR: two stages of CS. (d) & (e) polyvalent cells.
  • image p494fig13.19 (left column, top row) Secondary conditioning of both arousal and a specific response is now possible. (bottom row) The CogEM circuit may be naturally extended to include multiple drive representations and inputs. (right column, top row) The incentive motivational pathway is also conditionable in order to enable motivational sets to be learned.
    || Secondary conditioning. Homology: conditionable incentive motivation. Multiple drive representations and inputs.
  • image p496fig13.20 (top image) A single avalanche sampling cell can learn an arbitrary space-time pattern by sampling it as a temporally ordered series of spatial patterns using a series of outstars. Once an avalanche
  • image p497fig13.21 (left column) An early embodiment of nonspecific arousal was a command cell in such primitive animals as crayfish. (right column) The songbird pattern generator is also an avalanche. This kind of circuit raises the question of how the connections self-organize through developmental learning.
    || Nonspecific arousal as a command cell. Crayfish swimmerets (Stein 1971). Songbird pattern generator (Fee etal 2002). Motor-> RA-> HVC(RA).
  • image p498fig13.22 (left column, top row) Adaptive filtering and conditioned arousal are both needed to regulate what cues can learn to activate particular space-time patterns. These developments lead inexorably to basic cognitive abilities, as embodied in the 3D LAMINART models for 3D vision and figure-ground perception (Chapter 11) and the 3D ARTSCAN SEARCH model for invariant object learning, recognition, and 3D search (Chapter 6). (right column, top row) Conditioned arousal enables only emotionally important cues to activate a motivationally relevant space-time pattern. (bottom row) Conditioned arousal and drive representations arise naturally from the unlumping of avalanche circuits to make them selective to motivationally important cues. The MOTIVATOR model is a natural outcome of this unlumping process (this chapter).
    || (top) Adaptive filtering and Conditioned arousal. Towards Cognition: need to filter inputs to the command cell. Towards Emotion: important signals turn arousal ON and OFF. (bottom) Conditioned arousal and Drive representations. Competition between conditioned arousal sources at drive representations, eg amygdala.
  • image p499fig13.23 (left column) Self-organization in avalanches includes adaptive filtering by instars, serial learning of temporal order, and learned read-out of spatial patterns by outstars. (right column) Serial learning of temporal order occurs in recurrent associative networks.
    || (left) Self-organizing avalanches [instars, serial learning, outstars]. (right) Serial list learning.
  • image p500fig13.24 Both primary excitatory and inhibitory conditioning can occur using opponent processes and their antagonistic rebounds.
    || Opponent processing. Cognitive drive associations. Primary associations: excitatory [CS, US, Fear], inhibitory [CS, US, Fear, Relief rebound].
  • image p501fig13.25 When an unbiased transducer is embodied by a finite rate physical process, mass action by a chemical transmitter is the result.
    || Unbiased transducer (Grossberg 1968). S = input, T = output, T = S*B, where B is the gain. Suppose T is due to release of chemical transmitter y at a synapse: release rate T = S*y (mass action); accumulation y ~= B.
  • image p501fig13.26 A simple differential equation describes the processes of transmitter accumulation and release that do their best, at a finite rate, to carry out unbiased transduction.
    || Transmitter accumulation and release. Transmitter y cannot be restored at an infinite rate: T = S*y, y ~= B. Differential equation: d[dt: y] = A*(B - y) - S*y = accumulate - release. Transmitter y tries to recover to ensure unbiased transduction. What if it falls behind? Evolution has exploited the good properties that happen then.
  • image p502fig13.27 Despite the fact that less transmitter y is available after persistent activation by a larger input signal S, the gated output signal S*y is larger due to the mass action gating of S by y.
    || Minor mathematical miracle. At equilibrium: 0 = d[dt: y] = A*(B - y) - S*y. Transmitter y decreases when input S increases: y = A*B/(A + S). However, output S*y increases with S!: S*y = S*A*B/(A + S) (gate, mass action).
  • image p502fig13.28 Fast increments and decrements in an input S lead to slow habituation of the habituative gate, or medium-term memory, transmitter y. The output T is a product of these fast and slow variables, and consequently exhibits overshoots, habituation, and undershoots in its response.
    || Habituative transmitter gate: Input; Habituative gate d[dt: y] = A*(B - y) - S*y; Output [overshoot, habituation, undershoot]s, Weber Law.
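    The overshoot-habituation-undershoot response of Figure 13.28 follows directly from simulating the gate equation above with a step input; the parameter values below are illustrative only.

        import numpy as np

        A, B, dt = 0.5, 1.0, 0.01
        y, T = A * B / (A + 1.0), []           # start at equilibrium for baseline S = 1
        for t in np.arange(0.0, 100.0, dt):
            S = 3.0 if 20.0 <= t < 60.0 else 1.0   # step up at t=20, back down at t=60
            T.append(S * y)                    # gated output T = S*y
            y += dt * (A * (B - y) - S * y)

        # Overshoot at onset (y still high), habituated plateau, undershoot at offset:
        print(T[int(19/dt)], T[int(20.05/dt)], T[int(59/dt)], T[int(60.05/dt)])
        # ~0.33 (baseline)   ~0.90 (overshoot)  ~0.43 (plateau)  ~0.15 (undershoot)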
  • image p503fig13.29 The ON response to a phasic ON input has Weber Law properties due to the divisive terms in its equilibrium response, which are due to the habituative transmitter.
    || ON-response to phasic ON-input. S1 = f(I+J): y1 = A*B/(A+S1), T1 = S1*y1 = A*B*S1/(A+S1); S2 = f(I): y2 = A*B/(A+S2), T2 = S2*y2 = A*B*S2/(A+S2). ON = T1 - T2 = A^2*B*(f(I+J) - f(I)) / ((A+f(I))*(A+f(I+J))). Note Weber Law. When f has a threshold, small I requires larger J to fire due to numerator, but makes suprathreshold ON bigger due to denominator. When I is large, quadratic in denominator and upper bound of f make ON small.
  • image p504fig13.30 OFF rebound occurs when the ON-input shuts off due to the imbalance that is caused by the ON input in the habituation of the transmitters in the ON and OFF channels. The relative sizes of ON responses and OFF rebounds are determined by the arousal level I.
    || OFF-rebound due to phasic input offset. Shut off J (not I!). Then: S1 = f(I), S2 = f(I); y1 ~= A*B/(A+f(I+J)) < y2 ~= A*B/(A+f(I)) because y1 and y2 are SLOW; T1 = S1*y1, T2 = S2*y2, T1 < T2. OFF = T2 - T1 = A*B*f(I)*(f(I+J) - f(I)) / ((A+f(I))*(A+f(I+J))). Note Weber Law due to remembered previous input. Arousal sets sensitivity of rebound: OFF/ON = f(I)/A. Why is the rebound transient? Note equal f(I) inputs.
  • image p504fig13.31 Behavioral contrast can occur during reinforcement learning due to decreases in either positive or negative reinforcers. See Figure 13.32 for illustrative operant conditioning data.
    || Behavioral contrast: rebounds! Shock level vs trials. 1. A sudden decrease in frequency or amount of food can act as a negative reinforcer: Frustration. 2. A sudden decrease in frequency or amount of shock can act as a positive reinforcer: Relief.
  • image p505fig13.32 Response suppression and the subsequent antagonistic rebounds are both calibrated by the inducing shock levels.
    || Behavioral contrast (Reynolds 1968). Responses per minute (VI schedule) vs Trial shock level.
  • image p505fig13.33 An unexpected event can disconfirm ongoing processing by triggering a burst of nonspecific arousal that causes antagonistic rebounds in currently active gated dipoles, whether cognitive or affective.
    || Novelty reset: rebound to arousal onset. 1. Equilibrate to I and J: S1 = f(I+J), y1 = A*B/(A+S1); S2 = f(I), y2 = A*B/(A+S2). 2. Keep phasic input J fixed; increase arousal I to I* = I + ∆I: OFF reaction if T1 < T2; OFF = T2 - T1 = f(I*)*y2 - f(I*+J)*y1 = { A^2*B*(f(I*) - f(I*+J)) + A*B*(f(I*)*f(I+J) - f(I)*f(I*+J)) } / ((A+f(I))*(A+f(I+J))). 3. How to interpret this complicated equation?
  • image p506fig13.34 With a linear signal function, one can prove that the rebound increases with both the previous phasic input intensity J and the unexpectedness of the disconfirming event that caused the burst of nonspecific arousal.
    || Novelty reset: rebound to arousal onset.
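    For a linear signal f(w) = w, the rebound formula in Figure 13.33 simplifies to a form that makes both of these dependencies explicit (a worked special case in the figure's own A, B, I, J, ∆I notation):

        \[
          \mathrm{OFF} \;=\; T_2 - T_1 \;=\; \frac{A\,B\,J\,(\Delta I - A)}{(A+I)\,(A+I+J)},
          \qquad I^{*} = I + \Delta I .
        \]

    The rebound is therefore positive only when the arousal burst exceeds a threshold (∆I > A), and it grows with both the prior phasic input intensity J and the size ∆I of the unexpected increment, as the caption states.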
  • image p506fig13.35 A shock, or other reinforcing event, can have multiple cognitive and emotional effects on different brain processes.
    || Multiple functional roles of shock. 1. Reinforcement sign reversal: An isolated shock is a negative reinforcer; In certain contexts, a shock can be a positive reinforcer. 2. STM-LTM interaction: Prior shock levels need to be remembered (LTM) and used to calibrate the effect of the present shock (STM). 3. Discriminative and situational cues: The present shock level is unexpected (novel) with respect to the shock levels that have previously been contingent upon experimental cues: shock as a [1. reinforcer, 2. sensory cue, 3. expectancy].
  • image p509fig13.36 How can life-long learning occur without passive forgetting or associative saturation?
    || Associative learning. 1. Forgetting (eg remember childhood experiences): forgetting [is NOT passive, is Selective]. 2. Selective: larger memory capacity. 3. Problem: why doesn't ongoing learning saturate associative memories?
  • image p510fig13.37 A disconfirmed expectation can cause an antagonistic rebound that inhibits prior incentive motivational feedback, but by itself is insufficient to prevent associative saturation.
    || Learn on-response. 1. CS-> ON, disconfirmed expectation-> antagonistic rebound, OFF-channel is conditioned 2. CS-> [ON, OFF]-> net, zero net output. What about associative saturation?
  • image p510fig13.38 Dissociation of the read-out of previously learned adaptive weights, or LTM traces, and of the read-in of new weight values enables back-propagating dendritic action potentials to teach the new adaptive weight values.
    || Dissociation of LTM read-out and read-in. Backpropagating dendritic action potentials as teaching signals. 1. LTM: dendritic spines (Rall 1960).
  • image p510fig13.39 Shunting competition and informational noise suppression in affective gated dipoles, plus back-propagating action potentials for teaching signals, enable the net normalized adaptive weights to be learned. They never saturate!
    || Learn net dipole output pattern. Opponent "decision" controls learning. Cf. competitive learning. Learning signal, opponent extinction.
  • image p512fig13.40 A conditioning paradigm that illustrates what it means for conditioned excitators to extinguish.
    || Conditioned excitor extinguishes. 1. Learning phase: CS1 bell-> US, CS1-> Fear(-). 2. Forgetting phase: CS1 bell-> Forgetting. 3. The expectation of shock is disconfirmed.
  • image p513fig13.41 A conditioning paradigm that illustrates what it means for conditioned inhibitors not to extinguish.
    || Conditioned inhibitor does not extinguish. 1. Learning phase: CS1 light-> shock, CS1-> Fear(-); Forgetting phase: n/a. 2. Learning phase: CS1 + CS2 bell-> no shock; CS2-> relief. Forgetting phase: CS2 bell-> no forgetting. SAME CS could be used! SAME "teacher" in forgetting phase! Something else must be going on, or else causality would be violated!
  • image p513fig13.42 A conditioned excitor extinguishes because the expectation that was learned of a shock during the learning phase is disconfirmed during the forgetting phase.
    || Conditioned excitor extinguishes. Learning phase: CS1 bell-> US; CS1-> Fear(-); CS1-> shock; CS1 is conditioned to an expectation of shock. Forgetting phase: CS1 bell-> forgetting. The expectation of shock is disconfirmed.
  • image p513fig13.43 A conditioned inhibitor does not extinguish because the expectation that was learned of no shock during the learning phase is not disconfirmed during the forgetting phase.
    || Conditioned inhibitor does not extinguish. 1. Learning phase: CS1 light-> shock; CS1-> Fear(-). Forgetting phase: n/a. 2. Learning phase: CS1 light + CS2 bell-> NO shock; CS2-> relief(+); CS2-> no shock. Forgetting phase: CS2 bell-> no forgetting. The expectation that "no shock" follows CS2 is NOT disconfirmed!
  • image p514fig13.44 Analog of the CogEM model in Figure 6.1 of (Damasio 1999).
    || (a) map of object X-> map of proto-self at inaugural instant-> map of proto-self modified-> assembly of second-order map. (b) map of object X enhanced-> second-order map imaged.
  • image p519fig14.01 Coronal sections of prefrontal cortex. Note particularly the areas 11, 13, 14, and 12o.
    ||
  • image p520fig14.02 Macrocircuit of the main brain regions, and connections between them, that are modelled in the unified predictive Adaptive Resonance Theory (pART) of cognitive-emotional and working memory dynamics. Abbreviations in red denote brain regions used in cognitive-emotional dynamics. Those in green denote brain regions used in working memory dynamics. Black abbreviations denote brain regions that carry out visual perception, learning and recognition of visual object categories, and motion perception, spatial representation and target tracking. Arrows denote non-adaptive excitatory synapses. Hemidiscs denote adaptive excitatory synapses. Many adaptive synapses are bidirectional, thereby supporting synchronous resonant dynamics among multiple cortical regions. The output signals from the basal ganglia that regulate reinforcement learning and gating of multiple cortical areas are not shown. Also not shown are output signals from cortical areas to motor responses. V1: striate, or primary, visual cortex; V2 and V4: areas of prestriate visual cortex; MT: Middle Temporal cortex; MST: Medial Superior Temporal area; ITp: posterior InferoTemporal cortex; ITa: anterior InferoTemporal cortex; PPC: Posterior Parietal Cortex; LIP: Lateral IntraParietal area; VPA: Ventral PreArcuate gyrus; FEF: Frontal Eye Fields; PHC: ParaHippocampal Cortex; DLPFC: DorsoLateral PreFrontal Cortex; HIPPO: hippocampus; LH: Lateral Hypothalamus; BG: Basal Ganglia; AMYG: AMYGdala; OFC: OrbitoFrontal Cortex; PRC: PeriRhinal Cortex; VPS: Ventral bank of the Principal Sulcus; VLPFC: VentroLateral PreFrontal Cortex. See the text for further details.
    ||
  • image p523fig14.03 (a) The MOTIVATOR neural model generalizes CogEM by also including the basal ganglia. It can hereby explain and simulate complementary functions of the amygdala and basal ganglia (SNc) during conditioning and learned performance. The basal ganglia generate Now Print signals in response to unexpected rewards. These signals modulate learning of new associations in many brain regions. The amygdala supports motivated attention to trigger actions that are expected to occur in response to conditioned or unconditioned stimuli. Object Categories represent visual or gustatory inputs in anterior inferotemporal (ITA) and rhinal (RHIN) cortices, respectively. Value Categories represent the value of anticipated outcomes on the basis of hunger and satiety inputs, in amygdala (AMYG) and lateral hypothalamus (LH). Object-Value Categories resolve the value of competing perceptual stimuli in medial (MORB) and lateral (ORB) orbitofrontal cortex. The Reward Expectation Filter detects the omission or delivery of rewards using a circuit that spans ventral striatum (VS), ventral pallidum (VP), striosomal delay (SD) cells in the ventral striatum, the pedunculopontine nucleus (PPTN) and midbrain dopaminergic neurons of the substantia nigra pars compacta/ventral tegmental area (SNc/VTA). The circuit that processes CS-related visual information (ITA, AMYG, ORB) operates in parallel with a circuit that processes US-related visual and gustatory information (RHIN, AMYG, MORB). (b) Reciprocal adaptive connections between hypothalamus and amygdala enable amygdala cells to become learned value categories. The bottom region represents hypothalamic cells, which receive converging taste and metabolite inputs whereby they become taste-drive cells. Bottom-up signals from activity patterns across these cells activate competing value categories, or US Value Representations, in the amygdala. A winning value category learns to respond selectively to specific combinations of taste-drive activity patterns and sends adaptive top-down priming signals back to the taste-drive cells that activated it. CS-activated conditioned reinforcer signals are also associatively linked to value categories. Adaptive connections end in (approximately) hemidiscs. See the text for details.
    ||
  • image p524fig14.04 (a) Model basal ganglia circuit for the control of dopaminergic Now Print signals from the substantia nigra pars compacta, or SNc, in response to unexpected rewards. Cortical inputs (Ii), activated by conditioned stimuli, learn to excite the SNc via a multi-stage pathway from the ventral striatum (S) to the ventral pallidum and then on to the PPTN (P) and the SNc (D). The inputs Ii excite the ventral striatum via adaptive weights W_IS, and the ventral striatum excites the SNc with strength W_PD. The striosomes, which contain an adaptive spectral timing mechanism [xij, Gij, Yij, Zij], learn to generate adaptively timed signals that inhibit reward-related activation of the SNc. Primary reward signals (I_R) from the lateral hypothalamus both excite the PPTN directly (with strength W_RP) and act as training signals to the ventral striatum S (with strength W_RS) that trains the weights W_IS. Arrowheads denote excitatory pathways, circles denote inhibitory pathways, and hemidiscs denote synapses at which learning occurs. Thick pathways denote dopaminergic signals.
    ||
  • image p530fig14.05 Displays used by (Buschman, Miller 2007) in their visual search experiments. See the text for details.
    || Fixation 500 ms-> Sample 1000 ms-> Delay 500 ms-> Visual [pop-out, search]- reaction time.
  • image p531fig14.06 Classification of scenic properties as texture categories by the ARTSCENE model. See the text for details.
    || Image-> Feature extraction (texture principal component rankings)-> Learning feature-to-scene mapping (texture category principal component rankings) <- scene class. Large-to-small attentional shrouds as principal component rank gets higher.
  • image p531fig14.07 Voting in the ARTSCENE model achieves even better prediction of scene type. See the text for details.
    || Image-> Feature extraction (texture principal component rankings)-> Learning feature-to-scene mapping (texture category principal component rankings)-> evidence accumulation (sum)-> scene class winner-take-all inference. Large-to-small attentional shrouds as principal component rank gets higher.
  • image p532fig14.08 Macrocircuit of the ARTSCENE Search neural model for learning to search for desired objects by using the sequences of already experienced objects and their locations to predict what and where the desired object is. V1 = First visual area or primary visual cortex; V2 = Second visual area; V4 = Fourth visual area; PPC = Posterior Parietal Cortex; ITp = posterior InferoTemporal cortex; ITa = anterior InferoTemporal cortex; MTL = Medial Temporal Lobe; PHC = ParaHippoCampal cortex; PRC = PeriRhinal Cortex; PFC = PreFrontal Cortex; DLPFC = DorsoLateral PreFrontal Cortex; VPFC = Ventral PFC; SC = Superior Colliculus.
    ||
  • image p533fig14.09 Search data and ARTSCENE Search simulations of them in each pair of images from (A) to (F). See the text for details.
    || 6*[data vs simulation], [Response time (ms) versus epoch].
  • image p540fig15.01 The timing of CS and US inputs in the delay and trace conditioning paradigms.
    || Delay and trace conditioning paradigms. [CS, US] vs [Delay, Trace]. To perform an adaptively timed CR, trace conditioning requires a CS memory trace over the Inter-Stimulus Interval (ISI).
  • image p541fig15.02 The neurotrophic Spectrally Timed Adaptive Resonance Theory, or nSTART, model of (Franklin, Grossberg 2017) includes hippocampus to enable adaptively timed learning that can bridge a trace conditioning gap, or other temporal gap between CS and US.
    || Hippocampus can sustain a Cognitive-Emotional resonance: that can support "the feeling of what happens" and knowing what event caused that feeling. [CS, US]-> Sensory Cortex (SC) <- motivational attention <-> category learning-> Prefrontal Cortex (PFC). SC conditioned reinforcement learning-> Amygdala (cannot bridge the temporal gap) incentive motivational learning-> PFC. SC adaptively timed learning and BDNF-> Hippocampus (can bridge the temporal gap) BDNF-> PFC. PFC adaptively timed motor learning-> cerebellum.
  • image p541fig15.03 Stages in the processing of adaptively timed conditioning, leading to timed responses in (d) that exhibit both individual Weber laws and an inverted U in conditioning as a function of ISI. See the text for details.
    || Curves of [Response vs ISI].
  • image p542fig15.04 Conditioning data from (Smith 1968; Millenson etal 1977). The former shows the kind of Weber Law and inverted U that were simulated in Figure 15.3. The latter shows that, if there are two ISIs during an experiment, then the animals learn to adaptively time their responses with two properly scaled Weber laws.
    || (left) One ISI (Smith 1968) [mean membrane extension (mm) versus time after CS onset (msec)]. (right) Two ISIs (Millenson etal 1977) [200, 700] msec CS test trials, [mean momentary CR amplitude (mm) vs time after CS onset (msec)]. (bottom) Conditioned eye blinks, made with nictitating membrane and/or eyelid, are adaptively timed: peak closure occurs at expected time(s) of arrival of the US following the CS and obeys a Weber Law.
  • image p543fig15.05 Simulation of conditioning with two ISIs that generate their own Weber Laws, as in the data shown in Figure 15.4.
    || Learning with two ISIs: simulation: R = sum[i: f(xi)*yi*zi] vs msec. Each peak obeys Weber Law! Strong evidence for spectral learning.
  • image p543fig15.06 The circuit between dentate granule cells and CA1 hippocampal pyramid cells seems to compute spectrally timed responses. See the text for details.
    || Hippocampal interpretation. 1. Dentate granule cells (Berger, Berry, Thompson 1986): "increasing firing...in the CS period...the latency...was constant". 2. Pyramidal cells: "Temporal model" Dentate granule cells-> CA3 pyramids. 3. Convergence (Squire etal 1989): 1e6 granule cells, 1.6e5 CA3 pyramids. 80-to-1 (ri).
  • image p544fig15.07 In response to a step CS and sustained storage by I_CS of that input, a spectrum of responses xi at different rates ri develops through time.
    || Spectral timing: activation. CS-> I_CS-> All xi. STM sensory representation. Spectral activation d[dt: xi] = ri*[-A*xi + (1 - B*xi)*I_CS].
  • image p544fig15.08 The spectral activities xi generate sigmoid signals f(xi) before the signals are, in turn, gated by habituative transmitters yi.
    || Habituative transmitter gate.
  • image p544fig15.09 As always, the habituative transmitter gate yi increases in response to accumulation and decreases due to gated inactivation, leading to the kinds of transmitter and output responses in the right hand column.
    || Habituative transmitter gate (Grossberg 1968). 1. d[dt: yi] = C*(1-yi) - D*f(xi)*yi, C-term accumulation, D-term gated inactivation. 2. Sigmoid signal f(xi) = xi^n / (B^n + xi^n). 3. Gated output signal f(xi)*yi.
  • image p545fig15.10 When the activity spectrum xi generates a spectrum of sigmoidal signals f(xi), the corresponding transmitters habituate at different rates. The output signals f(xi)*yi therefore generate a series of unimodal activity profiles that peak at different times, as in Figure 15.3a.
    || A timed spectrum of sampling intervals. [f(xi) activation, yi habituation, f(xi)*yi gated sampling] spectra. gated = sampling intervals.
  • image p545fig15.11 The adaptive weight, or LTM trace, zi learns from the US input I_US at times when the sampling signal f(xi)*yi is on. It then gates the habituative sampling signal f(xi)*yi to generate a doubly gated response f(xi)*yi*zi.
    || Associative learning, gated steepest descent learning (Grossberg 1969). d[dt: zi] = E*f(xi)*yi*[-zi + I_US], E-term read-out of CS gated signal, []-term read-out of US. Output from each population: f(xi)*yi*zi doubly gated signal.
  • image p546fig15.12 The adaptive weights zi in the spectrum whose sampling signals f(xi)*yi are large when the US occurs learn fastest, as illustrated by the green region in this simulation of (Grossberg, Schmajuk 1989).
    || Computer simulation of spectral learning. (left) fast (right) slow. Constant ISI: 6 cells fast to slow, 4 learning trials, 1 test trial.
  • image p546fig15.13 The total learned response is a sum R of all the doubly gated signals in the spectrum.
    || Adaptive timing is a population property. Total output signal: R = sum[i: f(xi)*yi*zi]. Adaptive timing is a collective property of the circuit. "Random" spectrum of rates achieves good collective timing.
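    The spectral timing equations of Figures 15.7-15.13 are simple enough to simulate directly. Below is a minimal sketch with illustrative parameters (the rates, sigmoid constants, learning rate, and US window are not fitted values); after a few CS-US pairings, the population output R(t) = sum[i: f(xi)*yi*zi] peaks near the ISI.

        import numpy as np

        A, B, C, D, E, n, Bf = 1.0, 1.0, 0.1, 1.0, 0.5, 4, 0.25
        rates = np.linspace(0.2, 2.0, 40)     # spectrum of rates r_i
        dt, isi = 0.001, 0.4                  # US arrives 400 ms after CS onset

        def f(x):                             # sigmoid signal of Figure 15.9
            return x**n / (Bf**n + x**n)

        z = np.zeros_like(rates)              # adaptive weights z_i (LTM)
        for trial in range(10):               # CS-US pairing trials
            x, y = np.zeros_like(rates), np.ones_like(rates)
            for t in np.arange(0.0, 1.0, dt):
                I_US = 1.0 if abs(t - isi) < 0.05 else 0.0
                x += dt * rates * (-A * x + (1 - B * x) * 1.0)   # stored CS: I_CS = 1
                y += dt * (C * (1 - y) - D * f(x) * y)           # habituative gates
                z += dt * E * f(x) * y * (-z + I_US)             # gated steepest descent

        # CS-only test trial: read out R(t) = sum_i f(x_i)*y_i*z_i
        x, y, R = np.zeros_like(rates), np.ones_like(rates), []
        for t in np.arange(0.0, 1.0, dt):
            x += dt * rates * (-A * x + (1 - B * x) * 1.0)
            y += dt * (C * (1 - y) - D * f(x) * y)
            R.append(np.sum(f(x) * y * z))
        print("R(t) peaks at ~%.2f s" % (np.argmax(R) * dt))     # near the 0.4 s ISI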
  • image p547fig15.14 An individual
  • image p547fig15.15 Expected non-occurences do not prevent the processing of sensory events and their expectations. Rather, they prevent mismatches of those expectations from triggering orienting reactions.
    || Expected non-occurrence of goal. Some rewards are reliable but delayed in time. Does not lead to orienting reactions: How? Both expected and unexpected nonoccurrences are due to mismatch of a sensory event with learned expectations. Expected non-occurrences do not inhibit sensory matching: eg a pigeon can see an earlier-than-usual food pellet. Hypothesis: Expected non-occurrences inhibit the process whereby sensory mismatch activates orienting reactions. Mismatch not-> orient.
  • image p548fig15.16 Homologous recognition learning and reinforcement learning macrocircuits enable adaptively timed conditioning in the reinforcement learning circuit to increase inhibition of the orienting system at times when a mismatch in the recognition system would have reduced inhibition of it.
    || Homolog between ART and CogEM model, complementary systems. [Recognition, Reinforcement] learning vs [Attentional, Orienting] system. Reinforcement: timing, drive representation.
  • image p548fig15.17 The timing paradox asks how inhibition of an orienting response (-) can be spread throughout the ISI, yet accurately timed responding can be excited (+) at the end of the ISI.
    || Timing paradox. [CS light, US shock] vs t. ISI = InterStimulus Interval = expected delay of reinforcer. Want timing to be accurate. Want to inhibit exploratory behaviour throughout the ISI.
  • image p549fig15.18 The Weber Law solves the timing paradox by creating an adaptively timed response throughout the ISI that peaks at the ISI. Within the reinforcement learning circuit, this response can maintain inhibition of the orienting system A at the same time as it generates adaptively timed incentive motivation to the orbitofrontal cortex.
    || Weber Law: reconciling accurate and distributed timing. Resolution: Output can inhibit orienting, peak response probability. What about different ISIs? Standard deviation = peak time. Weber law rule.
  • image p549fig15.19 How the adaptively timed hippocampal spectrum T inhibits (red arrow) the orienting system A as motivated attention in orbitofrontal cortex Si(2) peaks at the ISI.
    || Conditioning, Attention, and Timing circuit. Hippocampus spectrum-> Amygdala orienting system-> neocortex motivational attention. Adaptive timing inhibits orienting system and maintains adaptively timed Motivated Attention on the CS.
  • image p550fig15.20 Adaptively timed conditioning of Long Term Depression, or LTD, occurs in the cerebellum at synapses between parallel fibres and Purkinje cells, thereby reducing inhibition of subcortical nucleus cells and enabling them to express their learned movement gains within the learned time interval. Also see Figure 15.21.
    || [CS-Activated input pathways parallel fibres, US-Activated climbing fibres]-> [Subcortical nucleus (gain control), Cerebellar cortex- Purkinje cells (timing)].
  • image p551fig15.21 The most important cell types and circuitry of the cerebellum: Purkinje cells (PC) receive excitatory inputs from the climbing fibres (CF) that originate in the inferior olive (IO) and from parallel fibres (PF), which are the axons of granule cells (GC). GCs, in turn, receive inputs from the mossy fibres (MF) coming from the precerebellar nuclei (PCN). The PF also inhibit PC via basket cells (BC), thereby helping to select the most highly activated PC. The PC generate inhibitory outputs from the cerebellar cortex to the deep cerebellar nuclei (DCN), as in Figure 15.20. Excitatory signals are denoted by (+) and inhibitory signals by (-). Other notations: GL- granular layer; GoC- golgi cells; ML- molecular layer; PCL- Purkinje cell layer; SC- stellate cell; WM- white matter.
    ||
  • image p551fig15.22 Responses of a retinal cone in the turtle retina to brief flashes of light of increasing intensity.
    || response vs msc.
  • image p552fig15.23 Cerebellar biochemistry that supports the hypothesis of how mGluR supports adaptively timed conditioning at cerebellar Purkinje cells. AMPA, amino-3-hydroxy-5-methyl-4-isoxazole propionic acid-sensitive glutamate receptor; cGMP, cyclic guanosine monophosphate; DAG, diacylglycerol; glu, glutamate; GC, guanylyl cyclase; gK, Ca2+-dependent K+ channel protein; GTP, guanosine triphosphate; IP3, inositol trisphosphate.
  • image p556fig15.24 (a) Data showing normally timed responding (solid curve) and short latency responses after lesioning cerebellar cortex (dashed curve). (b) computer simulation of short latency response after ablation of model cerebellar cortex.
    ||
  • image p557fig15.25 Computer simulations of (a) adaptively timed long term depression at Purkinje cells, and (b) adaptively timed activation of cereballar nuclear cells.
    || response vs time (msec)
  • image p557fig15.26 Brain regions and processes that contribute to autistic behavioral symptoms when they become imbalanced in prescribed ways.
    || Basal Ganglia prolonged gate opening <-> { Amygdala emotionally depressed-> [hippocampus- hyperspecific learning; Cerebellum- adaptive timing fails; hypofrontal- blocking fails, no Theory of Mind]-> Neocortex; Neocortex- rewards not received-> Amygdala}.
  • image p559fig15.27 Brain regions and processes that contribute to the release of dopaminergic Now Print signals by the substantia nigra pars compacta, or SNc, in response to unexpected reinforcing events. See the text for details.
    || Model of spectrally timed SNc learning (Brown, Bullock, Grossberg 1999). Delayed inhibitory expectations of reward. Dopamine cells signal an error in reward prediction timing or magnitude. Immediate excitatory predictions of reward. Lateral hypothalamus (Primary Reward Input)-> [(+)ventral striatum <-> ventral pallidum (+)-> PPTN(+)-> SNc]. SNc-> [dopamine signal-> ventral striatum, Striosomal cells]. Conditioned Stimuli (CS)(+)-> [ventral striatum, striosomal cells]. Striosomal cells(-)-> SNc.
  • image p559fig15.28 Neurophysiological data (left column) and model simulations (right column) of SNc responses. See the text for details.
    || membrane potential vs time
  • image p560fig15.29 Excitatory pathways that support activation of the SNc by a US and the conditioning of a CS to the US.
    || Excitatory pathway. Primary reward (apple juice) briefly excites lateral hypothalamus. Hypothalamic-PPTN excitation causes SNc dopamine burst. Hypothalamic activity excites ventral striatum for training. Active CS working memory signals learn to excite ventral striatum. Lateral hypothalamus (Primary Reward Input)-> [(+)ventral striatum <-> ventral pallidum(+)-> PPTN(+)-> SNc]. SNc-> dopamine signal-> ventral striatum. Conditioned Stimuli working memory trace (CS)(+)-> ventral striatum.
  • image p560fig15.30 The inhibitory pathway from striosomal cells to the SNc is able to inhibit the SNc when a reward occurs with expected timing and magnitude.
    || Inhibitory pathway. Learning: CS-striosomal LTP occurs due to a three-way coincidence [An active CS working memory input, a Ca2+ spike, a dopamine burst]; Signaling: The delayed Ca2+ spike facilitates striosomal-SNc inhibition;. Striosomal cells learn to predict both timing and magnitude of reward signal to cancel it: reward expectation;. Conditioned stimuli (CS) LTP-> Striosomal cells <- dopamine | (-)-> SNc->.
  • image p561fig15.31 The CS activates a population of striosomal cells that respond with different delays in order to enable adaptively timed inhibition of the SNc.
    || Expectation timing (Fiala, Grossberg, Bullock 1996; Grossberg, Merrill 1992, 1996; Grossberg, Schmajuk 1989). How do cells bridge hundreds of milliseconds? Timing spectrum (msec). 1. CS activates a population of cells with delayed transient signals: mGluR. 2. Each has a different delay, so that the range of delays covers the entire interval. 3. Delayed transients gate both learning and read-out of expectations.
  • image p561fig15.32 The SNc can generate both dopamine bursts and dips in response to rewards whose amplitude is unexpectedly large or small.
    || Inhibitory pathway: expectation magnitude. 1. If reward is greater than expected, a dopamine burst causes striosomal expectation to increase. 2. If reward is less than expected, a dopamine dip causes striosomal expectation to decrease. 3. This is a negative feedback control system for learning. Conditioned stimuli (CS)-> Striosomal cells <- dopamine | (-)-> SNc->.
  • image p563fig15.33 The basal ganglia gate neural processing in many parts of the brain. The feedback loop through the lateral orbitofrontal cortex (blue arrow, lateral orbitofrontal) is the one that MOTIVATOR models.
    || MOTIVATOR models one of several thalamocortical loops through basal ganglia (Adapted from Fundamental Neuroscience. 2002 Copyright Elsevier). [cortex-> striatum-> pallidum S. nigra-> thalamus] vs [motor, oculomotor, dorsolateral prefrontal, lateral orbitofrontal, anterior cingulate]. thalamus-> [striatum, cortex].
  • image p563fig15.34 The colored regions are distinct parts of the basal ganglia in the loops depicted in Figure 15.33.
    || Distinct basal ganglia zones for each loop (Adapted from Fundamental Neuroscience. 2002 Copyright Elsevier).
  • image p564fig15.35 (a) A pair of recurrent shunting on-center off-surround networks for control of the fore limbs and hind limbs. (b) Varying the GO signal to these networks can trigger changes in movement gaits. See the text for details.
    ||
  • image p565fig15.36 (a) The FOVEATE model circuit for the control of saccadic eye movements within the peri-pontine reticular formation. (b) A simulated saccade staircase. See the text for details.
    || [left, right] eye FOVEATE model. [vertical vs horizontal] position (deg).
  • image p566fig15.37 Steps in the FOVEATE model
  • image p567fig15.38 (a) The Gated Pacemaker model for the control of circadian rhythms is a recurrent shunting on-center off-surround network whose excitatory feedback signals are gated by habituative transmitters. Tonic arousal signals energize the pacemaker. Diurnal (left) and nocturnal (right) pacemakers are determined by whether phasic light signals turn the pacemaker on or off. An activity-dependent fatigue signal prevents the pacemaker from becoming overly active for too long. (b) Two simulations of circadian activity cycles during different schedules of light (L) and dark (D). See the text for details.
    || sourceOn-> on-cells (recurrent) <-(-) (-)> off-cells (recurrent) <-sourceOff. on-cells-> activity-> off-cells. off-cells-> fatigue. Diurnal: sourceOn=[light, arousal]; sourceOff=arousal;. Nocturnal: sourceOn=arousal; sourceOff=[arousal, light];.
  • image p568fig15.39 Circuits of the MOTIVATOR model that show hypothalamic gated dipoles.
    || Inputs-> [object, value] categories-> object-value categories-> [reward expectation filter, [FEF, EAT] outputs]. Reward expectation filter [DA dip, arousal burst]-> alpha1 non-specific arousal-> value categories. Msi drive inputs-> value categories.
  • image p569fig15.40 The direct and indirect basal ganglia circuits that control GO and STOP movement signals. See the text for details.
    || [Direct path GO(+), Indirect path STOP(+), dopamine from SNc(+-)]-> striatum. GO-> GPi/SNr-> Thalamus (VA/Vlo) <-> frontal cortex. STOP-> GPe <-> STN-> GPi/SNr. NAc-> GPi/SNr.
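    A sign-tracing sketch of these GO and STOP paths (steady-state gains only; all values illustrative): striatal GO activity inhibits GPi/SNr, whose tonic inhibition of the thalamus is thereby lifted, while STOP activity re-closes the gate through the GPe-STN route.

        def relu(w):
            return max(0.0, w)

        def thalamic_gate(go, stop, tonic_gpi=1.0):
            gpe = relu(1.0 - stop)            # indirect path: striatal STOP inhibits GPe
            stn = relu(1.0 - gpe)             # GPe inhibits STN, so STOP disinhibits STN
            gpi = relu(tonic_gpi - go + stn)  # direct-path GO inhibits GPi; STN excites it
            return relu(1.0 - gpi)            # GPi tonically inhibits thalamus (VA/VLo)

        print(thalamic_gate(go=0.0, stop=0.0))   # 0.0: gate closed at rest
        print(thalamic_gate(go=1.0, stop=0.0))   # 1.0: GO opens the gate (disinhibition)
        print(thalamic_gate(go=1.0, stop=1.0))   # 0.0: STOP recloses the gate via STN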
  • image p573fig16.01 The experimental chamber (A) and neurophysiological recordings from a rat hippocampus (B) that led to the discovery of place cells. See the text for details.
    ||
  • image p574fig16.02 Neurophysiological recordings of 18 different place cell receptive fields. See the text for details.
    ||
  • image p575fig16.03 As a rat navigates in its experimental chamber (black curves), neurophysiological recordings disclose the firing patterns (in red) of (a) a hippocampal place cell and (b) an entorhinal grid cell.
    ||
  • image p578fig16.04 Cross-sections of the hippocampal regions and the inputs to them. See the text for details.
    || EC-> CA1-> CA3-> DG. Layers [V/VI, III, II].
  • image p580fig16.05 Macrocircuit of the GridPlaceMap model, which can learn both 2D grid cells and place cells in response to realistic trajectories of navigating rats using a hierarchy of SOMs with identical equations.
    || GridPlaceMap model: rate-based and spiking (Pilly, Grossberg 2012). Pre-wired 1D stripe cells, learns both 2D grid and place cells! Same laws for both; both select most frequent and energetic inputs. Place cells emerge gradually in response to developing grid cells. [place-> grid-> stripe] cells-> path integration-> vestibular signals
  • image p581fig16.06 The learning of hexagonal grid cell receptive fields as an animal navigates an open field is a natural consequence of simple trigonometric properties of the positions at which the firing of stripe cells that are tuned to different directions will co-occur.
    || The Trigonometry of spatial navigation. Coactivation of stripe cells.
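    A small numerical illustration of this trigonometric claim, idealizing each stripe cell's periodic firing field as a cosine grating (a simplification of the model's stripe cells; the spacing value is arbitrary): three stripe cells with direction preferences 60 degrees apart are co-active exactly on a hexagonal lattice of positions, which is the coactivation pattern a self-organizing-map grid cell can learn to amplify.

        import numpy as np

        spacing = 40.0                              # stripe spacing in cm (illustrative)
        xs = np.linspace(-100.0, 100.0, 201)
        X, Y = np.meshgrid(xs, xs)

        def stripe(theta_deg):
            # Periodic firing in the distance traveled along the preferred direction.
            th = np.radians(theta_deg)
            d = X * np.cos(th) + Y * np.sin(th)
            return 0.5 * (1.0 + np.cos(2.0 * np.pi * d / spacing))

        coactivation = stripe(0) * stripe(60) * stripe(120)
        # Peaks of 'coactivation' form a hexagonal lattice; by contrast, two stripe
        # cells 90 degrees apart are co-active on a rectangular lattice (cf. Figure 16.15).
        print(np.count_nonzero(coactivation > 0.95), "grid samples near coactivation maxima")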
  • image p582fig16.07 Stripe cells were predicted in (Mhatre, Gorchetchnikov, Grossberg 2012) to convert linear velocity signals into the distances travelled in particular directions. They are modeled by directionally-sensitive ring attractors, which help to explain their periodic activation as an animal continues to move in a given direction. See the text for details.
    || Stripe cells. Stripe cells are predicted to exist in (or no later than) EC layer (III, V/VI). Linear path integrators: represent distance traveled using linear velocity modulated with head direction signal. Ring attractor circuit: the activity bump represents distance traveled, stripe cells with same spatial period and directional preference fire with different spatial phases at different ring positions. Distance is computed directly, it does not require decoding by oscillatory interference. Periodic stripe cell activation due to ring anatomy: periodic boundary conditions. Stripe firing fields with multiple orientations, phases and scales.
  • image p582fig16.08 Some experimental evidence for stripe-like cell receptive fields has been reported. The band cells posited by Neil Burgess also exhibit the one-dimensional firing symmetry of stripe cells, but are modeled by oscillatory intererence. See the text for details.
    || Evidence for stripe-like cells. Entorhinal cortex data (Sargolini, Fyhn, Hafting, McNaughton, Witter, Moser, Moser 2006; Krupic, Burgess, O'Keefe 2012).
  • image p583fig16.09 The GRIDSmap model used algorithmically defined stripe cells to process realistic rat trajectories. The stripe cell outputs then formed inputs to the adaptive filter of a self-organizing map which learned hexagonal grid cell receptive fields.
    || GRIDSmap. Self-organizing map receives inputs from stripe cells and learns to respond to most frequent co-activation patterns. Stripe cells combine speed and head direction to create a periodic 1D position code. Virtual rat navigated using live rat trajectories from Moser Lab. Speed and head direction drives stripe cells.
  • image p583fig16.10 The GRIDSmap model is embedded into a more complete representation of the processing stages from receipt of angular head velocity and linear velocity signals to this learning of place cells.
    || GRIDSmap. Pre-wired 2D stripe cells, learns 2D grid cells. vestibular cells [angular head velocity-> head direction cells, linear velocity]-> stripe cells- small scale 1D periodic spatial code (ECIII)-> SOM grid cells entorhinal cortex- small scale 2D periodic spatial scale-> SOM place cells hippocampal cortex- large scale 2D spatial code (dentate/CA3). Unified hierarchy of SOMs.
  • image p584fig16.11 GRIDSmap simulation of the learning of hexagonal grid fields. See the text for details.
    || Simulation results. Multiple phases per scale. Response vs length scale (0.5m+).
  • image p584fig16.12 Temporal development of grid cell receptive fields on successive learning trials (1,3,5,7,25,50,75,100).
    || Temporal development of grid fields. Cells begin to exhibit grid structure by 3rd trial. Orientations of the emergent grid rotate to align with each other over trials.
  • image p585fig16.13 Hexagonal grid cell receptive fields develop if their stripe cell directional preferences are separated by 7, 10, 15, 20, or random numbers of degrees. The number and directional selectivities of stripe cells can thus be chosen within broad limits without undermining grid cell development.
    ||
  • image p585fig16.14 Superimposing firing of stripe cells whose directional preferences differ by 60 degrees supports learning hexagonal grid cell receptive fields in GRIDSmap.
    || GRIDSmap: from stripe cells to grid cells. Grid-cell Regularity from Integrated Distance through Self-organizing map. Superimposing firing of stripe cells oriented at intervals of 60 degrees. Hexagonal grid!
  • image p586fig16.15 Superimposing stripe cells oriented by 45 degrees does not lead to learning of rectangular grids in GRIDSmap, but it does in an oscillatory inference model.
    || Why is a hexagonal grid favored? Superimposing firing of stripe cells oriented at intervals of 45 degrees. Rectangular grid. This and many other possibilities do not happen in vivo. They do happen in the oscillatory inference model. How are they prevented in GRIDSmap?
  • image p586fig16.16 In the place cell learning model of (Gorchetchnikov, Grossberg 2007), three populations of five cells each of entorhinal grid cells (only two are shown) with different spatial periods input to the model
  • image p587fig16.17 A finer analysis of the 2D trigonometry of spatial navigation showed that both the frequency and amplitude of coactivations by stripe cells determine the learning of hexagonal grid fields.
    || A refined analysis: SOM amplifies most frequent and energetic coactivations (Pilly, Grossberg 2012). [linear track, 2D environment]. (left) Stripe fields separated by 90°. 25 coactivations by 2 inputs. (right) Stripe fields separated by 60°. 23 coactivations by 3 inputs.
  • image p588fig16.18 Simulations of coordinated learning of grid cell receptive fields (second row) and unimodal place cell receptive fields (third row) by the hierarchy of SOMs in the GridPlaceMap model. Note the exquisite regularity of the hexagonal grid cell firing fields.
    || [stripe, grid, place] cells vs [spikes on trajectory, unsmoothed rate map, smoothed rate map].
  • image p589fig16.19 Neurophysiological data showing the smaller dorsal grid cell scales and the larger ventral grid cell scales.
    || Spatial scale of grid cells increases along the MEC dorsoventral axis (Hafting etal 2005; Sargolini etal 2006; Brun etal 2008). [dorsal (left), ventral (right)] cart [rate map, autocorrelogram]. How does the spatial scale increase along the MEC dorsoventral axis?
  • image p590fig16.20 Integration rate of grid cells decreases along the dorsoventral gradient of the Medial Entorhinal Cortex, or MEC.
    || Dorsoventral gradient in the rate of synaptic integration of MEC layer II stellate cells (Garden etal 2008). Cross-section of [Hp, CC, LEC, MEC]. (A left column) [dorsal, ventral] mV? vs msec. (B center column) [half width (ms), rise time (ms), amplitude (mV)] vs location (μm). (C right upper) responses. (D right lower) width (ms) vs location (μm).
  • image p590fig16.21 Frequency of membrane potential oscillations in grid cells decreases along the dorsoventral gradient of the MEC.
    || Dorsoventral gradient in the frequency of membrane potential oscillations of MEC layer II stellate cells (Giocomo etal 2007). (C left column) Oscillation (Hz) vs distance from dorsal surface (mm). (D right upper) [dorsal, ventral] oscillations, 5 mV / 500 ms. (E right lower) [dorsal, ventral] oscillations, 100 ms. Both membrane potential oscillation frequency and resonance frequency decrease from the dorsal to ventral end of MEC.
  • image p591fig16.22 Time constants and duration of afterhyperpolarization currents of grid cells increase along the dorsoventral gradient of the MEC.
    || Dorsoventral gradient in afterhyperpolarization (AHP) kinetics of MEC layer II stellate cells (Navratilova etal 2012). [mAHP time constant (ms), Half-width (mm)] vs distance from the dorsal surface (mm), at [-55, -50, -45] mV. Time constants and duration of AHP increase from the dorsal to the ventral end of MEC layer II. Effectively, the relative refractory period is longer for ventral stellate cells in MEC layer II.
  • image p591fig16.23 The Spectral Spacing Model uses a rate gradient to learn a spatial gradient of grid cell receptive field sizes along the dorsoventral gradient of the MEC.
    || Spectral spacing model. Map cells responding to stripe cell inputs of multiple scales. Grid cells: MEC layer II (small scale 2D spatial code). Stripe cells: PaS / MEC deep layer (small scale 1D spatial code). Path Integration. Vestibular signals- linear velocity and angular head velocity. SOM. How do entorhinal cells solve the scale selection problem?
  • image p592fig16.24 Parameter settings in the Spectral Spacing Model that were used in simulations.
    || Simulation settings. Activity vs distance (cm). Learning trials: 40.
  • image p593fig16.25 Spectral Spacing Model STM, MTM, and LTM equations. The rate spectrum that determines the dorsoventral gradient of multiple grid cell properties is defined by μm.
    || Spectral Spacing Model equations. [STM, MTM, LTM]. μm = rate spectrum.
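    A minimal caricature of the rate-spectrum idea behind Figures 16.23-16.29, not the model's full STM/MTM/LTM dynamics (parameters illustrative): the same activation law integrated at a slower rate mu needs a longer stretch of input, hence a longer distance at a constant running speed, before its cell fires, yielding larger spatial scales toward the ventral MEC.

        import numpy as np

        A, B, I, dt = 1.0, 1.0, 1.0, 0.001

        def distance_to_fire(mu, threshold=0.4, speed=10.0):
            # Distance (cm) run at constant speed before x crosses threshold
            # under sustained input, with dx/dt = mu*(-A*x + (1 - B*x)*I).
            x, t = 0.0, 0.0
            while x < threshold:               # x converges to 0.5 > threshold
                x += dt * mu * (-A * x + (1.0 - B * x) * I)
                t += dt
            return speed * t

        for mu in (1.0, 0.6, 0.3):             # dorsal (fast) -> ventral (slow)
            print(mu, round(distance_to_fire(mu), 1))
        # Firing distance scales as 1/mu: slower integration rates produce the
        # larger grid spacings and field widths observed along the DV axis.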
  • image p593fig16.26 Data (left column) and simulations (right column) of the gradient of increasing grid cell spacing along the dorsoventral axis of MEC.
    || Gradient of grid spacing along dorsoventral axis of MEC (Brun etal 2008). data-[Distance (m?), Median grid spacing (m?)] simulations-[Grid spacing (cm), Grid spacing (cm)] vs response rate.
  • image p594fig16.27 Data (left column) and simulations (right column) of the gradient of increasing grid cell field width along the dorsoventral axis of MEC.
    || Gradient of field width along dorsoventral axis of MEC (Brun etal 2008). data-[Distance (m?), Width autocorr peak (m?)] simulations-[Grid field width (cm), Width autocorr peak (cm)] vs response rate.
  • image p595fig16.28 Data (left column) and simulations (right column) about peak and mean grid cell response rates along the dorsoventral axis of MEC.
    || Peak and mean rates at different locations along DV axis of MEC (Brun etal 2008). Peak rate (Hz) vs [data- DV quarter, simulations- Response rate].
  • image p596fig16.29 Data (top row) and simulations (bottom row) showing decreasing frequency of subthreshold membrane potential oscillations along the DV axis of MEC.
    || Subthreshold membrane potential oscillations at different locations along DV axis of MEC (Giocomo etal 2020; Yoshida etal 2011). Data [oscillations (Hz) vs distance from dorsal surface (mm) @[-50, -45] mV, Frequency (Hz) vs [-58, -54, -50] mV]. Simulations MPO frequency (Hz) vs [response, habituation] rate.
  • image p596fig16.30 Data (top row) and simulations (bottom row) of spatial phases of learned grid and place cells.
    || Spatial phases of learned grid and place cells (Hafting etal 2005). Data: Cross-correlogram of rate maps of two grid cells; Distribution of phase difference: distance from origin to nearest peak in cross-correlogram. Simulations: Grid cell histogram of spatial correlation coefficients; Place cell histogram of spatial correlation coefficients.
  • image p597fig16.31 Data (a) and simulations (b-d) about multimodal place cell receptive fields in large spaces. The simulations are the result of learned place fields.
    || Multimodal place cell firing in large spaces (Fenton etal 2008; Henriksen etal 2010; Park etal 2011). Number of cells (%) vs Number of place fields. [2, 3] place fields, 100*100 cm space.
  • image p597fig16.32 Data (top row) and simulations (bottom row) about grid cell development in juvenile rats. Grid score increases (a-b and d), whereas grid spacing remains fairly flat (c and e).
    || Model fits data about grid cell development (Wills etal 2010; Langston etal 2010). Data: [Gridness, grid score, inter-field distance (cm)]. Simulations: [Gridness score, Grid spacing (cm)] vs trial.
  • image p598fig16.33 Data (top row) and simulations (bottom row) of changes in place cell properties in juvenile rats, notably about spatial information (a,c) and inter-trial stability (b,d).
    || Model fits data about grid cell development (Wills etal 2010). [Data, Simulation] vs [spatial information, inter-trial stability]. x-axis [age (postnatal day), trial].
  • image p598fig16.34 The spiking GridPlaceMap model generates theta-modulated place and grid cell firing, unlike the rate-based model.
    || Theta-modulated cells in spiking model. [place, grid] cell vs [membrane potential (mV vs time), frequency vs inter-spike intervals (s), power spectra (normalized power vs frequency (Hz))].
  • image p599fig16.35 Data (a) and simulations (b,c) about anatomically overlapping grid cell modules. (a) shows the anatomical distribution of grid cells belonging to different modules in one animal. DV location (mm) vs postrhinal border. (b) shows the simulated distribution of learned grid cell spacings from two stripe cell scales. frequency (%) vs grid spacing (cm). mu = [1, 0.6]. (c) shows what happens when half the cells respond with one rate and half another rate. (d) shows the same with three rates. (e-g) show spatial maps and autocorrelograms of grid cells that arise from the different rates in (d). [rate map, autocorrelogram] vs [score [1.07, 0.5, 0.67], spacing (cm) [23.58, 41, 63.64]].
    ||
  • image p600fig16.36 The entorhinal-hippocampal system has properties of an ART spatial category learning system, with hippocampal place cells as the spatial categories. See the text for details.
    || Entorhinal-hippocampal interactions as an ART system. Hippocampal place cells as spatial categories. Angular head velocity-> head direction cells-> stripe cells- small scale 1D periodic code (ECIII) SOM-> grid cells- small scale 2D periodic code (ECII) SOM-> place cells- larger scale spatial map (DG/CA3)-> place cells (CA1)-> conjunctive-coding cells (EC V/VI)-> top-down feedback back to stripe cells- small scale 1D periodic code (ECIII). stripe cells- small scale 1D periodic code (ECIII)-> place cells (CA1).
  • image p602fig16.37 Data showing the effect of hippocampal inactivation by muscimol on grid cell firing before, during, and six hours after the muscimol, reading from left to right.
    || Hippocampal inactivation disrupts grid cells (Bonnevie etal 2013). muscimol inactivation. spikes on trajectory: [before, after min [6-20, 20-40, 40-60, 6h]]. rate map (Hz) [18.6, 11.4, 9.5, 6.7, 10.8]. spatial autocorrelogram g=[1.12, 0.05, -0.34, 0.09, 1.27].
  • image p603fig16.38 Role of hippocampal feedback in maintaining grid fields. (a) Data showing the effect of hippocampal inactivation before and during muscimol inhibition of hippocampal cells, as in Figure 16.37. (b) Model simulation with normal grid fields. (c) Model simulation that emulates the effect of hippocampal inhibition on grid fields.
    || (a) Data: hippocampal inactivation [before, after] cart [spikes on trajectory (p: [18.6, 6.7] Hz), spatial autocorrelogram (g= [1.12, 0.09])]. (b) Model: noise-free path integration, [spikes on trajectory (p: 14.56 Hz), rate map, spatial autocorrelogram (g= 1.41), dynamic autocorrelogram (g=0.6)]. (c) Model: noisy path integration + non-specific tonic inhibition, [spikes on trajectory (p: 11.33 Hz), rate map, spatial autocorrelogram (g= 0.05), dynamic autocorrelogram (g=0.047)].
  • image p605fig16.39 Data showing effects of medial septum (MS) inactivation on grid cells and network theta oscillations in medial entorhinal cortex (MEC). (A) Examples of disruption in the spatial expression of the hexagonal grid structure for two grid cells (Brandon etal 2011). (B) Temporary reduction in the power and frequency of network theta oscillations (Koenig etal 2011). (C) Temporary reduction in the gridness score, mean firing rate, and spatial stability of grid cells (Koenig etal 2011).
    || Disruptive effects of Medial Septum inactivation in Medial Entorhinal Cortex (Brandon etal 2011; Koenig etal 2011). (A) Rate map [rate map, spatial autocorrelations, trajectory] vs [baseline, sub-sampled, medial septum inactivation, 3-6 hour recovery, 24 hour recovery], [rate map (Hz- m, p), spatial autocorrelations (gridness)][ 1.2, 7.2, 1.1; 0.25, 1.7, 0.6; 0.25, 2.5, -0.53; 0.7, 5.1, 0.55; 1.0, 5.3, 1.3; 2.1, 15, 0.19; 1.7, 12, 0.71; 1.7, 3.2, -0.22; 1.8, 9.1, 0.68; 2.5, 13, 0.46]. (B) [normalized power at 7-9 Hz, frequency (Hz)] vs 5-minute periods. (C) [mean gridness score (+-SEM), mean firing rate (% of baseline), mean correlation coeff (+-SEM)] vs 10-minute periods.
  • image p607fig16.40 Effects of medial septum (MS) inactivation on grid cells. (a) Each row shows data and different data-derived measures of grid cell responsiveness, starting from the left with the baseline response to the middle column with maximal inhibition. (b) Data showing the temporary reduction in the gridness scores during MS inactivation, followed by recovery. (c) Simulation of the collapse in gridness, achieved by reduction in cell response rates to mimic reduced cholinergic transmission. (d,e) Simulations of the reduction in gridness scores in (d) by reduction of cell response rates, in (e) by changing the leak conductance. See the text for details.
    ||
  • image p611fig16.41 How back-propagating action potentials, supplemented by recurrent inhibitory interneurons, control both learning within the synapses on the apical dendrites of winning pyramidal cells, and regulate a rhythm by which associative read-out is dissociated from read-in. See the text for details.
    ||
  • image p612fig16.42 Macrocircuit of the main SOVEREIGN subsystems.
    || [reward input, drive input, drive representation (DR), visual working memory and planning system (VWMPS), visual form and motion system (VFMS), motor approach and orienting system (MAOS), visual input (VisIn), motor working memory and planning system (MWMPS), motor approach and orienting system (MAOS), motor plant (MotP), Proprioceptive Input (PropIn), Vestibular Input (VesIn), Environmental feedback (EnvFB). DR [incentive motivational learning-> [VWMPS, MWMPS], -> VFMS, -> MAOS], VWMPS [conditioned reinforcer learning-> DR, MAOS], VFMS [visual object categories-> VWMPS, reactive movement commands-> MAOS], MWMPS [conditioned reinforcer learning-> DR, planned movement commands-> MAOS], MAOS [motor map positions-> MWMPS, motor outflow-> MotP], VisIn-> VFMS, VesIn-> MAOS, EnvFB-> [VisIn, MotP, VesIn].
  • image p613fig16.43 The main visual form and motion processing stream mechanisms of SOVEREIGN, many of them described at length in previous chapters.
    || Render 3-D scene (R3DS), figure-ground separation (FGS), log-polar transform (LPT), Gaussian coarse-coding (GCC), Invariant visual target map (IVTM), What Fuzzy ART (WhatFuzz), body spatial coordinates (BSC), where reactive visual TPV storage (WRVTS), Directional transient cell network (DTCN), Motion direction hemifield map (MDHM), Hemifield left/right scoring (HLRS), reactive visual control signal (RVCS), Parvo/Magno/Erg competition (PMEC), Approach and Orient GOp (AOGp), GOm (GOm). R3DS [parvo-> FGS, magno-> DTCN], FGS-> [LPT, WRVTS], LPT-> GCC-> IVTM-> WhatFuzz, BSC-> [RVTS, PMEC], PMEC-> [gateRVTS-> RVTS, gateRVCS-> RVCS], DTCN-> MDHM-> HLRS, HLRS-> [PMEC, RVCS], AOGp-> gateRVTS, GOm-> gateRVCS.
  • image p613fig16.44 The main target position vector (TPV), difference vector (DV), and volitional GO computations in SOVEREIGN that bring together reactive and planned signals to control decision-making and action. See the text for details.
    || Reactive visual TPV (RVT), NETs (NETs), S-MV mismatch (SMVM), NETmv (NETmv), reactive visual TPV storage (RVTS), reactive DV1 (RD1), NET (NET), motivated what and where decisions (MWWD), Planned DV1 (PD1), tonic (Tonic), top-down readout mismatch (TDRM), Parvo gate (tonic) (PG), Orienting GOp offset (OGpO). RVT-> [NETs, RVTS], NETs-> [SMVM, NET], SMVM-> NET, NETmv-> SMVM, RVTS-> [NETs, RD1], NET-> [RD1, PD1, TDRM], MWWD-> PD1, PD1-> Tonic-> TDRM, PG-> NETs, OGpO-> [NETmv, PD1]. (A generic VITE-style sketch of this DV/GO computation appears after this figure list.)
  • image p614fig16.45 The main distance (d) and angle (a) computations that bring together and learn dimensionally-consistent visual and motor information whereby to make the currently best decisions and actions. See the text for details.
    || Reactive Visual TPV [m storage], NETm S-MV mismatch, MV mismatch, NETmv, PPVv, PPVm, Vestibular feedback, motor copy.
  • image p615fig16.46 SOVEREIGN uses homologous processing stages to model the (a) What cortical stream and the (b) Where cortical stream, including their cognitive working memories and chunking networks, and their modulation by motivational mechanisms. See the text for details.
    ||
  • image p615fig16.47 SOVEREIGN models how multiple READ circuits, operating in parallel in response to multiple internal drive sources, can be coordinated to realize a sensory-drive heterarchy that maximally amplifies the currently most favored motivational option.
    ||
  • image p616fig16.48 SOVEREIGN was tested using a virtual reality 3D rendering of a cross maze (a) with different visual cues at the end of each corridor.
    ||
  • image p616fig16.49 The animat learned to convert (a) inefficient exploration of the maze into (b) an efficient direct learned path to the goal.
    ||
  • image p617fig16.50 The perirhinal and parahippocampal cortices enable adaptively timed reinforcement learning and spatial navigational processes that are modeled by Spectral Spacing models in the What and Where cortical streams, respectively, to be fused in the hippocampus.
    || What and Where inputs to the hippocampus (Diana, Yonelinas, Ranganath 2007). Adaptively timed conditioning and spatial navigation. Hippocampus <-> Entorhinal Cortex <-> [Perirhinal Cortex <-> what, Parahippocampal Cortex <-> where].
  • image p627tbl17.01 Homologs between reaction-diffusion and recurrent shunting cellular network models of development.
    || byRows: (reaction-diffusion, recurrent shunting net) (activator, excitatory activity) (inhibitor, inhibitory activity) (morphogenic source density, inputs) (firing of morphogen gradient, contrast enhancement) (maintenance of morphogen gradient, short-term memory) (power or sigmoidal signal functions, power or sigmoidal signal functions) (on-center off-surround interactions via diffusion, on-center off-surround interactions via signals) (self-stabilizing distributions of morphogens if inhibitors equilibrate rapidly, short-term memory pattern if inhibitors equilibrate rapidly) (periodic pulses if inhibitors equilibrate slowly, periodic pulses if inhibitors equilibrate slowly) (regulation, adaptation).
  • image p628fig17.01 A hydra
    ||
  • image p628fig17.02 Schematics of how different cuts and grafts of the normal Hydra in (a) may (*) or may not lead to the growth of a new head. See the text for details.
    ||
  • image p629fig17.03 How an initial morphogenetic gradient may be contrast enhanced to exceed the threshold for head formation in its most active region.
    || head formation threshold, final gradient, initial gradient. (A code sketch of such contrast enhancement by a recurrent shunting network appears after this figure list.)
  • image p630fig17.04 Morphogenesis: more ratios (Wolpert 1969). Shape preserved as size increases. French flag problem. Use cellular models! (Grossberg 1976, 1978) vs chemical or fluid reaction-diffusion models (Turing 1952; Gierer, Meinhardt 1972).
    ||
  • image p631fig17.05 How a blastula develops into a gastrula. See the text for details.
    || 1. The vegetal pole of the blastula flattens, [Animal, vegetal] hemisphere, blastocoel. 2. Some cells change shape and move inward to form the archenteron, Blastopore. 3. Other cells break free, becoming mesenchyme. 4. Then extensions of mesenchyme cells attach to the overlying ectoderm, Archenteron. 5. The archenteron elongates, assisted by the contraction of mesenchyme cells. 6. The mouth will form, where the archenteron meets ectoderm. 7. The blastopore will form the anus of the mature animal. [Mesenchyme, Ectoderm, Endoderm, Blastocoel, Archenteron, Mesenchyme]. Concept 38.3, www.macmillanhighered.com
  • image p634fig17.06 Summing over a population of cells with binary output signals whose firing thresholds are Gaussianly distributed (left image) generates a total output signal that grows in a sigmoidal fashion with increasing input size (dashed vertical line).
    || How binary cells with a Gaussian distribution of output thresholds generate a sigmoidal population signal. [# of binary cells with threshold T, Total output signal] vs Cell firing thresholds T. Cell population with firing thresholds Gaussianly distributed around a mean value. As input increases (dashed line), more cells in population fire with binary signals. Total population output obeys a sigmoid signal function f.
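  • Code sketch for p634fig17.06: a minimal numeric check (not from the book) that summing binary units with Gaussian-distributed firing thresholds yields a sigmoid population signal. The mean, spread, and population size are illustrative assumptions.
        # Python: population of binary units with Gaussian thresholds T;
        # the fraction firing at input x is the Gaussian CDF at x, i.e. a sigmoid.
        import numpy as np

        rng = np.random.default_rng(0)
        thresholds = rng.normal(loc=1.0, scale=0.3, size=10_000)  # illustrative values

        def population_output(x):
            # each unit emits 1 iff input x exceeds its threshold; sum and normalize
            return float(np.mean(thresholds < x))

        for x in [0.2, 0.6, 1.0, 1.4, 1.8]:
            print(f"input {x:.1f} -> normalized total output {population_output(x):.3f}")
        # the output climbs sigmoidally from ~0 to ~1 as x sweeps past the mean threshold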
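  • Code sketch for p593fig16.25, as flagged there: a hedged sketch of the model's three time scales, using generic Grossberg-style shunting STM, habituative MTM, and gated LTM equations. The signal function, parameters, and single-cell wiring are illustrative assumptions, not the book's exact Spectral Spacing Model; the point is only that one rate parameter μ scales the STM and MTM dynamics, so a slow ("ventral") cell integrates and habituates less than a fast ("dorsal") cell in the same time window.
        # Python: generic three-time-scale sketch (illustrative, not the book's exact model)
        #   STM  dx/dt = mu * (-A*x + (B - x)*(input gated by z and w))   shunting activity
        #   MTM  dz/dt = mu * (C*(1 - z) - D*f(x)*z)                      habituative transmitter
        #   LTM  dw/dt = f(x) * (input - w)                               gated steepest descent
        import numpy as np

        def f(x):                                  # sigmoid signal function (assumed form)
            return x**2 / (0.25 + x**2)

        def step(x, z, w, pre, mu, dt=0.01, A=1.0, B=1.0, C=0.05, D=0.5):
            excite = pre * z * w                   # input gated by transmitter z and weight w
            dx = mu * (-A * x + (B - x) * excite)
            dz = mu * (C * (1.0 - z) - D * f(x) * z)
            dw = f(x) * (pre - w)                  # learning gated by postsynaptic signal
            return x + dt * dx, z + dt * dz, w + dt * dw

        for mu in (1.0, 0.6):                      # "dorsal" (fast) vs "ventral" (slow) rate
            x, z, w = 0.0, 1.0, 0.5
            for _ in range(300):                   # same input, same duration
                x, z, w = step(x, z, w, pre=1.0, mu=mu)
            print(f"mu={mu}: x={x:.3f}  z={z:.3f}  w={w:.3f}")
        # the slow (ventral) cell integrates and habituates less in the same time window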
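  • Code sketch for p613fig16.44, as flagged there: a hedged, generic difference-vector loop of the kind the TPV/DV/GO boxes compute (a VITE-style circuit). Function names, parameters, and the integration scheme are illustrative assumptions, not SOVEREIGN's actual code.
        # Python: DV = TPV - PPV; a volitional GO signal gates how fast the
        # present position integrates toward the target (no GO, no movement).
        import numpy as np

        def vite_step(ppv, tpv, go, dt=0.01, rate=5.0):
            dv = tpv - ppv                        # difference vector
            return ppv + dt * rate * go * dv      # GO-gated integration of PPV toward TPV

        ppv = np.zeros(2)                         # present position vector
        tpv = np.array([1.0, 0.5])                # target position vector
        for t in range(1000):
            go = min(1.0, t * 0.005)              # GO ramps up volitionally
            ppv = vite_step(ppv, tpv, go)
        print(ppv)                                # approaches the target as GO opens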
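  • Code sketch for p629fig17.03, as flagged there (and echoing the recurrent-shunting column of p627tbl17.01): a hedged sketch of contrast enhancement and storage by a recurrent shunting on-center off-surround network with a faster-than-linear signal function. Parameters are illustrative assumptions; passive decay is omitted (A=0) for simplicity.
        # Python: with f(x) = x^2 (faster than linear), the most active site of a
        # shallow initial gradient is amplified toward B and stored; the rest are
        # quenched (winner-take-all), crossing the "head formation threshold".
        import numpy as np

        def f(x):
            return x**2                            # faster-than-linear -> choice and storage

        def contrast_enhance(x0, B=1.0, dt=0.05, steps=40_000):
            x = x0.copy()
            for _ in range(steps):
                s = f(x)
                dx = (B - x) * s - x * (s.sum() - s)   # self-excitation, lateral inhibition
                x += dt * dx
            return x

        initial = np.array([0.20, 0.22, 0.25, 0.30, 0.28])  # shallow morphogen-like gradient
        print(contrast_enhance(initial))
        # -> peak near B=1 at the initially most active site, ~0 elsewhere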
  • Introduction webPage, questions driving this "webSite" (collection of webPages, defined by the menu above) are:
  • How difficult would it be to augment "Transformer Neural Networks" (TrNNs) with Grossberg's concepts? This section is repeated in the Introduction webPage.
  • Grossberg 2021 p229c2h0.60 SMART computer simulations demonstrate that a good enough match of a top-down expectation with a bottom-up feature pattern generates an attentive resonance during which the spikes of active cells synchronize in the gamma frequency range of 20-70 Hz (Figure 5.40). Many labs have reported a link between attention and gamma oscillations in the brain, including two articles published in 2001, one from the laboratory of Robert Desimone when he was at the National Institute of Mental Health in Bethesda (Fries, Reynolds, Rorie, Desimone 2001), and the other from the laboratory of Wolf Singer in Frankfurt (Engel, Fries, Singer 2001).
  • Grossberg 2021 p081c2h0.66 sub-section: How does one evolve a computational brain?
    The above discussion illustrates that no single step of theoretical derivation can derive a whole brain. One needs a method for deriving a brain in stages, or cycles, much as evolution has incrementally discovered ever more complex brains over many thousands of years. The following theoretical method has been successfully applied many times since I first used it in 1957. It embodies a kind of conceptual evolutionary process for deriving a brain.

    Because "brain evolution needs to achieve behavioural success", we need to start with data that embodiey indices of behavioral success. That is why, as illustrated in Figure 2.37 Modelling method and cycle, one starts with Behavioral Data from scores or hundreds of psychological experiments. These data are analyszed as the result of an individual adapting autonomously in real time to a changing world. This is the Arty of Modeling. It requires that one be able to infer from static data curves the dynamical processes that control individual behaviors occuring in real time. One of the hardest things that I teach to my students to do is "how to think in real time" to be able to carry out this speculative leap.

    Properly carried out, this analysis leads to the discovery of new Design Principles that are embodied by these behavioral processes. The Design Principles highlight the functional meaning of the data, and clarify how individual behaviors occurring in real time give rise to these static data curves.

    These principles are then converted into the simplest Mathematical Model using a method of minimal anatomies, which is a form of Occam's Razor.
  • [definitions, models] of consciousness.html -
  • What is consciousness: from historical to Grossberg -
  • directory status & updates copyrights
  • directory status & updates copyrights
  • For greater speed of use, once downloaded you can replace "/home/bill/web/" in links with <your subDirectory where you have stored the files>. I have bash functions a
  • directory status & updates copyrights
  • Browser scan directory

    Software programming & code QNial programming language Linux bash scripts TradingView PineScripts [en, de]crypt instructions oops - haven Wilson 1977 Cosmic trigger, Howells review.html
    oops - haven
    oops - haven
    directory status & updates copyrights
  • 09Jun2021 webSite status - Of ~1,000+ "user usable" links on this webSite (not including links in [adobe pdf, word processing, spreadsheet, etc] files), there are 60-100 problematic links, including links to ~30 external sites that have changed or disappeared. The vast majority of links are in the "Neural Network Conference Guides" and in documentation pages (e.g. the QNial manual), while perhaps 300+ are in the "normal" webPages.
  • This article appears in the October 27, 2000 issue of Executive Intelligence Review.
  • Jas Jain (jjain@cisco.com), maybe in reply to: Dan Brindle: "Earnings growth"
    http://www.yardeni.com/public/sp52_c.pdf
    (http://stats.bls.gov:80/cgi-bin/surveymost?wp)
  • 2005 IJCNN website - official/
  • 2006 WCCI website - official/
  • 2007 IJCNN Orlando website/
  • Holidays - neural networks and genomics.html
  • Howell 2005 - Presentation, Junk DNA and Neural Networks conjecture on directions and implications.ppt
  • Howell 2006 - Genetic specification of neural networks, draft concepts and implications.pdf
  • Howell 2006 - Presentation, Genetic Specification of Recurrent Neural Networks Initial Thoughts.ppt
    1. IJCNN07 Orlando Florida USA - Publicity Chair, Guest Editor for the Neural Networks Special Issue
    2. IJCNN06 Vancouver BC Canada - International Liaison
  • @SchmidhuberAI: a point-for-point critique of ACM's laudation for the 2018 A.M. Turing Award[R1] to LeCun, Bengio & Hinton (LBH). After an Executive Summary (Sec. 3), Sec. 4 splits ACM's text into 21 parts I-XXI and answers each, backed by over 250 references. Science is self-correcting,[SV20] and the stated aim is to fight plagiarism, collusion rings,[LIT21] and systemic academic corruption in all of their more and less subtle forms.[FAKE] The core charge: LBH claim to "briefly describe the origins of deep learning"[DL3a] without even mentioning the first working deep-learning nets of Ivakhnenko and Lapa in 1965[DEEP1-2][R8] (Sec. II), the first really deep feedforward NN[HW1-3] (Sec. D, VI), backpropagation (Linnainmaa, 1970),[BP1-2][R7] architectures of recurrent NNs (1943-56)[MC43][K56] and convolutional NNs (1979),[CNN1] GPU-accelerated NNs (2004),[GPUNN][DAN][DAN1][GPUCNN5] or transformer-like attention through fast weights.[TR1-6][FWP][ATT] LBH cite Hinton (2012) for "dropout" without mentioning that dropout is just a variant of Hanson's earlier method (Sec. XIV), and omit von der Malsburg, who introduced ReLUs in 1973.[CMB] In summation, LBH have repeatedly ignored the previous well-known critiques[DLC][HIN][T20a] and deep learning surveys.[DL1-2]
    Sec. I contains four subsections. A: Speech Recognition (see also Sec. VI & XI & XV) - the first superior end-to-end neural speech recognition combined two methods from my lab: LSTM (1990s-2005),[LSTM0-6] which brought essentially unlimited depth to gradient-based supervised recurrent NNs (the vanishing gradient problem was analyzed by my student Sepp Hochreiter in 1991,[VAN1] long before the similar work of Bengio, Sec. XVII;[MIR] LSTM was refined with my student Felix Gers[LSTM2]), and Connectionist Temporal Classification by my student Alex Graves et al. (2006).[CTC] Our team successfully applied CTC-trained LSTM to speech in 2007[LSTM4] (also with hierarchical LSTM stacks[LSTM14]), unlike the old hybrid approach with Hidden Markov models (HMMs)[BW][BRI][BOU] (Sec. XV) that Hinton et al. (2012) still used[HYB12] without comparing it to CTC-LSTM. CTC-LSTM later dramatically improved Google's on-device speech recognition[GSR19] (not any longer on the server). B: Natural Language Processing (see also Sec. VI & XI & XVI) - in 2001, we showed that LSTM can learn languages unlearnable by traditional models such as HMMs;[LSTM13] see Sec. XVI for the earlier neural probabilistic text model.[SNT] C: Robotics & RL - since 2003, our team has used LSTM for Reinforcement Learning (RL) and robotics;[LSTM-RL][RPG][LSTMPG] in 2018, a PG-trained LSTM was the core of OpenAI Five, which learned to defeat human experts in the Dota 2 video game,[OAI2] and an LSTM core was likewise used to beat a pro player in Starcraft, which is theoretically harder than Chess or Go[DM2] in many ways. Beyond A-C: chemistry, molecular design, lip reading, speech synthesis,[AM16] and more; a large share of datacenter inference compute was being used for LSTM (only 5% for the CNNs of Sec. D),[JOU17] and apparently the first LSTM journal paper[LSTM1][R5] is now the most frequently cited. D: Computer Vision - the basic CNN architecture with convolutional and downsampling layers is due to Fukushima (1979);[CNN1] the popular downsampling variant called max-pooling was introduced by Weng et al. (1993).[CNN3] My own team showed in 2010[MLP1] how to train deep NNs by plain backpropagation, contrary to claims by Hinton,[VID1] who said that "nobody in their right mind would ever suggest" this. Our GPU CNNs then won four vision contests in a row (15 May 2011, 6 Aug 2011, 1 Mar 2012, 10 Sep 2012),[GPUCNN5] with a CVPR paper on DanNet[GPUCNN3] before the similar work of Hinton's team; our CNN image scanners were 1000 times faster than previous methods.[SCAN] The VGG network (ImageNet 2014 winner)[GPUCNN9] and other highly cited CNNs[RCNN1-3] followed, as did ResNet, the ImageNet 2015 winner[HW2] (Dec 2015), a version of our Highway Net. See also Sec. XVIII & XIV & XI & VI.
    Sec. II: NN architectures were proposed already in the 1940s/50s,[MC43][K56] a deep convolutional NN architecture in the 1970s,[CNN1] and NNs without hidden layers learned in 1958[R58] (shallow learning goes back to regression and the method of least squares[DL1-2]). The standard narrative of LBH & co-authors, e.g. Sejnowski,[S20] goes more or less like this: "In 1969, Minsky & Papert[M69] showed the limits of NNs, until researchers took a fresh look at the problem in the 1980s."[S20] However, the 1969 book[M69] addressed a "problem" of Gauss & Legendre's shallow learning that deeper nets of 1965 had already overcome (but see a 1989 paper[MOZ]); see Sec. 1 of the overview.[MIR] "Very Deep Learning" tasks of depth > 1000 were later solved,[UN2][DL1][UN] and by 2003, LSTM variants successfully dealt with language problems of depth up to 30,000.[LSTM17] Until 2019, deeplearning.net advertised deep learning as "moving beyond shallow machine learning since 2006",[DL7] referring to Hinton's and Bengio's post-2005 work, not to mention Ivakhnenko, whom Hinton,[UN4] Bengio,[UN5] and LBH[DL3,DL3a] did not cite either (Sec. X). My comments systematically track the sequential order of ACM's claims.

    Sec. III-XII highlights: Much of early AI in the 1940s-70s was actually about theorem proving.[ZU48][NS56] In 1936, Turing introduced the Turing Machine[TUR] and rederived the above-mentioned result;[CHU][TUR][HIN][GOD21,21a][TUR21][LEI21,21a] in the same year, Emil Post published yet another independent universal model of computing.[POS] Gödel posed the open problem "P=NP?" in his famous letter to John von Neumann (1956).[GOD56][URQ10] Zuse's patent application of 1936[ZU36-38][Z36][RO98][ZUS21] predated Claude Shannon's work and included a conditional jump instruction,[RO98] and Zuse also created the first high-level programming language in the early 1940s.[BAU][KNU] Against this backdrop, ACM credits LBH with foundations laid by others: NNs that learn internal representations (1965),[DEEP1-2][R8] backpropagation (1970),[BP1,2][R7] architectures of recurrent NNs (1943-56)[MC43][K56] and convolutional NNs (1979),[CNN1] GPU-accelerated NNs (2004, 2010),[GPUNN][GPUCNN5][MLP1-2] transformer-like attention through fast weights,[TR1-6][FWP][ATT] and more.[DL1-2][R2-R8] Superhuman GPU-CNN vision was achieved by our group in 2010-2011,[MLP1-2][DAN][DAN1][GPUCNN5][R6] including the first deep-learning wins in medical imaging (Sept 2012, on cancer/mitosis detection)[GPUCNN5,8][MGC] and greatly improved steel defect detection[ST] - all before the similar GPU-accelerated AlexNet of Hinton's team (Sec. D & XI), and without LBH citing this work.[DL1][DLC][HIN][R2-R4][R7-R8] The term "deep learning" was first introduced to Machine Learning by Dechter (1986), and to NNs by Aizenberg et al. (2000);[DL2] to my knowledge, LBH have never cited them. (Margin note: compare our 2005 paper on deep RL.[DL6,6a]) LBH started talking about "deep learning ... moving beyond shallow machine learning since 2006",[DL7] referring to their unsupervised pre-training methods of 2006 (Sec. III).

    Sec. XI & XIX: ACM correctly mentions advancements through GPUs. The first to use GPUs for NNs were Jung & Oh (2004);[GPUNN][GPUCNN5] our GPU-trained deep NNs set an important benchmark record in 2010,[MLP1-2] showing that plain backpropagation can train deep NNs, contrary to Hinton's claims, and brought superior computer vision (explicitly mentioned by ACM) for the first time.[R6] The speech recognition successes explicitly mentioned by ACM were actually dominated by the LSTM and CTC of our team[LSTM1-4][CTC] rather than by hybrids with HMMs[BW][BOU][BRI][HYB12] (Sec. A); as mentioned in Sec. B and XVI, the first superior end-to-end neural machine translation was also based on LSTM. ACM credits backpropagation to Rumelhart et al. (1985-86)[RUM] without mentioning Werbos, who applied it to NNs in 1982,[BP2] or Linnainmaa, the inventor of this famous algorithm for credit assignment in networks (1970);[BP1] Kelley already had a precursor thereof in the field of control theory (1960),[BPA] followed by related work of the early 1960s.[BPB][BPC][R7] Rumelhart et al. demonstrated internal representations in hidden layers of NNs,[RUM] but this was essentially just an experimental analysis of a known method.[BP1-2] The history of backpropagation can be found at Scholarpedia[DL2] and in my award-winning survey.[DL1] Also see Sec. XIX, II.

    Some claim that "backpropagation is just the chain rule of Leibniz (1676) & L Hinton[AOI] Rumelhart[RUM] with the "invention" of backpropagation. for "creating" the method and for other things he didn Neither in a popular book[AOI] nor in other recent work[DL3,DL3a] did he cite Linnainmaa (1970),[BP1] the true creator.[BP4-5] that his 2015 survey[DL3] does cite Werbos (1974) who however described the method correctly only later in 1982[BP2] and also failed to cite Linnainmaa[BP1] (compare Amari Linnainmaa It wasn one person who published first[BP1] and therefore should get the credit. Boltzmann Machine (BM)[BM] a learning.[HIN] Recently, however, I learnt through a reader that even the BM paper[BM] did not cite prior relevant work by Sherrington & Kirkpatrick[SK75] and Glauber.[G63] (Compare related work.[H86][H88][S93]) multilayer perceptrons with arbitrarily many layers.[DEEP1-2][HIN] Sec. II V &

    Sec. XIV-XVII: As mentioned in Sec. II, Sejnowski's claim that researchers only "took a fresh look at the problem in the 1980s"[S20] ignores that the 1969 book[M69] addressed a "deep learning problem" (a limitation of Gauss & Legendre's shallow learning) that had already been solved, also in the 1970s, especially outside of the Anglosphere.[DEEP2][BP6][CNN1][DL1-2] Dropout is actually a variant of Hanson's earlier method, as we showed already in 2011 in a contest where LeCun's team competed (Sec. D); back then, the only really decisive CNN-related advance was the GPU acceleration of deep CNNs,[GPUCNN1,3,5][R6] which already before ImageNet 2012[R6] had a monopoly on winning computer vision competitions[GPUCNN5] and more than "halved the error rate for object recognition" (ACM's words). Speech recognition hybrids date to the late 1980s;[BW][BRI][BOU] LSTM (1990s-2005)[LSTM0-6] and CTC[CTC] (2006) were applied to speech in 2007,[LSTM4][LSTM14] and CTC-LSTM is end-to-end-neural and thus very different from (and superior to) the hybrid methods[HYB12] (Sec. A). Five years before Bengio's neural probabilistic language model,[NPM] in 1995, we already had a similar, excellent neural probabilistic text model,[SNT] which Bengio characterizes only briefly as "related" (see also Pollack's earlier work). ACM dedicates an extra section to attention-based Transformers,[TR1-6] citing Bengio, yet my Fast Weight Programmers of 1991[FWP0-1] computed what are now often called keys and values for self-attention, and linear Transformers or Performers[TR5-6] are formally equivalent to my 1991 FWPs (apart from normalization);[FWP6][FWP] in 1993, I introduced attention terminology in this context,[ATT] and RNNs that program themselves. Bengio was the reviewer of my 1990 paper[ATT2] and later presented similar ideas as his own work.[ATT3] GANs[GAN0-1] (2010-2014) are instances of my adversarial Artificial Curiosity of 1990,[AC90,90b][AC20][R2] (see also surveys[AC09-10]) in which a predictor NN minimizes the error that a generator NN tries to maximize; this principle is now widely used for exploration in RL (e.g. Sec. C) and for image synthesis[GAN1] (also mentioned by ACM in Sec. XVIII). Earlier adversarial machine learning settings[S59][H90] neither involved unsupervised NNs nor were about modeling data nor used gradient descent.[AC20] Bengio et al. neither cited the original work[AC90,90b][AC20] nor corrected their erroneous claims[GAN1] about Predictability Minimization; Goodfellow eventually admitted that PM is adversarial (his paper[GAN1] still claims the opposite), and I published a correction myself in the hopes of setting the annals of history straight.[AC20][R2] The vanishing gradient dispute was settled in favor of Sepp,[VAN1] yet even after a common publication,[VAN3] Bengio published papers[VAN4][XAV] that are poor indicators of the truly pioneering work.[NAT1] Bengio also claims[YB20] priority on meta-learning, which I started in 1987[META1][META] long before him,[R3] and on unsupervised pre-training, which dates back to 1991-93.[UN0-2][UN] Bengio has also heavily used our LSTM (Sec. A-C), renaming a variant of our vanilla LSTM architecture[LSTM2] (2000) "gated recurrent units (GRU)"[LSTMGRU] without citing the work that introduced gated recurrent units; our team had already automatically evolved lots of additional LSTM variants and topologies in 2009[LSTM7] without changing the name of the basic method. GRUs can neither learn to count[LSTMGRU2] nor learn simple non-regular languages,[LSTMGRU2] and fell short according to Google Brain.[LSTMGRU3] Hinton's unsupervised pre-training (2006) mirrors my work on this[UN0-2] (Sec. II above),[UN] published in 1991-92[UN1] when compute was about 1000 times more expensive than in 2006; his survey (2015)[DL3][DLC] and distillation paper[DIST2] (2006) did not cite my much earlier original work (1991),[UN1][UN] not even in his later patent application, and for attention[ATT3] (2010) he was both reviewer and editor of my 1990 summary[ATT2] (Sec. XVI above).

    The ten priority disputes mentioned in the present Sec. XVII are not the only ones.[R4] Remarkably, three of them are related to the 1991 paper[UN1][UN] which in many ways started what people now call deep learning; most of them go back to work of 1990-91.[MIR] See Sec. I for additional related issues of credit assignment. LeCun's GPU-CNN results came after ours: all of this happened before LeCun, whose team showed three times worse performance in one contest,[DAN1] and before the similar AlexNet won ImageNet 2012[GPUCNN5][R6] and the similar VGG network[GPUCNN9] won ImageNet 2014; our net achieved the first medical-imaging wins (Sept 2012, detection of mitosis/cancer),[GPUCNN5,7,8][MGC] and many major companies are using this now (Sec. D & VII). ACM also explicitly mentions speech recognition and speech synthesis[AM16][DL1] (Sec. A, B, VI, XI). In 1960, Kelley already had a precursor of the backpropagation algorithm,[BPA] and many besides LeCun have worked "to speed up backpropagation algorithms"[DL1] (as ACM puts it). "Hierarchical feature representation" in deep learning networks is what Ivakhnenko & Lapa (1965)[DEEP1-2] (and also Fukushima[CNN1][DL2]) had long before LeCun (Sec. D & II & XIII & V), and LeCun et al. neither cited the origins[BP1] (1970) of this widely used type of automatic differentiation for differentiable networks of modules[DL2][BP4-5][DLC] nor earlier such systems[S80] (see also Sec. XIX & XII, and Pollack's prior work).

    (Furthermore, "complex networks of modules where backpropagation is performed" were the central theme of my much earlier habilitation thesis (1993).[UN2] For example, our see "100 Authors against Einstein."[AH1] "If you cannot dispute a fact-based message, attack the messenger himself."[HIN] award can ever change that.[HIN] and their co-workers have contributed useful improvements of deep learning methods.[CNN2,4][CDI][LAN][RMSP][XAV][ATT14][CAPS] whom they did not cite II, V, XII, XIX, XXI, XIII, XIV, XI, and XX, and 2). Sec. I, A, B, C, D, XVII, VI, and XVI). to self-correction,"[SV20] as is already the standard in other scientific fields. in popular science venues without peer review? For example, the narrator of a popular 2018 Bloomberg video[VID2] Germany and Switzerland (LSTM & CTC; see Sec. A) long before Hinton Google on Google Translate[WU] mentions LSTM over 50 times (see Sec. B). In ad hominem style,[AH2-3] claiming credit he doesn LeCun also called the GANs of Bengio of my work in 1990.[AC90,90b][AC20][R2] According to Bloomberg,[AV2] Bengio has simply "denied my claims" without backing up his denial by any facts; see Sec. XVII. and forcefully contradict public figures who promote it."[FAKE] Our LSTM paper[LSTM1] has got more citations than any paper by Bengio or LeCun,[R5] Hinton deep NNs (2010)[MLP1] [UN][UN0-3] and later championed by Hinton;[UN4][VID1] see Sec. D). Hinton (2012)[GPUCNN4] characterizes AlexNet won one;[R6] see Sec. D, XIV. The highly cited VGG network (2014)[GPUCNN9] Hinton of Hinton for a book by Rumelhart & McClelland[R5]). method[BP1] whose origins of Ivakhnenko whom he has never cited;[DEEP1-2][R7-R8] see Sec. II, XIII. Bengio (1990)[AC90,90b][AC20][R2] which he did not cite; see Sec. XVII. Hinton were preceded by Hanson As recently as of 2021, ACM published yet another misleading deep learning "survey" by LBH,[DL3a] again heavily citing LBH without Consult the Executive Summary and Sec. I-XXI of this critique for more. have their conceptual and technical roots in my labs in Munich and Lugano,[MOST] of deep learning MLPs since 1965[DEEP1-2] (see Sec. II, XX) and backpropagation (1960-70)[BPA][BP1] (see Sec. XIX, XII) and convolutional NNs since 1979[CNN1-4] (see Sec. XVIII, D). Our LSTM (1990s, see Sec. A, B; also for RL, 2003-, see Sec. C) → our Highway Net (May 2015) → ResNet (Dec 2015, see Sec. D). Our adversarial Artificial Curiosity (1990) → GANs (2010s, see Sec. XVII). our own unsupervised pre-training of deep NNs (1991, see Sec. II & III) for recurrent NNs in the 1990s → our LSTM (see Sec. A-C) and for feedforward NNs in 2010 → our DanNet (2011) → AlexNet (2012); VGG Net (2014) (see Sec. D). superior computer vision (2011, see Sec. D, XVIII), speech recognition (with our CTC, 2007-15, see Sec. A), machine translation (2016, see Sec. B), robotics & video game players (2018-19, see Sec. C), Fast Weight Programmers (1991, see Sec. XVI) are formally equivalent to linear Transformers (now popular in NLP). I, A, B, C, D, VII, XVIII. depth that really learned.[DEEP1-2][R8] Five years later, modern

    Yes, this critique is also an implicit critique of certain other awards to LBH.[HIN] These issues were discussed in many threads at reddit.com/r/MachineLearning[R1-R12] (back then the largest machine learning forum, with over 800k subscribers), many of them influenced by my overview.[MIR]

    Dr. LeCun himself is well aware of the challenges to scientific integrity in our field:[LECP] "... else cites."[LECP]

    Note that I am insisting on proper credit assignment not only in my own research field but also in quite disconnected areas,[HIN] as demonstrated by my numerous letters in this regard published in Science and Nature, e.g., on the history of aviation,[NASC1-2] the telephone,[NASC3] the computer,[NASC4-7] resilient robots,[NASC8] and scientists of the 19th century.[NASC9]

    Creative Commons License. Thanks to many expert reviewers for useful comments. Since science is about self-correction, let me know under juergen@idsia.ch if you can spot any remaining error. Many additional relevant publications can be found in my arXiv page. This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
    Selected references (the full annotated list of over 250 entries accompanies the original article):
    [CNN1a] A. Waibel. Phoneme Recognition Using Time-Delay Neural Networks. Meeting of IEICE, Tokyo, Japan, 1987.
    [DL3a] Y. Bengio, Y. LeCun, G. Hinton. Turing Lecture: Deep Learning for AI. Communications of the ACM, July 2021.
    [HW1] Highway networks: preprints arXiv:1505.00387 (May 2015) and arXiv:1507.06228 (July 2015); also at NIPS 2015. ResNets[HW2] are a version of this where the gates are always open: g(x)=t(x)=const=1. Highway Nets perform roughly as well as ResNets[HW2] on ImageNet,[HW3] and highway layers are also often used for natural language processing, where the simpler residual layers do not work as well.[HW3]
    [HW2] ResNet: arXiv:1512.03385 (Dec 2015). Residual nets are a version of Highway Nets.[HW1]
    [LSTM1] S. Hochreiter, J. Schmidhuber. Long Short-Term Memory. Neural Computation, 9(8):1735-1780, 1997. Based on [LSTM0].
    [R1] Reddit/ML, 2019. Hinton, LeCun, Bengio receive ACM Turing Award.
    [R2] Reddit/ML, 2019. J. Schmidhuber really had GANs in 1990.
    [R3] Reddit/ML, 2019. NeurIPS 2019 Bengio Schmidhuber Meta-Learning Fiasco.
    [R4] Reddit/ML, 2019. Five major deep learning papers by G. Hinton did not cite similar earlier work by J. Schmidhuber.
    [R5] Reddit/ML, 2019. The 1997 LSTM paper by Hochreiter & Schmidhuber has become the most cited deep learning research paper of the 20th century.
    [R6] Reddit/ML, 2019. DanNet, the CUDA CNN of Dan Ciresan in J. Schmidhuber
    [R7] Reddit/ML, 2019. J. Schmidhuber on Seppo Linnainmaa, inventor of backpropagation in 1970.
    [R8] Reddit/ML, 2019. J. Schmidhuber on Alexey Ivakhnenko, godfather of deep learning 1965.
    [R11] Reddit/ML, 2020. Schmidhuber: Critique of Honda Prize for Dr. Hinton.
    [R12] Reddit/ML, 2020. J. Schmidhuber: Critique of Turing Award for Drs. Bengio & Hinton & LeCun.
    [UN2] J. Schmidhuber. Habilitation thesis, TUM, 1993.
    [VAN1] S. Hochreiter. Untersuchungen zu dynamischen neuronalen Netzen (Studies of dynamic neural networks). Diploma thesis, TUM, 1991 (advisor J. Schmidhuber).
    [VAN4] Y. Bengio. Neural net language models. Scholarpedia, 3(1):3881, 2008.


    Twitter:
    @SchmidhuberAI: In 1991, I published Fast Weight Programmers (FWPs): a slow NN learns by gradient descent[BP1-4][BPA][R7] to rapidly modify the fast weights of another NN (Sec. 1). One of them[FWP0-1] computed fast weight changes through additive outer products of self-invented activation patterns, now often called keys and values for self-attention (Sec. 2). The very similar Transformers[TR1-2] combine this with projections and softmax, and Transformers with linearized self-attention[TR5-6] are formally equivalent to the 1991 Fast Weight Programmers[MOST] (see this tweet). In 1993, I also introduced attention terminology in this context[ATT] (Sec. 4), and RNNs that program themselves (Sec. 3). FWPs address the vanishing gradient problem, aka the deep learning problem (analyzed a few months later in 1991[VAN1]), through additive fast weight changes (Sec. 5). A brand new, improved version[FWP6] of the 1991 fast weight update rule appears in Sec. 6, along with reinforcement learning through neuroevolution[FWP5] (2005-, Sec. 7), goal-conditioned policy generators (2022),[GGP] and metalearning machines that learn to learn[FWPMETA1-9] (1992-2022, Sec. 8). As I have frequently emphasized since 1990,[AC90][PLAN][META] inspired by universal self-referential formal systems,[GOD][GOD34] I built NNs whose outputs are changes of the programs or weight matrices of other NNs[FWP0-2] (Sec. 1, 2, 3), or even of their own weight change algorithms or learning algorithms[FWPMETA1-5] (Sec. 8): a gradient descent procedure[BP1-4][BPA][R7] can compute a direction in program space where one may find a better program,[AC90] or a better program-modifying program.[FWP0-2][FWPMETA1-5] Historical context: Ivakhnenko's deep nets had layers[DEEP1-2] whose activation functions were Kolmogorov-Gabor polynomials, which include the now popular multiplicative gates.[DL1-2] Von der Malsburg was the first to explicitly emphasize the importance of NNs with rapidly changing weights;[FAST] the second paper on this was published by Feldman in 1982.[FASTa] The weights of a 1987 NN were sums of weights with a large learning rate and weights with a small rate[FASTb][T22] (but have nothing to do with the NN-programming NNs discussed below). Fast Weight Programmers were published in 1991-93[FWP0-2] (Sec. 1, 2, 3, 4), anticipating Transformers[TR1-6] (Sec. 2, 3, 4, 5). A slow NN that learns by backpropagation[BP1-4] to rapidly modify the fast weights of another NN[FWP0] was essentially published in Neural Computation,[FWP1] and achieves external-memory control (Sec. 4) in a fully neural way, rather than in a hybrid fashion;[PDA1][PDA2][DNC] compare Synthetic Gradients.[NAN1-5] One of the FWPs of 1991[FWP0-1] is illustrated in the figure. A disadvantage addressed in Sec. 2 is that the slow net needs many output units if the fast net is large.

    The Fast Weight Programmer[FWP0-1] depicted in Sec. 1 has a slow net output unit for each fast weight. However, Section 2 of the same 1991 paper[FWP0] introduced a more compact scheme, akin to linear[TR5-6] Transformers:[TR1-2] outer products of activation patterns, i.e. second-order tensor products,[FWP0-3a] are added to the fast weights (which then may be normalized by a squashing function[FWP0]). The highly successful Transformers of 2017[TR1-2] can be viewed as a combination of my additive outer product fast weight principle[FWP0-2] with projections and softmax (NN-programmed fast weights, Sec. 5 & 1); linear Transformers (2020-21)[TR5-6] abandoned the softmax, essentially resurrecting the original 1991 system[FWP0-1] (compare Sec. 6). Associative memory through outer products goes back at least to Hebb and Steinbuch, but NN-programmed fast weights date to 1991.[FWP0-3a][TR5-6] I offered the FWPs of 1991[FWP0-1] as an alternative to recurrent NNs (Sec. 1); modern Transformers are also viewed as RNN alternatives, despite their limitations.[TR3-4] The slow net and the fast net of the 1991 system[FWP0-1] in Sec. 2 were feedforward NNs (FNNs), like most current Transformers;[TR1-6] in 1993 I collapsed all of this into a single RNN that could rapidly reprogram all of its own fast weights through additive outer product-based weight changes,[FWP2] one motivation reflected by the title of the paper[FWP2] (see also our more recent work on FWPs since 2017,[FWP3-3a][FWPMETA7][FWP6] and compare a recent study[RA21]). Today, everybody is talking about attention when it comes to describing the principles of Transformers;[TR1-2] the additive outer products[FWP0-1] of the Fast Weight Programmers in Sec. 2 and Sec. 3 correspond to the attention weights or self-attention weights (see also[FWP4b-d]), terminology used in my 1993 paper.[FWP2][ATT] Apart from possible normalization/squashing,[FWP0] fast weight changes are additive (Sec. 1 & 2), and so is the core of LSTM, analyzed by my brilliant student Sepp Hochreiter a few months later in his 1991 diploma thesis:[VAN1] by favoring additive operations yielding non-vanishing first derivatives and error flow,[VAN1] both additive FWPs[FWP0-2] and Transformers[TR1-6] follow the same approach (compare Sec. 2 and Sec. 4 on attention terminology since 1993). The Highway Net is essentially a feedforward version of LSTM[LSTM1] with forget gates,[LSTM2] and the Residual Net or ResNet[HW2] (Dec 2015) is a variant thereof, now on smartphones.[DL4] LSTM can rapidly learn to solve some tasks quickly[LSTM13] while plain Transformers cannot. Recent work of February 2021[FWP6] connects attention mechanisms[TR5-6] and Fast Weight Programmer[FWP0-2] variants: building on previous work[FWPMETA7] on FWPs (Sec. 1, 2, 3, 8), we replace the 1991 elementary programming instruction based on additive outer products[FWP0-2] by a delta rule-like[WID] update, improving language modeling tasks;[FWP6] our code is public. Work of June 2021[FWP7] (also with Robert Csordas) points out that the original FWP formulation of 1991[FWP0-1] is more general than that of linear Transformers: a slow NN continually reprograms the weights of a fast NN; our code is public. With my former postdoc Faustino Gomez[FWP5] (now CEO of NNAISENSE), FWPs were trained by neuroevolution for reinforcement learning (2005-); our 2005 paper on deep RL[DL6,6a] exploited that very compact codes can generate the numerous weights of large NNs.[KO0-2][CO1-4] A code sketch of the additive outer-product update follows.
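    Code sketch (hedged): the additive outer-product fast weight write and its linear-attention read, per the description above. Dimensions, data, and the delta-rule comment are illustrative assumptions; see [FWP0-2][FWP6][TR5-6] for the actual formulations.
        # Python: write W += outer(value, key); read out = W @ query -- the
        # unnormalized core shared by 1991 FWPs and linear self-attention.
        import numpy as np

        rng = np.random.default_rng(1)
        d = 4
        W = np.zeros((d, d))                       # fast weight matrix (short-term memory)

        def write(W, key, value):
            return W + np.outer(value, key)        # additive outer-product "programming"

        def read(W, query):
            return W @ query                       # retrieval, as in linear self-attention

        keys   = [rng.normal(size=d) for _ in range(3)]
        values = [rng.normal(size=d) for _ in range(3)]
        for k, v in zip(keys, values):
            W = write(W, k, v)

        print(read(W, keys[0]))                    # ~ values[0]*(k0.k0) plus crosstalk
        print(values[0] * (keys[0] @ keys[0]))     # dominant retrieved component

        # A delta rule-like variant in the spirit of the 2021 update (an assumption
        # here, see [FWP6]): W = W + beta * np.outer(value - W @ key, key)
        # overwrites rather than merely accumulates, reducing crosstalk for repeated keys.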

    Recent work of 2022[GGP] extends this to goal-conditioned generators of policies. In references[FWPMETA1-5] since 1992, the slow NN and the fast NN (Sec. 1) are recurrent and identical: the RNN can learn to run its own weight change algorithm, and it can even observe its own errors or reward signals, called eval(t+1) in the image.[FWPMETA5]

    The 1993 FWP of Sec. 3[FWP2] also was an RNN. Unlike the self-referential RNN above,[FWPMETA1-5] however, it used outer products between key patterns and value patterns (Sec. 2) to manipulate fast weights implementing functions of two variables[HO1] (more on LSTM and fast weights in Sec. 5). In 2020, Imanol Schlag et al. augmented an LSTM with an associative fast weight memory,[FWPMETA7] useful in partially observable environments.[FWPMETA7] Our recent MetaGenRL (2020)[METARL10] meta-learns learning algorithms; see the blog post of my PhD student Louis Kirsch. His VS-ML uses outer-product-like fast weights encoded in the activations of LSTMs,[FWPMETA6] akin to the fast weights implementing functions of two variables[FWP2] (Sec. 3). VS-ML can also learn to implement the backpropagation learning algorithm[BP1-4] purely in the end-to-end differentiable forward dynamics of RNNs.[FWPMETA6]

    In 2022, we also published at ICML a modern self-referential weight matrix (SRWM)[FWPMETA8] based on the 1992 SRWM.[FWPMETA1-5] It learns to run and improve its own weight change algorithm: self-improvement (compare this tweet). Figure: a modern self-referential weight matrix (2022) based on the one of 1992.
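    Here is a toy sketch loosely following the delta-rule-style self-modification described above for the 2022 SRWM.[FWPMETA8] The row slicing, the sizes, and the sigmoid-squashed self-invented learning rate are illustrative assumptions, not the paper's exact parameterization:

        import numpy as np

        rng = np.random.default_rng(2)
        d_in, d_out = 5, 3
        m = d_out + 2 * d_in + 1                   # rows: output, key, query, learning rate
        W = rng.normal(scale=0.1, size=(m, d_in))  # the self-referential weight matrix

        def srwm_step(x, W):
            z = W @ x
            y = z[:d_out]                          # ordinary output
            k = z[d_out:d_out + d_in]              # self-invented key
            q = z[d_out + d_in:d_out + 2 * d_in]   # self-invented query
            lr = 1.0 / (1.0 + np.exp(-z[-1]))      # squashed, self-invented learning rate
            v_new, v_old = W @ q, W @ k            # target vs. current "value" for key k
            W = W + lr * np.outer(v_new - v_old, k)  # delta-rule self-modification
            return y, W

        for t in range(4):
            y, W = srwm_step(rng.normal(size=d_in), W)

    The point of the exercise: the same matrix W both produces the output and generates the ingredients (key, query, learning rate) of its own update.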
Transformers with linearized self-attention were published in Neural Computation 1992; they are equivalent to Fast Weight Programmers (apart from normalization), separating storage and control. Keys and values were called FROM and TO. The attention terminology was introduced at ICANN 1993. Juergen Schmidhuber.

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

Selected references (entries recoverable from the original list; links omitted):
[AMH1] S.-I. Amari (1972). First publication of what was later sometimes called the Hopfield network[AMH2] or Amari-Hopfield Network.
[BP2] P. J. Werbos (1982). First application of backpropagation[BP1] to NNs (concretizing thoughts in his 1974 thesis).
[FWP0] J. Schmidhuber. Learning to control fast-weight memories: an alternative to recurrent nets. TR FKI-147-91, TUM, March 1991.
[FWP1] J. Schmidhuber. Learning to control fast-weight memories: an alternative to dynamic recurrent networks. Neural Computation, 4(1):131-139, 1992.
[FWP2] J. Schmidhuber. Reducing the ratio between learning complexity and number of time varying variables in fully recurrent nets. Proc. ICANN 1993. (See tweet of 2022 for 30-year anniversary of the attention terminology.[ATT])
[FWP6] I. Schlag, K. Irie, J. Schmidhuber. Linear Transformers Are Secretly Fast Weight Programmers. ICML 2021. Preprint: arXiv:2102.11174.
[FWP7] Preprint: arXiv:2106.06295 (June 2021).
[FWPMETA5] J. Schmidhuber. An introspective network that can learn to run its own weight change algorithm. In Proc. of the Intl. Conf. on Artificial Neural Networks, 1993.
[FWPMETA6] Meta learning backpropagation and improving it. Preprint arXiv:2012.14905 [cs.LG], 2020.
[FWPMETA7] Learning associative inference using fast weight memory. Report arXiv:2011.07831 [cs.AI], 2020.
[FWPMETA8] A modern self-referential weight matrix that learns to modify itself. ICML 2022. Preprint: arXiv:2202.05780.
[GGP] Goal-conditioned generators of deep policies. Preprint arXiv/2207.01570, 4 July 2022 (submitted in May 2022).
[HW1] Highway networks. Preprints arXiv:1505.00387 (May 2015) and arXiv:1507.06228 (July 2015); also at NIPS 2015. Essentially the LSTM with forget gates[LSTM2] for FNNs instead of RNNs. ResNets[HW2] are a version of this where the gates are always open: g(x)=t(x)=const=1.
[HW2] Residual nets. arXiv:1512.03385 (Dec 2015). Residual nets are a version of Highway Nets.[HW1]
[HW3] arXiv:1612.07771 (2016); also at ICLR 2017.
[LSTM1] S. Hochreiter, J. Schmidhuber. Long Short-Term Memory. Neural Computation, 9(8):1735-1780, 1997.
[NAN1-5] On Synthetic Gradients; e.g., preprint arXiv:1608.05343, 2016.
The Neural Data Router: Adaptive Control Flow in Transformers Improves Systematic Generalization. Proc. ICLR 2022. Preprint arXiv/2110.07732.
[R3] Reddit/ML, 2019. NeurIPS 2019 Bengio Schmidhuber Meta-Learning Fiasco.
[R4] Reddit/ML, 2019. Five major deep learning papers by G. Hinton did not cite similar earlier work by J. Schmidhuber.
[R7] Reddit/ML, 2019. J. Schmidhuber on Seppo Linnainmaa, inventor of backpropagation in 1970.
[UN1] J. Schmidhuber, 1992. Based on TR FKI-148-91, TUM, 1991.[UN0]
[UN2] J. Schmidhuber. Habilitation thesis, TUM, 1993. (Includes credit assignment across nets of depth > 1000.)
[VAN1] S. Hochreiter. Untersuchungen zu dynamischen neuronalen Netzen (Investigations of dynamic neural networks). Diploma thesis, TUM, 15 June 1991 (advisor J. Schmidhuber).


    @SchmidhuberAI
    Preprint: arXiv:2212.11279 (a report that also mentions the work of my own team).
    Sec. 1: Introduction
    Sec. 2: 1676: The Chain Rule For Backward Credit Assignment
    Sec. 3: Circa 1800: First Neural Net (NN) / Linear Regression / Shallow Learning
    Sec. 4: 1920-1925: First Recurrent NN (RNN) Architecture. ~1972: First Learning RNNs
    Sec. 5: 1958: Multilayer Feedforward NN (without Deep Learning)
    Sec. 6: 1965: First Deep Learning
    Sec. 7: 1967-68: Deep Learning by Stochastic Gradient Descent
    Sec. 8: 1970: Backpropagation. 1982: For NNs. 1960: Precursor.
    Sec. 9: 1979: First Deep Convolutional NN (1969: Rectified Linear Units)
    Sec. 10: 1980s-90s: Graph NNs / Stochastic Delta Rule (Dropout) / More RNNs / Etc
    Sec. 11: Feb 1990: Generative Adversarial Networks / Artificial Curiosity / NN Online Planners
    Sec. 12: April 1990: NNs Learn to Generate Subgoals / Work on Command
    Sec. 13: March 1991: NNs Learn to Program NNs. Transformers with Linearized Self-Attention
    Sec. 14: April 1991: Deep Learning by Self-Supervised Pre-Training. Distilling NNs
    Sec. 15: June 1991: Fundamental Deep Learning Problem: Vanishing/Exploding Gradients
    Sec. 16: June 1991: Roots of Long Short-Term Memory / Highway Nets / ResNets
    Sec. 17: 1980s-: NNs for Learning to Act Without a Teacher
    Sec. 18: It's the Hardware, Stupid!
    Sec. 19: But Don't Neglect the Theory of AI (Since 1931) and Computer Science
    Sec. 20: The Broader Historic Context from Big Bang to Far Future
    Sec. 21: Acknowledgments
    Sec. 22: 555+ Partially Annotated References (many more in the award-winning survey[DL1])
    The piece also addresses quite erroneous ideas about the origins of the universe (see the final section).

    A history of AI written in the 1980s would have emphasized topics such as theorem proving,[GOD][GOD34][ZU48][NS56] logic programming, expert systems, and heuristic search[FEI63,83][LEN83] (an old area of research now seeing renewed interest). Practical AI dates back at least to 1914, when Leonardo Torres y Quevedo (see below) built the first working chess end game player.[BRU1-4] The theory of AI dates back at least to 1931-34, when Kurt Gödel identified fundamental limits of any type of computation-based AI.[GOD][BIB3][GOD21,a,b] A history of AI written in the early 2000s would have put more emphasis on topics such as support vector machines and kernel methods,[SVM1-4] Bayesian (actually Laplacian or possibly Saundersonian[STI83-85]) reasoning[BAY1-8][FI22] and other concepts of probability theory and statistics,[MM1-5][NIL98][RUS95] decision trees, e.g.,[MIT97] ensemble methods,[ENS1-4] swarm intelligence,[SW1] and evolutionary computation[EVO1-7] ([TUR1], unpublished). Why? Because back then such techniques drove many successful AI applications.

    A history of AI written in the 2020s must emphasize concepts such as the even older chain rule[LEI07] and deep nonlinear artificial neural networks (NNs) trained by gradient descent. Relevant early meetings include the 1940s-50s Macy conferences[MACY51] and the 1951 Paris conference on calculating machines and human thought, now often viewed as the first conference on AI.[AI51][BRO21][BRU4] Why? Because much of modern AI is based on "deep learning" with NNs.[DL1-2][DEC]

    The present piece also debunks a frequently repeated, misleading "history of deep learning"[S20][DL3,3a] which ignores most of the pioneering work mentioned below.[T22] See Footnote 6. The title image of the present article is a reaction to an erroneous piece of common knowledge which says[T19] that the use of NNs "as a tool to help computers recognize patterns and simulate human intelligence had been introduced in the 1980s," although such NNs appeared long before the 1980s.[T22] Compare my earlier corrections on the history of aviation,[NASC1-2] the telephone,[NASC3] the computer,[NASC4-7] resilient robots,[NASC8] and scientists of the 19th century.[NASC9]

    In 1676, Leibniz published the chain rule of differential calculus, which answers the question of how a network's output changes in response to small changes of its parameters. This answer is used by the technique of gradient descent (GD), apparently first proposed by Augustin-Louis Cauchy in 1847.[MAD86-05] Leibniz also built an early calculating machine, the first with an internal memory.[BL16] He described the principles of binary computers (1679),[L79][L03][LA14][HO66][LEI21,a,b] and his formal Algebra of Thought (1686)[L86][WI48] was deductively equivalent[LE18] to the much later Boolean Algebra (1847).[BOO] He also pursued the goal of answering all possible questions through computation.[WI48] The efficient application of the chain rule to networks (backpropagation) was not published until 1970, as discussed below.[BP1,4,5]
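    The chain rule plus Cauchy-style gradient descent can be shown in a few lines. This is a minimal sketch under illustrative assumptions (one training example, a tanh unit, hand-derived gradient):

        import numpy as np

        x, y = np.array([1.0, 2.0]), 1.0           # one training example
        w = np.zeros(2)

        for step in range(100):
            z = w @ x                               # forward: linear unit + tanh
            err = np.tanh(z) - y
            grad = 2 * err * (1 - np.tanh(z)**2) * x  # chain rule (Leibniz, 1676)
            w -= 0.1 * grad                         # gradient descent (Cauchy, 1847)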

    In 1805, Adrien-Marie Legendre published what is now often called a linear neural network: today, this is also known as linear regression or the method of least squares. In 1958, Frank Rosenblatt combined a linear NN as above with an output threshold function to obtain a pattern classifier (compare his more advanced work on multi-layer networks discussed below); see also Joseph[R61] and Widrow & Hoff. The first non-learning recurrent NN architecture (the Lenz-Ising model) was analyzed by physicists Ernst Ising and Wilhelm Lenz in the 1920s.[L20][I24,I25][K41][W45][T22] It settles into an equilibrium state in response to input conditions, and is the foundation of the first learning RNNs (see below). Binary RNN-like structures were also discussed in 1943 by neuroscientists Warren McCulloch and Walter Pitts[MC43] and formally analyzed in 1956 by Stephen Cole Kleene.[K56]
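    A minimal sketch of both ideas, assuming synthetic data and illustrative names: Legendre-style least squares fits the linear net in closed form, and a Rosenblatt-style classifier merely adds an output threshold:

        import numpy as np

        rng = np.random.default_rng(3)
        X = rng.normal(size=(100, 3))               # inputs
        y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=100)

        # least squares a la Legendre (1805): minimize ||Xw - y||^2
        w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

        # Rosenblatt-style classifier: same linear net plus an output threshold
        classify = lambda x_new: int(x_new @ w_hat > 0)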

    In 1972, Shun-Ichi Amari made the Lenz-Ising recurrent architecture adaptive such that it could learn to associate input patterns with output patterns by changing its connection weights.[AMH1] See also the closely related work of Stephen Grossberg and Kaoru Nakano.

    10 years later, the Amari network was republished (and its storage capacity analyzed).[AMH2] Some called it the Hopfield Network (!) or Amari-Hopfield Network.[AMH3] Amari also published a sequence-processing generalization thereof.[AMH1] In 1948, Alan Turing wrote up related ideas on artificial evolution and learning RNNs. This, however, was first published many decades later,[TUR1] which explains the obscurity of his thoughts here.[TUR21] (Margin note: it has been pointed out that the famous "Turing Test" should actually be called the "Descartes Test."[TUR3,a,b][TUR21])
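    A minimal sketch of such an associative-memory RNN in its modern textbook form (not Amari's exact 1972 rule; sizes and names are illustrative): patterns are stored via an outer-product rule, and recall proceeds by repeated threshold updates until the net settles into an equilibrium:

        import numpy as np

        rng = np.random.default_rng(4)
        P = np.where(rng.normal(size=(3, 16)) > 0, 1.0, -1.0)   # +/-1 patterns

        # outer-product (Hebb-style) storage rule
        W = sum(np.outer(p, p) for p in P) / 16.0
        np.fill_diagonal(W, 0.0)

        # recall: corrupt a stored pattern, then settle toward an equilibrium
        s = P[0].copy()
        s[:4] *= -1.0                                           # flip four bits
        for _ in range(10):
            s = np.where(W @ s >= 0, 1.0, -1.0)                 # threshold update

        print(bool(np.array_equal(s, P[0])))                    # usually True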

    Today, the most popular RNN is the Long Short-Term Memory (LSTM) mentioned below, which has become the most cited NN of the 20th century.

    In 1958, Frank Rosenblatt not only combined linear NNs and threshold functions (see the section on shallow learning since 1800), he also had more interesting, deeper multilayer perceptrons (MLPs).[R58] Their first layer had fixed randomized weights; because only the last layer learned,[DL1] Rosenblatt basically had what much later was rebranded as Extreme Learning Machines (ELMs) without proper attribution.[ELM1-2][CONN21][T22]
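    A minimal sketch of this "random first layer, learned output layer" idea, assuming synthetic data and illustrative names:

        import numpy as np

        rng = np.random.default_rng(5)
        X = rng.normal(size=(200, 4))
        y = np.sin(X).sum(axis=1)                  # a nonlinear target

        # fixed random first layer (never trained), as in Rosenblatt's MLP
        W1 = rng.normal(size=(4, 50))
        H = np.tanh(X @ W1)                        # random hidden features

        # only the output layer learns, here via least squares
        w2, *_ = np.linalg.lstsq(H, y, rcond=None)
        print(np.mean((H @ w2 - y) ** 2))          # small training error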

    MLPs were also discussed in 1961 by Karl Steinbuch[ST61-95] and Roger David Joseph[R61]. See also Oliver Selfridge's multilayer Pandemonium (1959). Rosenblatt (1962) even wrote about "back-propagating errors" in an MLP with a hidden layer,[R62] although he did not yet have a general deep learning algorithm for deep MLPs. What is now called backpropagation is quite different and was first published in 1970 (see below).

    Today, the most popular FNN is a version of the LSTM-based Highway Net (mentioned below) called ResNet,[HW1-3] which has become the most cited NN of the 21st century. In 1965, Ivakhnenko and Lapa published the first deep learning algorithm for MLPs with arbitrarily many hidden layers (whose Kolmogorov-Gabor polynomial activation functions include the now popular multiplicative gates).[DEEP1-2][DL1-2][FDL] A paper of 1971[DEEP2] already described a deep learning net with 8 layers. The term "deep learning" was first introduced to Machine Learning much later by Dechter (1986), and to NNs by Aizenberg et al (2000).[DL2] (Margin note: our 2005 paper on deep learning[DL6,6a] was the first machine learning publication with the word combination "learn deep" in the title.[T22])
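    The Highway/ResNet relation quoted in the references (gates always open: g(x)=t(x)=const=1[HW1-2]) can be made concrete with a minimal sketch; the weight names and the tanh transformation are illustrative assumptions:

        import numpy as np

        rng = np.random.default_rng(8)
        d = 5
        Wh = rng.normal(scale=0.3, size=(d, d))    # transformation weights for h(x)
        Wg = rng.normal(scale=0.3, size=(d, d))    # gate g(x) on the transformation
        Wt = rng.normal(scale=0.3, size=(d, d))    # gate t(x) on the carried input
        sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

        def highway_layer(x):
            # y = g(x) * h(x) + t(x) * x
            return sigmoid(Wg @ x) * np.tanh(Wh @ x) + sigmoid(Wt @ x) * x

        def residual_layer(x):
            # the ResNet special case: gates always open, g(x) = t(x) = 1
            return np.tanh(Wh @ x) + x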

    Ivakhnenko and Lapa (1965, see above) trained their deep nets layer by layer. In 1967-68, however, Shun-Ichi Amari trained deep MLPs in an end-to-end fashion from scratch by stochastic gradient descent (SGD),[GD1] a method proposed in 1951 by Robbins & Monro.[STO51-52]
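    SGD in the Robbins-Monro spirit processes one noisy example at a time with a decreasing step size. A minimal sketch on a linear model (all names and the 1/t schedule are illustrative assumptions):

        import numpy as np

        rng = np.random.default_rng(6)
        w = np.zeros(3)
        w_true = np.array([0.5, -1.0, 2.0])

        for t in range(1, 2001):
            x = rng.normal(size=3)                 # one random example at a time
            y = w_true @ x + 0.1 * rng.normal()    # noisy observation
            grad = 2.0 * (w @ x - y) * x           # gradient on this single example
            w -= grad / t                          # decreasing step size (Robbins-Monro style)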

    Amari's implementation (with his student Saito) learned internal representations in MLPs with multiple modifiable layers, long before the famous experiments of the 1980s.

    See also the related early work of Iakov Zalmanovich Tsypkin on gradient descent-based learning.

    Remarkably, as mentioned above, Amari also published learning RNNs in 1972.[AMH1]

    In 1970, Seppo Linnainmaa was the first to publish what is now known as backpropagation, the reverse mode of automatic differentiation.[BP1]

    In 1982, Paul Werbos proposed to use the method to train NNs,[BP2] extending ideas in his 1974 thesis.

    In 1960, Henry J. Kelley already had a precursor of backpropagation in the field of control theory;[BPA] see also later work of the early 1960s by Stuart Dreyfus and Arthur E. Bryson.[BPB][BPC][R7] Unlike Linnainmaa's method, however, these precursors were not yet the efficient, general reverse mode for arbitrary network-like structures.

    Backpropagation is essentially an efficient way of implementing Leibniz's chain rule for deep networks. Gradient descent uses it to iteratively change the weights such that the NN behaves more and more like some teacher, which could be a human, or another NN,[UN-UN2] or something else. By the 1980s, the required compute had just become accessible in wealthier academic labs. An experimental analysis of the known method[BP1-2] then showed that backpropagation can yield useful internal representations in hidden layers of NNs.[RUM] At least for supervised learning, backpropagation is generally more efficient than Amari's above-mentioned method. In 2010, my team with my postdoc Dan Ciresan showed that deep MLPs can be trained by plain backpropagation[MLP1-2] and do not at all require unsupervised pre-training for important applications.[MLP2]
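    A minimal sketch of reverse-mode backpropagation for a two-layer net, with the chain rule applied by hand from output toward input (one example, illustrative sizes):

        import numpy as np

        rng = np.random.default_rng(7)
        x, y = rng.normal(size=4), 1.0
        W1 = rng.normal(scale=0.5, size=(3, 4))    # hidden layer weights
        w2 = rng.normal(scale=0.5, size=3)         # output weights

        for _ in range(200):
            h = np.tanh(W1 @ x)                    # forward pass
            err = w2 @ h - y
            g_w2 = err * h                         # backward pass: chain rule,
            g_h = err * w2                         # propagated from the output
            g_W1 = np.outer(g_h * (1 - h**2), x)   # back toward the input
            W1 -= 0.1 * g_W1
            w2 -= 0.1 * g_w2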

    Our system set a new performance record[MLP1] on the back then famous MNIST benchmark (GPU usage for NNs was pioneered by Jung & Oh in 2004[GPUNN]). A reviewer called this a "wake-up call to the machine learning community."

    A frequently repeated legend claims that NN research was abandoned after 1969 until "researchers took a fresh look at the problem in the 1980s."[S20] However, the 1969 book[M69] addressed a "problem" of the shallow learning of Gauss & Legendre that had already been solved by the deep learning of Ivakhnenko & Lapa and then also by Amari. Later authors republished closely related methods (such as the Boltzmann machine[BM][HIN][SK75][G63][T22]) without relating them to the original work,[DLC][S20][T22] although the true history is well-known. Deep learning research was alive and kicking in the 1960s-70s, especially outside of the Anglosphere.[DEEP1-2][GD1-3][CNN1][DL1-2][T22] Blatant misattribution and unintentional[PLAG1][CONN21] or intentional[FAKE2] plagiarism are still tainting the entire field of deep learning.[T22] Scientific journals "need to make clearer and firmer commitments to self-correction,"[SV20] as is already the standard in other scientific fields.

    In 1979, Kunihiko Fukushima introduced the basic architecture of the deep convolutional NN, the Neocognitron.[CNN1] He had already introduced rectified linear units (ReLUs) for NNs in 1969.[RELU1] They are now widely used in CNNs and other NNs. The popular downsampling variant called max-pooling was introduced by Yamaguchi et al. for TDNNs in 1990[CNN3a] and by Juyang Weng et al. for higher-dimensional CNNs in 1993.[CNN3] Since 1989, Yann LeCun's team has contributed improvements of CNNs. Baldi and Chauvin (1993) had the first application of CNNs with backpropagation to biomedical/biometric images.[BA93] Computer vision was then revolutionized by fast GPU-based CNNs (Dan Ciresan et al., 2011),[GPUCNN1,3,5] which built on the GPU-CNNs of 2006.[GPUNN][GPUCNN5][GPUCNN] In 2011, DanNet became the first pure deep CNN to win computer vision contests.[GPUCNN2-3,5]
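    The two CNN ingredients just mentioned are tiny operations. A minimal sketch of a ReLU and 2x2 max-pooling on a feature map (names and sizes illustrative):

        import numpy as np

        relu = lambda z: np.maximum(z, 0.0)        # rectified linear unit

        def max_pool_2x2(a):
            # downsample a 2D feature map by taking the max of each 2x2 block
            h, w = a.shape
            return a[:h//2*2, :w//2*2].reshape(h//2, 2, w//2, 2).max(axis=(1, 3))

        fmap = np.arange(16.0).reshape(4, 4) - 8.0
        print(max_pool_2x2(relu(fmap)))            # 2x2 rectified, pooled map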

    Competition — Winner:[GPUCNN5]
    ... — DanNet[DAN,DAN1][R6]
    ... — DanNet[GPUCNN3a]
    ... — DanNet[GPUCNN8]
    ImageNet 2012 — AlexNet[GPUCNN4]
    ... — DanNet[GPUCNN8]
    ImageNet 2014 — VGG Net[GPUCNN9]
    Twitter: @SchmidhuberAI. This is a point-for-point critique of ACM's laudation for the 2018 A.M. Turing Award and of the corresponding Turing Lecture (see Executive Summary and Sec. I, V, II, XII, XIX, XXI, XIII, XIV, XX, XVII). ACM mentions (A) speech recognition, (B) natural language processing, (C) robotics, (D) computer vision, and (VII) medicine, astronomy, materials science (see Sec. A, B, C, D, VII, XVII, VI, XVI), as well as conceptual foundations (see Sec. II, V, XX, XVIII) credited to Dr. LeCun together with Dr. Bengio & Dr. Hinton (see Sec. XVII, I). I respond to LeCun & Bengio & Hinton (LBH) in this structure: Abstract & Outline (~300 words), Introduction (~300 words), Critique of LBH's Turing Lecture, Executive summary of what's wrong with ACM's laudation, 21 comments on 21 claims by ACM (~8,000 words), Conclusion (~2,000 words). All backed up by over 300 references (over 10,000 words).

    As they say: "science is self-correcting."[SV20] Priority claims must be questioned, whether they are mine or other people's, and one must fight plagiarism,[FAKE2] collusion rings,[LIT21] and systemic academic corruption in all of their more and less subtle forms.[FAKE] Sec. 2 introduces LBH and the background of this post.[T20a][R12] The ACM 2018 A.M. Turing Award[R1] is then examined: after the Executive Summary in Sec. 3, Sec. 4 will split ACM's laudation into 21 parts I, II, III, IV, V, VI, VII, VIII, IX, X, XI, XII, XIII, XIV, XV, XVI, XVII, XVIII, XIX, XX, XXI.

    ACM is publishing yet another misleading overview of the field, this time based on LBH's Turing Lecture. LBH claim to "briefly describe the origins of deep learning"[DL3a] without even mentioning the world's first working deep learning nets by Ivakhnenko and Lapa in 1965[DEEP1-2][R8] (see Sec. II). They fail to credit the first really deep feedforward NN[HW1-3] (see Sec. D, VI) and the work that brought essentially unlimited depth to gradient-based supervised recurrent NNs,[LSTM0-17] applied to speech from 2007.[LSTM4,14] LBH cite Hinton (2012) for "dropout" without mentioning that dropout is just a variant of Hanson's earlier stochastic delta rule. They ignore that already in the 1960s, perceptrons learned through stochastic gradient descent[GD1-3] (without reverse mode backpropagation[BP1]), and they omit Fukushima, who introduced ReLUs in 1969[RELU1-2] (see Sec. XIV, XVIII), deep learning MLPs already in 1965[DEEP1-2][R8] (see Sec. II), and the earlier fast weights of von der Malsburg (1981) and Feldman (1982).[FAST,FASTa-b][FWP] They dedicate an extra section to attention-based Transformers,[TR1-6] citing Bengio's team but not the formally equivalent Fast Weight Programmers of 1991. LBH claim that Bengio pioneered neural language models, ignoring our much earlier work on neural models of text compression[SNT] (see Sec. XVI, XVII-1). In summation, LBH have repeatedly chosen to ignore the previous well-known critiques[DLC][HIN][T20a] and deep learning surveys,[DL1-2] and ACM has endorsed this.

    While ACM lauds LBH for foundations of deep learning (e.g., Sec. I), numerous references can be found under the relevant section links I-XXI, which adhere to the sequential order of ACM's claims. Sec. I contains 4 subsections A, B, C, D. A: Speech Recognition (see also Sec. VI & XI & XV): the first superior end-to-end neural speech recognition was ours, not that of Hinton (2012) and Bengio (XV). B: Natural Language Processing (see also Sec. VI & XI & XVI). C: Robotics. D: Computer Vision (see also Sec. XVIII & XIV & XI & VI), where the decisive methods were developed and applied to speech before LeCun's corresponding results, and without unsupervised pre-training (in contrast to Hinton's approach; see Sec. XIV & XI). Sec. XVIII: Fukushima and Waibel (see Sec. D); the first application of CNNs with backpropagation to biomedical/biometric images is due to Baldi and Chauvin.[BA93] Sec. VII: ACM explicitly mentions medicine. Sec. XII & XIX & XXI and XIII & II & V (plus III & IX & X & XX) address misattributed foundations. Sec. XX and XXI: ACM credits LeCun for work that was not his. Sec. XV: ACM credits Bengio for hybrids of NNs and probabilistic models of sequences (see A & B). Sec. XVI and XVII: attention, GANs, and other topics.[R2-R6] The Conclusion summarizes Sec. II & III & V & XII & XIII & XVII & XIV & XIX & XX & XXI. In what follows, I address ACM's claims I through XXI in order.

    LBH and their co-workers have contributed certain useful improvements of existing deep learning methods.[CNN2,4][CDI][LAN][RMSP][XAV][ATT14][CAPS] But the foundations came from others: deep learning MLPs (1965),[DEEP1-2][R8] stochastic gradient descent for multilayer perceptrons (1967),[GD1-3] backpropagation (1970),[BP1-2][R7] architectures of recurrent NNs (1925-56)[I25][MC43][K56] and convolutional NNs (1979),[CNN1] GPU-accelerated NNs (2004),[GPUNN][DAN][DAN1][GPUCNN5] transformer-like[TR1-6][FWP] attention[FWP][ATT] through fast weight programmers, and more.[DL1-2][R2-R8] This may explain some of ACM's errors in Sec. II & III & V & XIII & X & XVII & XII & XVIII & XX. The most valuable NNs in academia and industry[DL4] build on work mentioned by ACM (labeled as A, B, C, D) below.

    (A1) Our recurrent Long Short-Term Memory or LSTM (1990s-2005)[LSTM0-6] overcame the vanishing gradient problem, identified and analyzed by my student Sepp Hochreiter in 1991.[VAN1] This happened long before the similar work of Bengio (see Sec. XVII).[MIR] LSTM was refined with my student Felix Gers.[LSTM2] (A2) Connectionist Temporal Classification is due to my student Alex Graves et al. (2006).[CTC] Our team successfully applied CTC-trained LSTM to speech in 2007[LSTM4] (also with hierarchical LSTM stacks[LSTM14]), unlike the earlier hybrids of NNs and Hidden Markov models (HMMs)[BW][BRI][BOU] (Sec. XV). Hinton et al. (2012) still used the old hybrid approach[HYB12] and did not compare it to CTC-LSTM. (Graves later reused our end-to-end neural speech recognizer[LSTM4][LSTM14] as a postdoc in Hinton's lab.) CTC-LSTM dramatically improved Google's on-device speech recognition[GSR19] (not any longer on the server) (see Sec. VI & XI & XV).

    (B) The first superior end-to-end neural machine translation was also based on LSTM, long after our neural models of text.[SNT] (see Sec. XVI). In 2001, we showed that LSTM can learn languages unlearnable by traditional models such as HMMs.[LSTM13] See also Sec. VI & XI & XV; for the models tailored by Bengio's team, see Sec. XVI.

    (C) Robotics & RL etc. Since 2003, our team has used LSTM for Reinforcement Learning (RL) and robotics.[LSTM-RL][RPG][LSTMPG] For example, in 2018, a PG-trained LSTM was the core of OpenAI's dexterous robot hand. DeepMind's LSTM-based player beat a pro player in the game of Starcraft, which is theoretically harder than Chess or Go[DM2] in many ways, and OpenAI Five learned to defeat human experts in the Dota 2 video game (2018).[OAI2] Apart from A, B, C above, LSTM is used for chemistry, molecular design, lip reading, speech synthesis,[AM16] and much more. Around 2017, about 29% of the inference workload in Google's datacenters was being used for LSTM (only 5% for the CNNs of Sec. D).[JOU17] Apparently the first LSTM journal paper[LSTM1][R5] is now the most frequently cited NN paper of the 20th century.

    (D) Computer Vision was revolutionized in the 2010s by a particular feedforward neural net (NN) called the convolutional NN (CNN).[CNN1-4] The basic CNN architecture with convolutional and downsampling layers is due to Fukushima (1979),[CNN1] who also introduced the now widely used rectified linear units (ReLUs) in 1969.[RELU1] The popular downsampling variant called max-pooling was introduced by Yamaguchi et al. for TDNNs in 1990[CNN3a] and by Weng et al. for higher-dimensional CNNs in 1993.[CNN3] Since 1989, LeCun's team has contributed improvements. Finally, my own team showed in 2010[MLP1] that plain backpropagation suffices to train deep NNs, contrary to claims by Hinton,[VID1] who said that "nobody in their right mind would ever suggest" this. Then our DanNet, building on the GPU-CNNs of 2006,[GPUCNN] won 4 important computer vision competitions in a row (15 May 2011, 6 Aug 2011, 1 Mar 2012, 10 Sep 2012).[GPUCNN5] The CVPR paper on DanNet[GPUCNN3] preceded the similar work of Hinton's students. Our CNN image scanners were 1000 times faster than previous methods.[SCAN] The VGG network (ImageNet 2014 winner)[GPUCNN9] and other highly cited CNNs[RCNN1-3] continued this line, as did ResNet, the ImageNet 2015 winner[HW2] (Dec 2015), currently the most cited NN of the 21st century. See also Sec. XVIII & XIV & XI & VI.

    Sec. II: The first non-learning recurrent NN (RNN) architecture (the Lenz-Ising model) was analyzed by physicists in the 1920s.[L20][I25][K41][W45] Binary RNN-like structures were also discussed in 1943 by McCulloch and Pitts[MC43] and formally analyzed in 1956 by Kleene.[K56] In 1972, Amari reused the Lenz-Ising model to build a learning RNN, later sometimes called the Hopfield network or Amari-Hopfield Network.[AMH1-3] Turing had unpublished related ideas on artificial evolution[TUR1] and learning machines. NNs with a single adaptive layer learned in 1958[R58] (see also Joseph[R61] and Widrow & Hoff); shallow learning by regression and the method of least squares[DL1-2] is much older. Deeper multilayer perceptrons (MLPs) were discussed by Steinbuch[ST61-95] (1961), Joseph[R61] (1961), and Rosenblatt[R62] (1962), who wrote about "back-propagating errors" in an MLP with a hidden layer,[R62] but did not yet have a general deep learning algorithm for deep MLPs (what's now called backpropagation is quite different and was first published in 1970[BP1-2][R7]). Compare also Selfridge's Pandemonium. Fukushima's deep convolutional NN architecture was first introduced in the 1970s;[CNN1] his very popular ReLU already in 1969.[RELU1-2] See Sec. XIII, III, V, VIII, IX, and X.

    A frequently repeated legend was spread by LBH & co-authors, e.g., Sejnowski[S20] (see Sec. XIII). It goes more or less like this: "In 1969, Minsky & Papert[M69] showed that shallow NNs are very limited and the field was abandoned until a new generation of neural network researchers took a fresh look at the problem in the 1980s."[S20] However, as mentioned above, the 1969 book[M69] addressed a "problem" of Gauss & Legendre's shallow learning (circa 1800) that had already been solved by Ivakhnenko & Lapa's deep learning (1965) and then also by Amari's SGD for MLPs (but see a 1989 paper[MOZ]). See Sec. 1 of the overview.[MIR] Our "Very Deep Learning" methods of the early 1990s handled tasks of depth > 1000.[UN2][DL1][UN] (By 2003, LSTM variants successfully dealt with language problems of depth up to 30,000.[LSTM17]) See also Sec. III. Note that the foundations were laid long before LBH:[DLC][DEEP1-2][BP1][DL1-2][R7-R8][R2-R4] deep learning multilayer perceptrons (1965),[DEEP1-2][R8] stochastic gradient descent for multilayer perceptrons (1967),[GD1-3] backpropagation (1970),[BP1,2][R7] architectures of recurrent NNs (1925-56)[I25][MC43][K56] and convolutional NNs (1979),[CNN1] GPU-accelerated NNs (2004),[GPUNN][GPUCNN5] and other foundations.[DL1-2][R2-R8] See Sec. II & V & XIII & IX & X & XVII & XII & XVIII & XX & I. Compare also deeplearning.net, which until 2019 advertised deep learning as "moving beyond shallow machine learning since 2006",[DL7] referring to Hinton's and Bengio's pre-training methods of 2006 (see Sec. II & XVII (5)), not to mention Ivakhnenko, whom Hinton,[UN4] Bengio,[UN5] and LBH[DL3,DL3a] did not cite either. See Sec. X. My comments systematically track the sequential order of ACM's claims.

    ACM's historical overview needs corrections. Much of early AI in the 1940s-70s was actually about theorem proving.[ZU48][NS56] In 1936, Turing introduced the Turing Machine[TUR] and rederived the above-mentioned result.[CHU][TUR][HIN][GOD21,21a][TUR21][LEI21,21a] In the same year of 1936, Emil Post published yet another independent universal model of computing.[POS] (When I point out such facts, some accuse me of "attacking" famous researchers, without suggesting any fact-based corrections.[HIN]) Gödel also identified the open problem "P=NP?" in his famous letter to John von Neumann (1956).[GOD56][URQ10] Zuse's patent application of 1936[ZU36-38][Z36][RO98][ZUS21] described program-controlled computing, predating Claude Shannon's 1937 thesis on digital circuit design. Zuse also created the first high-level programming language in the early 1940s;[BAU][KNU] his early machines lacked only the conditional jump instruction.[RO98]

    Again, the foundations were laid by others: MLPs that learn internal representations (1965),[DEEP1-2][R8] stochastic gradient descent for multilayer perceptrons (1967),[GD1-3] backpropagation (1970),[BP1,2][R7] architectures of recurrent NNs (1925-56)[I25][MC43][K56] and convolutional NNs (1979),[CNN1] GPU-accelerated NNs (2004),[GPUNN][GPUCNN5] deep NNs trained by plain backpropagation (2010),[MLP1-2] transformer-like[TR1-6][FWP] attention[FWP][ATT] through fast weight programmers, and more.[DL1-2][R2-R8] See Sec. II & I & III & XIII & X & XVII & XII & XVIII & XX.

    Superhuman computer vision was first achieved by our group in 2010-2011.[MLP1-2][DAN][DAN1][GPUCNN5][R6] Baldi and Chauvin (1993) had the first application of CNNs with backpropagation to biomedical/biometric images.[BA93] Our DanNet won medical imaging contests (Sept 2012, on cancer detection)[GPUCNN5,8] and we were able to greatly improve steel defect detection.[ST] All of this happened before the similar GPU-accelerated AlexNet of Hinton's team, and led to the first deep learners in medical diagnosis through mitosis detection.[MGC][GPUCNN5,8] See Sec. D & XI. LBH have often used such methods without citing them.[DL1][DLC][HIN][R2-R4][R7-R8] See Sec. V & XII & XIX & II & III & XIII & XVII & X & I, and compare earlier critiques of such work.[HIN][DLC][DL1-2][DEEP1-2][RELU1-2][R7-R8] See Sec. II & III & XIII & V & X & XIV & I.

    The term "deep learning" was first introduced to Machine Learning by Dechter (1986), and to NNs by Aizenberg et al (2000).[DL2] To my knowledge, LBH have never cited them. (Margin note: our 2005 paper on deep RL[DL6,6a] was the first machine learning publication with the phrase "learn deep" in the title.) LBH started talking about "deep learning ... moving beyond shallow machine learning since 2006",[DL7] referring to their unsupervised pre-training methods of 2006. See Sec. III, and Sec. II & III & XIII & V & I on the pioneering work ignored by LBH (see also Sec. V & II & III & I & XIII & XII & XIX & X & XVII).

    ACM correctly mentions advancements through GPUs. However, the first to use GPUs for NNs were Jung & Oh (2004),[GPUNN][GPUCNN5] and it was our group that set an important benchmark record in 2010,[MLP1-2] showing that plain backpropagation suffices to train deep NNs, contrary to Hinton's claims. Our DanNet achieved superhuman vision (explicitly mentioned by ACM) for the first time[R6] (see Sec. D). The speech recognition advances (explicitly mentioned by ACM) were actually dominated by LSTM and CTC of our team,[LSTM1-4][CTC] which, as mentioned in Sec. A, outperformed traditional methods such as HMMs.[BW][BOU][BRI][HYB12] As mentioned in Sec. B and XVI, the first superior end-to-end neural machine translation was also based on LSTM.

    ACM credits backpropagation to Rumelhart et al. (1985-86)[RUM] rather than to Werbos, who applied it to NNs in 1982.[BP2] And the article[RUM] even failed to mention Linnainmaa, the inventor of this famous algorithm for credit assignment in networks (1970),[BP1] while Kelley already had a precursor thereof in the field of control theory;[BPA] see also later work of the early 1960s.[BPB][BPC][R7] Rumelhart et al. demonstrated experimentally that backpropagation can yield useful internal representations in hidden layers of NNs,[RUM] but this was essentially just an experimental analysis of a known method.[BP1-2] A compact history of backpropagation can be found at Scholarpedia[DL2] and in my award-winning survey.[DL1] Also see Sec. XIX, II.

    Some claim that "backpropagation is just the chain rule of Leibniz (1676) & L Hinton[AOI] Rumelhart[RUM] with the "invention" of backpropagation. for "creating" the method and for other things he didn Neither in a popular book[AOI] nor in other recent work[DL3,DL3a] did he cite Linnainmaa (1970),[BP1] the true creator.[BP4-5] that his 2015 survey[DL3] does cite Werbos (1974) who however described the method correctly only later in 1982[BP2] and also failed to cite Linnainmaa.[BP1] Compare the 1967-68 work of Amari:[GD1-3] to my knowledge the first to propose and implement stochastic gradient descent[STO51-52] reverse mode gradient descent method now known as backpropagation[BP1]); see also Tsypkin Linnainmaa It wasn one person who published first[BP1] and therefore should get the credit. Boltzmann Machine (BM)[BM] a learning.[HIN] Recently, however, I learnt through a reader that even the BM paper[BM] did not cite prior relevant work by Sherrington & Kirkpatrick[SK75] and Glauber.[G63] (Compare related work.[H86][H88][S93]) multilayer perceptrons with arbitrarily many layers.[DEEP1-2][HIN] Sec. II V &

    As mentioned in Sec. II, Sejnowski's version of history claims the field was dormant until "researchers took a fresh look at the problem in the 1980s."[S20] However, the 1969 book[M69] addressed a "deep learning problem" (a limitation of Gauss & Legendre's shallow learning) that had already been solved, and deep learning research was alive also in the 1970s, especially outside of the Anglosphere.[DEEP2][GD1-3][CNN1][DL1-2]

    Dropout is actually a variant of Hanson's stochastic delta rule, as we showed already in 2011 in a contest where LeCun's team participated (see Sec. D above). Back then, the decisive factor was the GPU acceleration of deep CNNs.[GPUCNN1,3,5][R6] Already before ImageNet 2012,[R6] our DanNet had a monopoly on winning computer vision competitions.[GPUCNN5] It more than "halved the error rate for object recognition" (ACM's words) long before AlexNet. See Sec. D.

    Hybrids of NNs and HMMs for speech date back to the late 1980s.[BW][BRI][BOU] Our LSTM (1990s-2005)[LSTM0-6] and CTC[CTC] (2006) were applied to speech in 2007.[LSTM4][LSTM14] CTC-LSTM is end-to-end-neural and thus very different from (and superior to) the hybrid methods since the late 1980s.[BW][BRI][BOU][HYB12] See also Sec. A.

    ACM credits Bengio for neural probabilistic language models. But 5 years earlier, in 1995, we already had a similar, excellent neural probabilistic text model,[SNT] which Bengio's paper[NPM] characterizes only briefly as "related" (see also Pollack's earlier work). Bengio's work helped to further improve Facebook's machine translation, but today's attention-based Transformers[TR1-6] build on my FWP of 1991[FWP0-1] (now often called keys and values for self-attention).[TR1-6][FWP] Transformers[TR1-2] excel in a traditional LSTM domain (see Sec. B), although there are tasks that LSTM can rapidly learn to solve quickly[LSTM13,17] while plain Transformers cannot. Linear Transformers or Performers[TR5-6] are formally equivalent to my 1991 FWPs (apart from normalization).[FWP6][FWP] In 1993, I introduced the attention terminology now used in this context,[ATT] and RNNs that program themselves. Compare also Sec. XVII: the reviewer of my 1990 paper[ATT2] later published related ideas as his own work.[ATT3]

    GANs[GAN0-1] (2010-2014) are actually instances of my adversarial Artificial Curiosity of 1990[AC90,90b][AC20] (see also surveys[AC09-10]). This principle is now widely used for exploration in RL (e.g., Sec. C) and for image synthesis[GAN1] (also mentioned by ACM in Sec. XVIII). A predictor NN minimizes its error, while the generator NN tries to make outputs that maximize this error: one net's loss is the other's gain. (Certain even earlier adversarial machine learning settings[S59][H90] neither involved unsupervised NNs nor were about modeling data nor used gradient descent.[AC20]) Bengio et al. neither cited the original work[AC90,90b][AC20] nor corrected their erroneous claims[GAN1] about it. According to Bloomberg,[AV1] their NIPS 2014 paper[GAN1] made erroneous claims about my prior work, including Predictability Minimization.[AC20] Goodfellow eventually admitted that PM is adversarial (his paper[GAN1] still claims the opposite), but emphasized that it's different. When the authors[GAN1] did not publish a correction, I published one myself in the hopes of correcting the annals of history:[AC20] GANs are instances of my earlier work.[R2][AC20]

    The priority dispute on the vanishing gradient problem was settled in favor of Sepp.[VAN1] However, even after a common publication,[VAN3] Bengio published papers[VAN4][XAV] without appropriate citation. Citation counts are poor indicators of truly pioneering work.[NAT1] (Margin note: Bengio states[YB20] certain things about 2018; if one has made a mistake, one must at least clarify it later.[DLC]) Bengio also claims[YB20] that in 1995 he pioneered ideas whose roots date back to 1991-93[UN0-2][UN] and to the metalearning line of research which I started in 1987,[META1][META] long before Bengio, who claimed at NeurIPS 2019 that he did it before me.[R3] Bengio has also heavily used our LSTM (see Sec. A-C), introducing the name "gated recurrent units (GRU)"[LSTMGRU] for a variant of our vanilla LSTM architecture[LSTM2] (2000) which he did not cite, although our work[LSTM2] was the one that introduced gated recurrent units. In addition, our team automatically evolved lots of additional LSTM variants and topologies already in 2009[LSTM7] without changing the name of the basic method.
    (GRUs can neither learn to count[LSTMGRU2] nor learn simple non-regular languages;[LSTMGRU2] they also underperform LSTM according to Google Brain.[LSTMGRU3]) Hinton's unsupervised pre-training of deep NNs (2006) echoes my much earlier work on this[UN0-2] (see Sec. II above).[UN] It was published in 1991-92,[UN1] when compute was about 1000 times more expensive than in 2006; his survey (2015)[DL3][DLC] ignored it, too. See also Sec. II & III. Hinton's "distillation"[DIST2] (2006) did not cite my much earlier original work on this (1991),[UN1][UN] not even in his later patent application. And regarding Hinton's attention-related work[ATT3] (2010): he was both reviewer and editor of my summary[ATT2] (1990; see Sec. XVI above).

    The ten priority disputes mentioned in the present Sec. XVII are not the only ones.[R4] Remarkably, three of them are related to the 1991 paper[UN1][UN] which in many ways started what people now call deep learning. Most of them go back to work of 1990-91.[MIR] See Sec. I for additional related issues of credit assignment.

    ACM credits LeCun for CNNs and their medical applications. But all of this happened before LeCun's corresponding results: in a 2011 contest, LeCun's team achieved three times worse performance than our DanNet.[DAN1] Again see Sec. D. Baldi and Chauvin (1993) had the first application of CNNs with backpropagation to biomedical/biometric images.[BA93] DanNet won the medical imaging contests (Sept 2012, on detection of mitosis/cancer)[GPUCNN5,7,8] (before the similar AlexNet won ImageNet 2012[GPUCNN5][R6] and the similar VGG network[GPUCNN9] won ImageNet 2014), achieving the first deep learning-based medical diagnosis through mitosis detection.[MGC][GPUCNN5,7,8] Many major companies are using it now. See Sec. D & VII. ACM also explicitly mentions speech recognition and speech synthesis,[AM16][DL1] addressed in Sec. A, B, VI, XI.

    ACM further credits LeCun for backpropagation-related contributions, echoed in recent work.[DL3,DL3a][DLC] However, in 1960, Kelley already had a precursor of the algorithm,[BPA] and many besides LeCun have worked "to speed up backpropagation algorithms"[DL1] (ACM's words). Moreover, "hierarchical feature representation" in deep learning networks is what Ivakhnenko & Lapa (1965)[DEEP1-2] and Amari[GD1-2] (and also Fukushima[CNN1][DL2]) had long before LeCun. See Sec. D & II & XIII & V. LeCun et al. neither cited the origins[BP1] (1970) of this widely used type of automatic differentiation for differentiable networks of modules[DL2][BP4-5][DLC] nor the earlier formal work on such systems,[S80] nor Pollack's related earlier work. See also Sec. XIX & XII.

    (Furthermore, "complex networks of modules where backpropagation is performed" were the central theme of my much earlier habilitation thesis (1993).[UN2] For example, our see "100 Authors against Einstein."[AH1] "If you cannot dispute a fact-based message, attack the messenger himself."[HIN] Science has a well-established way of dealing with plagiarism (which may be unintentional[PLAG1][CONN21] or not[FAKE2]) award can ever change that.[HIN] and their co-workers have contributed useful improvements of deep learning methods.[CNN2,4][CDI][LAN][RMSP][XAV][ATT14][CAPS] whom they did not cite, in contrast to ACM II, V, XII, XIX, XXI, XIII, XIV, XI, and XX, and 2). Sec. I, A, B, C, D, XVII, VI, and XVI). to self-correction,"[SV20] as is already the standard in other scientific fields. in popular science venues without peer review? For example, the narrator of a popular 2018 Bloomberg video[VID2] Germany and Switzerland (LSTM & CTC; see Sec. A) long before Hinton Google on Google Translate[WU] mentions LSTM over 50 times (see Sec. B). In ad hominem style,[AH2-3] claiming credit he doesn LeCun also called the GANs of Bengio of my work in 1990.[AC90,90b][AC20][R2] According to Bloomberg,[AV2] Bengio has simply "denied my claims" without backing up his denial by any facts; see Sec. XVII. and forcefully contradict public figures who promote it."[FAKE] Our LSTM paper[LSTM1] has got more citations than any paper by Bengio or LeCun,[R5] Hinton deep NNs (2010)[MLP1] [UN][UN0-3] and later championed by Hinton;[UN4][VID1] see Sec. D). Hinton (2012)[GPUCNN4] characterizes AlexNet won one;[R6] see Sec. D, XIV. The highly cited VGG network (2014)[GPUCNN9] Hinton of Hinton for a book by Rumelhart & McClelland[R5]). method[BP1] whose origins of Ivakhnenko whom he has never cited;[DEEP1-2][R7-R8] see Sec. II, XIII. Bengio (1990)[AC90,90b][AC20][R2] which he did not cite; see Sec. XVII. Hinton were preceded by Hanson As recently as of 2021, ACM published yet another misleading deep learning "survey" by LBH,[DL3a] again heavily citing LBH without Consult the Executive Summary and Sec. I-XXI of this critique for more. have their conceptual and technical roots in my labs in Munich and Lugano,[MOST] of deep learning MLPs since 1965[DEEP1-2][GD1-2a] (see Sec. II, XX) and backpropagation (1960-70)[BPA][BP1] (see Sec. XIX, XII) and convolutional NNs since 1979[CNN1-4] (see Sec. XVIII, D). Our LSTM (1990s, see Sec. A, B; also for RL, 2003-, see Sec. C) → our Highway Net (May 2015) → ResNet (Dec 2015, see Sec. D). Our adversarial Artificial Curiosity (1990) → GANs (2010s, see Sec. XVII). our own unsupervised pre-training of deep NNs (1991, see Sec. II & III) for recurrent NNs in the 1990s → our LSTM (see Sec. A-C) and for feedforward NNs in 2010 → our DanNet (2011) → AlexNet (2012); VGG Net (2014) (see Sec. D). superior computer vision (2011, see Sec. D, XVIII), speech recognition (with our CTC, 2007-15, see Sec. A), machine translation (2016, see Sec. B), robotics & video game players (2018-19, see Sec. C), Fast Weight Programmers (1991, see Sec. XVI) are formally equivalent to linear Transformers (now popular in NLP). I, A, B, C, D, VII, XVIII. depth that really learned.[DEEP1-2][R8] Soon afterwards, multilayer perceptrons learned internal representations through stochastic gradient descent in Japan.[GD1-2a] A few years later, modern unintentional[PLAG1][CONN21] or intentional.[FAKE2]

    Yes, this critique is also an implicit critique of certain other awards to LBH.[HIN] Earlier versions of it were discussed on reddit.com/r/MachineLearning[R1-R12] (the largest machine learning forum, with back then over 800k subscribers), in many threads influenced by my overview.[MIR]

    Dr. LeCun himself is well aware of the challenges to scientific integrity in our field: see his own "A New Publishing Model in Computer Science."[LECP] Compare the case of Rosenblatt, whose MLPs had a non-learning first layer with randomized weights and an adaptive output layer.[R62] So Rosenblatt basically had what much later was rebranded as Extreme Learning Machines (ELMs)[ELM1] without proper attribution; the revisionist narrative of ELMs[ELM2][CONN21] parallels that of the self-proclaimed "deep learning conspiracy."[DLC1-2]

    Note that I am insisting on proper credit assignment not only in my own research field but also in quite disconnected areas,[HIN] as demonstrated by my numerous letters in this regard published in Science and Nature, e.g., on the history of aviation,[NASC1-2] the telephone,[NASC3] the computer,[NASC4-7] resilient robots,[NASC8] and scientists of the 19th century.[NASC9]

    This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. Thanks to those who commented; see also the arXiv page.

    Selected references (entries recoverable from the original list; links omitted):
    [AC20] J. Schmidhuber. Preprint arXiv/1906.04493. With a brief summary of the generative adversarial neural networks of 1990.[AC90,90b]
    ACM Code of Ethics and Professional Conduct. Association for Computing Machinery (ACM), 2018.
    [AMH1] S.-I. Amari (1972). First publication of what was later sometimes called the Hopfield network[AMH2] or Amari-Hopfield Network.[AMH3]
    [ATT14] Preprint arXiv/1409.0473, 2014-16.
    [AV1] Bloomberg, May 15, 2018. [AV2] Bloomberg, May 17, 2018.
    [BP2] P. J. Werbos (1982). First application of backpropagation[BP1] to NNs (concretizing thoughts in his 1974 thesis).
    [CNN1a] A. Waibel. Phoneme Recognition Using Time-Delay Neural Networks. Meeting of IEICE, Tokyo, Japan, 1987. First application of backpropagation[BP1][BP2] and weight-sharing to a convolutional architecture.
    [CONN21] Comments on version 1 of the present report[T21v1] in the Connectionists Mailing List (since November 2021), perhaps the oldest mailing list on artificial neural networks.
    [DAN1] DanNet: 1st superhuman result in 2011; this led to massive interest from industry.
    [DL3a] Y. Bengio, Y. LeCun, G. Hinton (2021). Turing Lecture: Deep Learning for AI. Communications of the ACM, July 2021.
    [DL7] Web site deeplearning.net of Y. Bengio's lab (Internet Archive), which until 2019 advertised deep learning as "moving beyond shallow machine learning since 2006."
    [FWP] Fast Weight Programmers[FWP0-6,FWPMETA1-7] (1991-): an alternative[FWP0-1] to recurrent NNs that can learn to memorize past data, e.g., by computing fast weight changes through additive outer products of self-invented activation patterns[FWP0-1] (now often called keys and values for self-attention[TR1-6]).
    [FWP6] Linear Transformers Are Secretly Fast Weight Programmers. ICML 2021. Preprint: arXiv:2102.11174.
    [FWP7] Preprint: arXiv:2106.06295 (June 2021).
    [GAN0] Blog post, Internet Archive, 2010, describing the basic ideas[AC][AC90,AC90b][AC20] of GANs.
    [GAN1] NIPS 2014 paper on GANs that does not cite the original work of 1990[AC][AC90,AC90b][AC20][R2] (also containing wrong claims about Predictability Minimization[PM0-2][AC20]).
    [GPUCNN1] Flexible, High Performance Convolutional Neural Networks for Image Classification. International Joint Conference on Artificial Intelligence (IJCAI-2011, Barcelona), 2011.
    [GSR19] Google's greatly improved, CTC-based on-device speech recognition (on the phone, not the server), 2019.
    [HW1] Highway networks. Preprints arXiv:1505.00387 (May 2015) and arXiv:1507.06228 (July 2015); also at NIPS 2015. Essentially the LSTM with forget gates[LSTM2] for FNNs instead of RNNs. ResNets[HW2] are a version of this where the gates are always open: g(x)=t(x)=const=1. Highway Nets perform roughly as well as ResNets[HW2] on ImageNet;[HW3] highway layers are also often used for natural language processing, where the simpler residual layers do not work as well.[HW3]
    [HW2] Residual nets. arXiv:1512.03385 (Dec 2015). Residual nets are a version of Highway Nets.[HW1]
    [HW3] arXiv:1612.07771 (2016); also at ICLR 2017.
    [JOU17] In-datacenter performance analysis of a tensor processing unit. Preprint arXiv:1704.04760.
    [LAN] Layer normalization. Preprint arXiv:1607.06450, 2016.
    [LECP] Y. LeCun. A New Publishing Model in Computer Science.
    [LIT21] On collusion rings, 19/5/2021.
    [LSTM1] S. Hochreiter, J. Schmidhuber. Long Short-Term Memory. Neural Computation, 9(8):1735-1780, 1997. Based on [LSTM0].
    [LSTMGRU2] On the practical computational power of finite precision RNNs. Preprint arXiv:1805.04908.
    [LSTMGRU3] Exploration of neural machine translation architectures. Preprint arXiv:1703.03906.
    [MLP1] Deep, big, simple neural nets for handwritten digit recognition. Neural Computation 22(12):3207-3220, 2010. By 2010, when compute was 100 times more expensive than today, our feedforward NNs[MLP1] set records by plain backpropagation.
    [NASC1-9] Letters in Nature and Science on the history of technology, e.g.: Correspondence, Nature, vol 483, p 541, March 2012, doi:10.1038/483541b; Letter, Science, vol 336, p 1639, June 2012; see also the comment on the response by A. Hodges (DOI:10.1126/science.336.6089.1639-a).
    [OAI1] Learning Dexterous In-Hand Manipulation. [OAI2] arxiv:1912.06680.
    [PM1] Predictability minimization. TR CU-CS-565-91, Univ. Colorado at Boulder, 1991.
    [R1] Reddit/ML, 2019. Hinton, LeCun, Bengio receive ACM Turing Award.
    [R2] Reddit/ML, 2019. J. Schmidhuber really had GANs in 1990.
    [R3] Reddit/ML, 2019. NeurIPS 2019 Bengio Schmidhuber Meta-Learning Fiasco.
    [R4] Reddit/ML, 2019. Five major deep learning papers by G. Hinton did not cite similar earlier work by J. Schmidhuber.
    [R5] Reddit/ML, 2019. The 1997 LSTM paper by Hochreiter & Schmidhuber has become the most cited deep learning research paper of the 20th century.
    [R6] Reddit/ML, 2019. DanNet, the CUDA CNN of Dan Ciresan in J. Schmidhuber's team.
    [R7] Reddit/ML, 2019. J. Schmidhuber on Seppo Linnainmaa, inventor of backpropagation in 1970.
    [R8] Reddit/ML, 2019. J. Schmidhuber on Alexey Ivakhnenko, godfather of deep learning 1965.
    [R9] Reddit/ML, 2019.
    [R11] Reddit/ML, 2020. Schmidhuber: Critique of Honda Prize for Dr. Hinton.
    [R12] Reddit/ML, 2020. J. Schmidhuber: Critique of Turing Award for Drs. Bengio & Hinton & LeCun.
    [R15] Reddit/ML, 2021.
    [UN1] J. Schmidhuber, 1992. Based on TR FKI-148-91, TUM, 1991.[UN0]
    [UN2] J. Schmidhuber. Habilitation thesis, TUM, 1993. (Credit assignment across nets of depth > 1000.)
    [VAN1] S. Hochreiter. Untersuchungen zu dynamischen neuronalen Netzen (Investigations of dynamic neural networks). Diploma thesis, TUM, 1991 (advisor J. Schmidhuber).
    [VAN4] Y. Bengio. Neural net language models. Scholarpedia, 3(1):3881, 2008.
    [VID1] G. Hinton, Youtube video [see 28:16]. But in 2010, our team showed[MLP1-2] that plain backpropagation trains deep NNs.
    [VID2] Youtube video, 2018.
    [WU] Google's neural machine translation. Preprint arXiv:1609.08144, 2016. Based on LSTM, which it mentions at least 50 times.



    The ten priority disputes mentioned in the present Sec. XVII are not on the only ones.[R4] Remarkably, three of them are related to the 1991 paper[UN1][UN] which in many ways started what people now call deep learning, going beyond Most of them go back to work of 1990-91.[MIR] See Sec. I for additional related issues of credit assignment. LeCun All of this happened before LeCun three times worse performance).[DAN1] Again see Sec. D. (Sept 2012, on detection of mitosis/cancer)[GPUCNN5,7,8] (before the similar AlexNet won ImageNet 2012[GPUCNN5][R6] and the similar VGG network[GPUCNN9] won ImageNet 2014). mitosis detection.[MGC][GPUCNN5,7,8] Many major companies are using it now. See Sec. D & VII. ACM also explicitly mentions speech recognition, speech synthesis,[AM16][DL1] Sec. A, B, VI, XI. recent work.[DL3,DL3a][DLC] In 1960, Kelley already had a precursor of the algorithm.[BPA] Furthermore, many besides LeCun have worked "to speed up backpropagation algorithms"[DL1] (ACM However, "hierarchical feature representation" in deep learning networks is what Ivakhnenko & Lapa (1965)[DEEP1-2] (and also Fukushima[CNN1][DL2]) had long before LeCun. See Sec. D & II & XIII & V. LeCun et al. neither cited the origins[BP1] (1970) of this widely used type of automatic differentiation for differentiable networks of modules[DL2][BP4-5][DLC] for such systems.[S80] See also Sec. XIX & XII. before LeCun who did not cite them. See also Pollack

    (Furthermore, "complex networks of modules where backpropagation is performed" were the central theme of my much earlier habilitation thesis (1993).[UN2] For example, our see "100 Authors against Einstein."[AH1] "If you cannot dispute a fact-based message, attack the messenger himself."[HIN] award can ever change that.[HIN] and their co-workers have contributed useful improvements of deep learning methods.[CNN2,4][CDI][LAN][RMSP][XAV][ATT14][CAPS] whom they did not cite II, V, XII, XIX, XXI, XIII, XIV, XI, and XX, and 2). Sec. I, A, B, C, D, XVII, VI, and XVI). to self-correction,"[SV20] as is already the standard in other scientific fields. in popular science venues without peer review? For example, the narrator of a popular 2018 Bloomberg video[VID2] Germany and Switzerland (LSTM & CTC; see Sec. A) long before Hinton Google on Google Translate[WU] mentions LSTM over 50 times (see Sec. B). In ad hominem style,[AH2-3] claiming credit he doesn LeCun also called the GANs of Bengio of my work in 1990.[AC90,90b][AC20][R2] According to Bloomberg,[AV2] Bengio has simply "denied my claims" without backing up his denial by any facts; see Sec. XVII. and forcefully contradict public figures who promote it."[FAKE] Our LSTM paper[LSTM1] has got more citations than any paper by Bengio or LeCun,[R5] Hinton deep NNs (2010)[MLP1] [UN][UN0-3] and later championed by Hinton;[UN4][VID1] see Sec. D). Hinton (2012)[GPUCNN4] characterizes AlexNet won one;[R6] see Sec. D, XIV. The highly cited VGG network (2014)[GPUCNN9] Hinton of Hinton for a book by Rumelhart & McClelland[R5]). method[BP1] whose origins of Ivakhnenko whom he has never cited;[DEEP1-2][R7-R8] see Sec. II, XIII. Bengio (1990)[AC90,90b][AC20][R2] which he did not cite; see Sec. XVII. Hinton were preceded by Hanson As recently as of 2021, ACM published yet another misleading deep learning "survey" by LBH,[DL3a] again heavily citing LBH without Consult the Executive Summary and Sec. I-XXI of this critique for more. have their conceptual and technical roots in my labs in Munich and Lugano,[MOST] of deep learning MLPs since 1965[DEEP1-2] (see Sec. II, XX) and backpropagation (1960-70)[BPA][BP1] (see Sec. XIX, XII) and convolutional NNs since 1979[CNN1-4] (see Sec. XVIII, D). Our LSTM (1990s, see Sec. A, B; also for RL, 2003-, see Sec. C) → our Highway Net (May 2015) → ResNet (Dec 2015, see Sec. D). Our adversarial Artificial Curiosity (1990) → GANs (2010s, see Sec. XVII). our own unsupervised pre-training of deep NNs (1991, see Sec. II & III) for recurrent NNs in the 1990s → our LSTM (see Sec. A-C) and for feedforward NNs in 2010 → our DanNet (2011) → AlexNet (2012); VGG Net (2014) (see Sec. D). superior computer vision (2011, see Sec. D, XVIII), speech recognition (with our CTC, 2007-15, see Sec. A), machine translation (2016, see Sec. B), robotics & video game players (2018-19, see Sec. C), Fast Weight Programmers (1991, see Sec. XVI) are formally equivalent to linear Transformers (now popular in NLP). I, A, B, C, D, VII, XVIII. depth that really learned.[DEEP1-2][R8] Five years later, modern

    Yes, this critique is also an implicit critique of certain other awards to LBH.[HIN] reddit.com/r/MachineLearning[R1-R12] (the largest machine learning forum with back then over 800k subscribers), many of them influenced by my overview.[MIR]

    Dr. LeCun himself is well aware of the challenges to scientific integrity in our field:[LECP] "... else cites."[LECP]

    Note that I am insisting on proper credit assignment not only in my own research field but also in quite disconnected areas,[HIN] as demonstrated by my numerous letters in this regard published in Science and Nature, e.g., on the history of aviation,[NASC1-2] the telephone,[NASC3] the computer,[NASC4-7] resilient robots,[NASC8] and scientists of the 19th century.[NASC9]

    Creative Commons LicenseThanks to many expert reviewers for useful comments. Since science is about self-correction, let me know under juergen@idsia.ch if you can spot any remaining error. Many additional relevant publications can be found in my arXiv page. This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. PDF. PDF. PDF. IEEE link. With a brief summary of the generative adversarial neural networks of 1990[AC90,90b][AC20] Preprint arXiv/1906.04493. Link. Link. Blog of Werner Vogels, CTO of Amazon (Nov 2016): PDF. arXiv/1409.0473, 2014-16. Bloomberg, May 15, 2018. Bloomberg, May 17, 2018. PDF. PDF. Link. PDF. First application of backpropagation[BP1] to NNs (concretizing thoughts in his 1974 thesis). More.[DL2] English version: [CNN1+]. More in Scholarpedia. Link. [CNN1a] A. Waibel. Phoneme Recognition Using Time-Delay Neural Networks. Meeting of IEICE, Tokyo, Japan, 1987. First application of backpropagation[BP1][BP2] and weight-sharing PDF. Spatial Averaging.[CNN1] PDF. Beijing, 2014. Preprint arXiv:1402.3511 [cs.NE]. 1st superhuman result in 2011.[DAN1] [DIST1] J. Schmidhuber, 1991.[UN-UN2] Deep Learning. HTML. [DL3a] Y. Bengio, Y. LeCun, G. Hinton (2021). Turing Lecture: Deep Learning for AI. Communications of the ACM, July 2021. HTML. greatly improved (CTC-based) on-device speech recognition (on the phone, not the server) PDF. Web site deeplearning.net of Y. Bengio Internet Archive), referring to Hinton unsupervised pre-training for deep NNs[UN4] (2006) although II & XVII & III. arxiv:1312.5602. Link. arXiv:1808.03578, 2018. over 4 billion automatic translations per day (The Verge, August 4, 2017); Facebook blog by J.M. Pino, A. Sidorov, N.F. Ayan (August 3, 2017) alternative[FWP0-1] to recurrent NNs. the fast weights[FAST,FASTa] of Such Fast Weight Programmers[FWP0-6,FWPMETA1-7] can learn to memorize past data, e.g., by computing fast weight changes through additive outer products of self-invented activation patterns[FWP0-1] (now often called keys and values for self-attention[TR1-6]). The similar Transformers[TR1-2] combine this with projections linear Transformers or Performers[TR5-6] In 1993, I introduced in this context,[ATT] and RNNs that program themselves. PDF. PDF. Preprint: arXiv:1811.12143. PDF. PDF. Like [FWP0-2]. Preprint: arXiv:2003.08165. PDF. Linear Transformers Are Secretly Fast Weight Programmers. ICML 2021. Preprint: arXiv:2102.11174. Preprint: arXiv:2106.06295 (June 2021). PDF. An introspective network that can learn to run its own weight change algorithm. In Proc. of the Intl. Conf. on Artificial Neural Networks, J. Schmidhuber. Habilitation thesis, TUM, 1993. PDF. Preprint arXiv:2012.14905 [cs.LG], 2020. Report arXiv:2011.07831 [cs.AI], 2020. Google Research Blog, Sep 2015, see also Aug 2015 Google Alphr Technology, Jul 2015, or 9to5google, Jul 2015 WIRED, Sep 2016, siliconANGLE, Sep 2016 Blog post, Internet Archive, 2010. A blog post describing the basic ideas[AC][AC90, AC90b][AC20] of GANs. Description of GANs that does not cite the original work of 1990[AC][AC90,AC90b][AC20][R2] (also containing wrong claims about Predictability Minimization[PM0-2][AC20]). Link. This was number 1 on Hacker News. Frankfurter Allgemeine Zeitung, 16/6/2021. Preprint arXiv/2005.14165. for Image Classification. International Joint Conference on Artificial Intelligence (IJCAI-2011, Barcelona), 2011. PDF. ArXiv preprint. competitor.[DAN1] This led to massive interest from industry. PDF. PDF. North-Holland, 1991. PDF. 
Extending TR FKI-129-90, TUM, 1990. PDF. PDF. Preprints arXiv:1505.00387 (May 2015) and arXiv:1507.06228 (July 2015). Also at NIPS 2015. The LSTM with forget gates[LSTM2] for RNNs.) Resnets[HW2] are a version of this where the gates are always open: g(x)=t(x)=const=1. Highway Nets perform roughly as well as ResNets[HW2] on ImageNet.[HW3] Highway layers are also often used for natural language processing, where the simpler residual layers do not work as well.[HW3] Link. arXiv:1512.03385 (Dec 2015). Residual nets are a version of Highway Nets[HW1] arxiv:1612.07771 (2016). Also at ICLR 2017. Preprint arXiv:1704.04760 PDF. PDF. arXiv:1607.06450, 2016. A New Publishing Model in Computer Science. 19/5/2021. [LSTM1] S. Hochreiter, J. Schmidhuber. Long Short-Term Memory. Neural Computation, 9(8):1735-1780, 1997. PDF. Based on [LSTM0]. More. PDF. PDF. PDF. PDF. PDF. PDF. PDF. PDF. Preprint: arxiv:1506.07452. PDF. PDF. Preprint arXiv:1805.04908. Architectures. Preprint arXiv:1703.03906 arXiv:2005.05744, 2020. Computation 22(12): 3207-3220, 2010. ArXiv Preprint. By 2010, when compute was 100 times more expensive than today, both our feedforward NNs[MLP1] Preprint arXiv:1611.01578 (PDF), 2017. Correspondence, Nature, vol 483, p 541, March 2012, doi:10.1038/483541b. Letter, Science, vol 336, p 1639, June 2012. See also comment on response by A. Hodges (DOI:10.1126/science.336.6089.1639-a) NY Times article NY Times article Learning Dexterous In-Hand Manipulation. arxiv:1312.5602 (PDF). arxiv:1912.06680. PDF. Based on TR FKI-126-90 (1990).[AC90] PDF. Partially based on TR FKI-126-90 (1990).[AC90] Report arXiv:1210.0118 [cs.AI], 2015. One Big Net For Everything. Preprint arXiv:1802.08864 [cs.AI], Feb 2018. Preprint: arXiv:1809.01999. Github: World Models. minimization. TR CU-CS-565-91, Univ. Colorado at Boulder, 1991. PDF. 1991. PDF. arXiv:1112.5309 [cs.AI] First Experiments with PowerPlay. arXiv:1210.8385 [cs.AI]. [R1] Reddit/ML, 2019. Hinton, LeCun, Bengio receive ACM Turing Award. [R2] Reddit/ML, 2019. J. Schmidhuber really had GANs in 1990. [R3] Reddit/ML, 2019. NeurIPS 2019 Bengio Schmidhuber Meta-Learning Fiasco. [R4] Reddit/ML, 2019. Five major deep learning papers by G. Hinton did not cite similar earlier work by J. Schmidhuber. [R5] Reddit/ML, 2019. The 1997 LSTM paper by Hochreiter & Schmidhuber has become the most cited deep learning research paper of the 20th century. [R6] Reddit/ML, 2019. DanNet, the CUDA CNN of Dan Ciresan in J. Schmidhuber [R7] Reddit/ML, 2019. J. Schmidhuber on Seppo Linnainmaa, inventor of backpropagation in 1970. [R8] Reddit/ML, 2019. J. Schmidhuber on Alexey Ivakhnenko, godfather of deep learning 1965. [R9] Reddit/ML, 2019. We [R11] Reddit/ML, 2020. Schmidhuber: Critique of Honda Prize for Dr. Hinton [R12] Reddit/ML, 2020. J. Schmidhuber: Critique of Turing Award for Drs. Bengio & Hinton & LeCun [R15] Reddit/ML, 2021. J. Schmidhuber Preprint arXiv/1311.2524, Nov 2013. Preprint arXiv/1703.06870, 2017. Link. The Past, Present and Future of Artificial Intelligence. PDF. ACM Link. 1992. Based on TR FKI-148-91, TUM, 1991.[UN0] PDF. [UN2] J. Schmidhuber. Habilitation thesis, TUM, 1993. PDF. can be found here (depth > 1000). 2006. PDF. Link. [VAN1] S. Hochreiter. Untersuchungen zu dynamischen neuronalen Netzen. Diploma thesis, TUM, 1991 (advisor J. Schmidhuber). PDF. PDF. [VAN4] Y. Bengio. Neural net language models. Scholarpedia, 3(1):3881, 2008. Link. Link. Youtube video [see 28:16]. But in 2010, our team showed[MLP1-2] Youtube video, 2018. 
Preprint arXiv:1609.08144 (PDF), 2016. Based on LSTM which it mentions at least 50 times. WWW link (retrieved 15 May 2020). PDF. Menu


    Twitter:
@SchmidhuberAI In 1991, I published neural networks (NNs) that learn by gradient descent to program the fast weights of another NN (see Sec. 1). One of them[FWP0-1] computed its fast weight changes through additive outer products of self-invented activation patterns (now often called keys and values for self-attention; Sec. 2). The very similar Transformers[TR1-2] combine this with projections and softmax; Transformers with linearized self-attention[TR5-6] are formally equivalent to the 1991 Fast Weight Programmers[MOST] (see this tweet). In 1993, I also introduced the attention terminology now used in this context[ATT] (Sec. 4), and RNNs that program themselves (Sec. 3). The additive fast weight changes also counter the vanishing gradient problem aka deep learning problem (analyzed a few months later in 1991[VAN1]) (Sec. 5). In 2021, we published a brand new, improved version[FWP6] of the 1991 fast weight update rule (Sec. 6). Further FWP work includes reinforcement learning through neuroevolution[FWP5] (2005-, Sec. 7), goal-conditioned policy generators (2022),[GGP] and metalearning machines that learn to learn[FWPMETA1-9] (1992-2022, Sec. 8). As I have frequently emphasized since 1990,[AC90][PLAN][META] inspired by Gödel's universal self-referential formal systems,[GOD][GOD34] I built NNs whose outputs are changes of programs or weight matrices of other NNs[FWP0-2] (Sec. 1, 2, 3), and even NNs that can change their own weight change algorithms or learning algorithms[FWPMETA1-5] (Sec. 8). A gradient descent procedure[BP1-4][BPA][R7] can compute a direction in program space where one may find a better program,[AC90] in particular, a better program-modifying program.[FWP0-2][FWPMETA1-5] (Historical notes: Ivakhnenko's nets of 1965 already had many layers;[DEEP1-2] their activation functions were Kolmogorov-Gabor polynomials which include the now popular multiplicative gates.[DL1-2] In 1981, von der Malsburg was the first to explicitly emphasize the importance of NNs with rapidly changing weights.[FAST] The second paper on this was published by Feldman in 1982.[FASTa] The weights of a 1987 NN were sums of weights with a large learning rate and weights with a small rate[FASTb][T22] (but these have nothing to do with the NN-programming NNs discussed below).) Fast Weight Programmers (FWPs) were published in 1991-93[FWP0-2] (Sec. 1, 2, 3, 4); compare Transformers[TR1-6] (Sec. 2, 3, 4, 5). The 1991 system is a slow NN that learns by backpropagation[BP1-4] to rapidly modify the fast weights of another NN,[FWP0] essentially the system later published in Neural Computation.[FWP1] It can learn to control memory in a fully neural way (Sec. 4), rather than in a hybrid fashion.[PDA1][PDA2][DNC] Compare also later work on Synthetic Gradients.[NAN1-5] One of the FWPs of 1991[FWP0-1] is illustrated in the figure. A disadvantage addressed in Sec. 2 is that the slow net needs many output units if the fast net is large.

The Fast Weight Programmer[FWP0-1] depicted in Sec. 1 has a slow net output unit for each fast weight. However, Section 2 of the same 1991 paper[FWP0] introduced a more compact scheme that maps onto today's linear[TR5-6] Transformers[TR1-2]: the slow net invents pairs of activation patterns, and the outer product of each pair is added to the fast weights (which then may be normalized by a squashing function[FWP0]). The programming instructions are thus second order tensor products.[FWP0-3a] The highly successful Transformers of 2017[TR1-2] can be viewed as a combination of my additive outer product fast weight principle[FWP0-2] with projections and softmax; linear Transformers (2020-21)[TR5-6] abandoned the softmax, essentially resurrecting the original 1991 system.[FWP0-1] Compare Sec. 6. Of course, outer products in NNs go back at least to Hebb's informal rule and Steinbuch's Lernmatrix, but there no separate net learned to compute the weight changes. I offered the FWPs of 1991[FWP0-1] as an alternative to recurrent NNs (Sec. 1); modern Transformers are also viewed as RNN alternatives, despite their limitations.[TR3-4] The slow net and the fast net of the 1991 system[FWP0-1] in Sec. 2 were feedforward NNs (FNNs), like most current Transformers.[TR1-6] In 1993, I collapsed all of this into a single RNN that could rapidly reprogram all of its own fast weights through additive outer product-based weight changes.[FWP2] One motivation, reflected by the title of the paper,[FWP2] was to reduce the ratio between learning complexity and the number of time-varying variables. See also our more recent work on FWPs since 2017,[FWP3-3a][FWPMETA7][FWP6] and compare a recent study.[RA21] Today, everybody is talking about attention when it comes to describing the principles of Transformers.[TR1-2] The additive outer products[FWP0-1] of the Fast Weight Programmers described in Sec. 2 and Sec. 3 play the role of the attention weights or self-attention weights (see also[FWP4b-d]); the attention terminology itself was introduced in my 1993 paper[FWP2] on such Fast Weight Programmers.[FWP2][ATT] Apart from possible normalization/squashing,[FWP0] the fast weight changes are additive (Sec. 1 & 2). This matters: the vanishing gradient problem was analyzed by my brilliant student Sepp Hochreiter a few months later in his 1991 diploma thesis.[VAN1] The core of LSTM counters it by operating in a linear additive activation space (ignoring LSTM's gates); additive FWPs[FWP0-2] (Sec. 1 & 2) counter it through a dual approach, favoring additive operations yielding non-vanishing first derivatives and error flow.[VAN1] Transformers[TR1-6] also follow the additive approach[FWP0-2] (compare Sec. 2 and Sec. 4 on attention terminology since 1993). LSTM has a famous feedforward descendant: the Highway Net, essentially a feedforward version of LSTM[LSTM1] with forget gates,[LSTM2] refined into the Residual Net or ResNet[HW2] (Dec 2015). LSTM also runs on billions of smartphones.[DL4] Notably, LSTM can rapidly learn to solve certain quickly solvable problems[LSTM13] that plain Transformers cannot. Recent work of February 2021[FWP6] connects modern linearized attention mechanisms[TR5-6] and Fast Weight Programmer variants.[FWP0-2][TR5-6] Building on previous work[FWPMETA7] on FWPs (Sec. 1, 2, 3, 8), we replace the 1991 elementary programming instruction based on additive outer products[FWP0-2] by a delta rule-like[WID] update, with gains on language modeling tasks.[FWP6] Our code is public. Related work of June 2021[FWP7] (also with Robert Csordas) points out that the original FWP formulation of 1991[FWP0-1] is more general than the one of linear Transformers: a slow NN continually reprograms the weights of a fast NN. Our code is public. An earlier FWP application was reinforcement learning through neuroevolution, with my former postdoc Faustino Gomez[FWP5] (now CEO of NNAISENSE), in 2005. (Our 2005 paper on deep RL[DL6,6a] was actually the first machine learning publication with the word combination "learn deep" in the title.) There we also encoded numerous weights of large NNs through very compact codes,[KO0-2][CO1-4] exploiting the compressibility of typical weight matrices.
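To make the outer-product programming instruction concrete, here is a minimal NumPy sketch (illustrative only, not the 1991 implementation; the projection matrices P_k, P_v, P_q and all sizes are invented): a slow net emits key, value, and query patterns, the fast weight matrix changes by an additive outer product, and the fast net's answer to the query amounts to softmax-free, linear-attention-style retrieval.

    import numpy as np

    rng = np.random.default_rng(0)
    d = 8                       # size of keys/values/queries (made up)
    W_fast = np.zeros((d, d))   # fast weights, reprogrammed at every step

    # Hypothetical slow-net projections; in a real system they are learned slowly.
    P_k = 0.1 * rng.standard_normal((d, d))
    P_v = 0.1 * rng.standard_normal((d, d))
    P_q = 0.1 * rng.standard_normal((d, d))

    def step(x, W):
        # Slow net invents key/value/query patterns from the input.
        k, v, q = P_k @ x, P_v @ x, P_q @ x
        # Elementary programming instruction: additive outer product.
        W = W + np.outer(v, k)
        # Fast net answers the query: softmax-free linear attention.
        return W @ q, W

    for t in range(5):
        x = rng.standard_normal(d)
        y, W_fast = step(x, W_fast)
    print(y.shape, W_fast.shape)    # (8,) (8, 8)

The 2021 delta rule-like variant[FWP6] and the optional squashing/normalization[FWP0] modify the W update line, but the additive structure stays the same.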

Recent work of 2022[GGP] extends this to goal-conditioned generators of policies. In references[FWPMETA1-5] since 1992, the slow NN and the fast NN (Sec. 1) are recurrent and identical: the RNN can even see its own errors or reward signals, called eval(t+1) in the figure.[FWPMETA5]

The 1993 FWP of Sec. 3[FWP2] also was an RNN. Unlike the self-referential RNN above,[FWPMETA1-5] it used outer products between key patterns and value patterns (Sec. 2) to manipulate fast weights that represent functions of two variables[HO1] (more on LSTM and fast weights in Sec. 5). In 2020, Imanol Schlag et al. augmented an LSTM with an associative fast weight memory,[FWPMETA7] useful in partially observable environments.[FWPMETA7] Our recent MetaGenRL (2020)[METARL10] meta-learns learning algorithms; see the blog post of my PhD student Louis Kirsch. His VS-ML uses outer-product-like fast weights encoded in the activations of LSTMs,[FWPMETA6] again manipulating functions of two variables[FWP2] (Sec. 3). VS-ML can also learn to implement the backpropagation learning algorithm[BP1-4] purely in the end-to-end differentiable forward dynamics of RNNs.[FWPMETA6]

In 2022, we also published at ICML a modern self-referential weight matrix (SRWM)[FWPMETA8] based on the 1992 SRWM,[FWPMETA1-5] capable of self-improvement (compare this tweet). A modern self-referential weight matrix (2022) based on the one of 1992. This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Transformers with linearized self-attention were published in Neural Computation (1992), equivalent to Fast Weight Programmers (apart from normalization), separating storage and control. Keys/values were called FROM/TO. The attention terminology was introduced at ICANN 1993. Juergen Schmidhuber.


    @SchmidhuberAI
Preprint: arXiv:2212.11279 (also mentioning much work of my own team).
Sec. 1: Introduction
    Sec. 2: 1676: The Chain Rule For Backward Credit Assignment
    Sec. 3: Circa 1800: First Neural Net (NN) / Linear Regression / Shallow Learning
    Sec. 4: 1920-1925: First Recurrent NN (RNN) Architecture. ~1972: First Learning RNNs
    Sec. 5: 1958: Multilayer Feedforward NN (without Deep Learning)
    Sec. 6: 1965: First Deep Learning
    Sec. 7: 1967-68: Deep Learning by Stochastic Gradient Descent
    Sec. 8: 1970: Backpropagation. 1982: For NNs. 1960: Precursor.
    Sec. 9: 1979: First Deep Convolutional NN (1969: Rectified Linear Units)
    Sec. 10: 1980s-90s: Graph NNs / Stochastic Delta Rule (Dropout) / More RNNs / Etc
    Sec. 11: Feb 1990: Generative Adversarial Networks / Artificial Curiosity / NN Online Planners
    Sec. 12: April 1990: NNs Learn to Generate Subgoals / Work on Command
    Sec. 13: March 1991: NNs Learn to Program NNs. Transformers with Linearized Self-Attention
    Sec. 14: April 1991: Deep Learning by Self-Supervised Pre-Training. Distilling NNs
    Sec. 15: June 1991: Fundamental Deep Learning Problem: Vanishing/Exploding Gradients
    Sec. 16: June 1991: Roots of Long Short-Term Memory / Highway Nets / ResNets
    Sec. 17: 1980s-: NNs for Learning to Act Without a Teacher
Sec. 18: It's the Hardware, Stupid!
Sec. 19: But Don't Neglect the Theory of AI (Since 1931) and Computer Science
Sec. 20: The Broader Historic Context from Big Bang to Far Future
    Sec. 21: Acknowledgments
    Sec. 22: 555+ Partially Annotated References (many more in the award-winning survey[DL1])
It also addresses some quite erroneous ideas about the origins of the universe (see the final section).

A history of AI written in the 1980s would have emphasized topics such as theorem proving,[GOD][GOD34][ZU48][NS56] logic programming, expert systems, and heuristic search[FEI63,83][LEN83] (an old area of research now seeing renewed interest). Practical AI dates back at least to 1914, when Leonardo Torres y Quevedo (see below) built the first working chess end game player.[BRU1-4] Theoretical AI dates back at least to 1931-34, when Kurt Gödel identified fundamental limits of any type of computation-based AI.[GOD][BIB3][GOD21,a,b] A history of AI written in the early 2000s would have put more emphasis on topics such as support vector machines and kernel methods,[SVM1-4] Bayesian (actually Laplacian or possibly Saundersonian[STI83-85]) reasoning[BAY1-8][FI22] and other concepts of probability theory and statistics,[MM1-5][NIL98][RUS95] decision trees, e.g.,[MIT97] ensemble methods,[ENS1-4] swarm intelligence,[SW1] and evolutionary computation[EVO1-7] ([TUR1], unpublished). Why? Because back then such techniques drove many successful AI applications.

A history of AI written in the 2020s must emphasize concepts such as the even older chain rule[LEI07] and deep nonlinear artificial neural networks (NNs) trained by gradient descent. Such ideas were already debated at the famous Macy conferences[MACY51] and at the 1951 Paris conference on calculating machines and human thought, now often viewed as the first conference on AI.[AI51][BRO21][BRU4] Why? Because much of modern AI is based on "deep learning" with NNs.[DL1-2][DEC]

The present piece also debunks a frequently repeated, misleading "history of deep learning"[S20][DL3,3a] which ignores most of the pioneering work mentioned below.[T22] See Footnote 6. The title image of the present article is a reaction to an erroneous piece of common knowledge which says[T19] that the use of NNs "as a tool to help computers recognize patterns and simulate human intelligence had been introduced in the 1980s," although such NNs appeared long before the 1980s.[T22] I insist on proper credit assignment not only here but also in quite disconnected areas, as demonstrated by my letters on the history of aviation,[NASC1-2] the telephone,[NASC3] the computer,[NASC4-7] resilient robots,[NASC8] and scientists of the 19th century.[NASC9] A few remarks on Leibniz, a central figure of this history: in 1676, he published the chain rule, which answers the question of how a small change of a parameter changes the network's output. This answer is used by the technique of gradient descent (GD), apparently first proposed by Augustin-Louis Cauchy in 1847.[MAD86-05] Leibniz (1673) also built "the world's first" machine that could perform all four arithmetic operations, and the first with an internal memory.[BL16] He described the principles of binary computers (1679),[L79][L03][LA14][HO66][LEI21,a,b] and his formal Algebra of Thought (1686)[L86][WI48] was deductively equivalent[LE18] to the much later Boolean Algebra (1847).[BOO] He pursued the goal of answering all possible questions through computation.[WI48] The efficient reverse mode of applying his chain rule to networks (backpropagation), however, was not published until 1970, as discussed below.[BP1,4,5]

In 1805, Adrien-Marie Legendre published what is now often called a linear neural network: the method of least squares, a.k.a. shallow learning. Over a century later, Rosenblatt combined such a linear NN with an output threshold function to obtain a pattern classifier (compare his more advanced work on multi-layer networks discussed below); see also the similar later work of Joseph[R61] and of Widrow & Hoff. Meanwhile, the first non-learning recurrent NN (RNN) architecture, the Lenz-Ising model, was analyzed by physicists Ernst Ising and Wilhelm Lenz in the 1920s.[L20][I24,I25][K41][W45][T22] It settles into an equilibrium state in response to input conditions, and is the foundation of the first learning RNNs (see below). Non-learning RNN-like elements were also discussed in 1943 by neuroscientists Warren McCulloch and Walter Pitts[MC43] and formally analyzed in 1956 by Stephen Cole Kleene.[K56]
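For concreteness, a tiny NumPy sketch of this circa-1800 "shallow learning" (all data invented): least squares fits the weights of a linear NN in closed form, and a Rosenblatt-style output threshold turns it into a pattern classifier.

    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.standard_normal((100, 3))          # invented inputs
    w_true = np.array([2.0, -1.0, 0.5])
    y = X @ w_true + 0.1 * rng.standard_normal(100)

    # Legendre/Gauss: least squares = closed-form "training" of a linear NN.
    w, *_ = np.linalg.lstsq(X, y, rcond=None)

    # Rosenblatt-style use: pass the linear unit through an output threshold
    # to obtain a two-class pattern classifier.
    classify = lambda x: 1 if x @ w > 0 else 0
    print(np.round(w, 2), classify(X[0]))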

In 1972, Shun-Ichi Amari made the Lenz-Ising recurrent architecture adaptive such that it could learn to associate input patterns with output patterns by changing its connection weights.[AMH1] See also the related work of Stephen Grossberg and Kaoru Nakano.

Ten years later, the Amari network was republished (and its storage capacity analyzed).[AMH2] Some called it the Hopfield Network (!) or Amari-Hopfield Network.[AMH3] Amari's 1972 paper also described a sequence-processing generalization thereof.[AMH1] Decades earlier, in 1948, Alan Turing had written up related ideas on artificial evolution and learning RNNs. This, however, was first published many decades later,[TUR1] which explains the obscurity of his thoughts here.[TUR21] (Margin note: it has been pointed out that the famous "Turing Test" should actually be called the "Descartes Test."[TUR3,a,b][TUR21])
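A minimal sketch of such an attractor network, assuming the textbook Hebbian outer-product storage rule (illustrative, not Amari's exact 1972 formulation): the weights store patterns, and recall iterates the dynamics until the state settles into an equilibrium.

    import numpy as np

    # Binary attractor network in the spirit of Amari (1972)[AMH1], later
    # republished as the "Hopfield network"[AMH2]: patterns are stored via
    # Hebbian outer products; recall settles into an equilibrium state.
    patterns = np.array([[ 1, -1,  1, -1,  1, -1,  1, -1],
                         [ 1,  1, -1, -1,  1,  1, -1, -1]])
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns) / n
    np.fill_diagonal(W, 0)                           # no self-connections

    def recall(state, steps=5):
        for _ in range(steps):
            state = np.where(W @ state >= 0, 1, -1)  # settle toward equilibrium
        return state

    noisy = patterns[0].copy()
    noisy[0] = -noisy[0]                             # corrupt one unit
    print(recall(noisy))                             # recovers patterns[0]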

Today, the most popular RNN is the Long Short-Term Memory (LSTM) mentioned below, which has become the most cited NN of the 20th century.[MOST]

In 1958, Frank Rosenblatt not only combined linear NNs and threshold functions (see the section on shallow learning since 1800), he also had more interesting, deeper multilayer perceptrons (MLPs).[R58] His MLPs had a non-learning first layer with randomized weights and an adaptive output layer. Because only the last layer learned,[DL1] Rosenblatt basically had what much later was rebranded as Extreme Learning Machines (ELMs) without proper attribution.[ELM1-2][CONN21][T22]

MLPs were also discussed in 1961 by Karl Steinbuch[ST61-95] and Roger David Joseph[R61] (1961). See also Oliver Selfridge's multilayer Pandemonium (1959). Rosenblatt (1962) even wrote about "back-propagating errors" in an MLP with a hidden layer,[R62] although he did not yet have a general deep learning algorithm for deep MLPs. What is now called backpropagation is quite different and was first published in 1970, as discussed below.[BP1,4,5]

Today, the most popular FNN is a version of the LSTM-based Highway Net (mentioned below) called ResNet,[HW1-3] which has become the most cited NN of the 21st century.[MOST] In 1965, Alexey Ivakhnenko and Valentin Lapa introduced the first deep learning algorithm for deep MLPs with arbitrarily many hidden layers (their activation functions were Kolmogorov-Gabor polynomials, which include the now popular multiplicative gates).[DEEP1-2][DL1-2][FDL] A paper of 1971[DEEP2] already described a deep learning net with 8 layers. The terms "deep learning" were first introduced to Machine Learning much later by Dechter (1986), and to NNs by Aizenberg et al (2000).[DL2] (Margin note: our 2005 paper on deep learning[DL6,6a] was the first machine learning publication with the word combination "learn deep" in the title.[T22])
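The Highway Net/ResNet relation mentioned above is simple: a ResNet layer is a Highway layer whose gates are always open, g(x)=t(x)=const=1. A minimal NumPy sketch of that relation (weights and sizes invented):

    import numpy as np

    rng = np.random.default_rng(2)
    d = 4
    W_h = rng.standard_normal((d, d))   # transform weights (invented)
    W_g = rng.standard_normal((d, d))   # gate g weights
    W_t = rng.standard_normal((d, d))   # gate t weights
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    def highway_layer(x):
        # Highway layer[HW1]: y = g(x)*h(x) + t(x)*x, with LSTM-like gates.
        h = np.tanh(W_h @ x)
        g, t = sigmoid(W_g @ x), sigmoid(W_t @ x)
        return g * h + t * x

    def residual_layer(x):
        # ResNet layer[HW2] as the special case with gates always open:
        # g(x) = t(x) = 1, i.e., y = h(x) + x.
        return np.tanh(W_h @ x) + x

    x = rng.standard_normal(d)
    print(highway_layer(x), residual_layer(x))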

Ivakhnenko and Lapa (1965, see above) trained their nets layer by layer. In 1967, however, Amari suggested training MLPs in an end-to-end fashion from scratch by stochastic gradient descent (SGD),[GD1] a method proposed in 1951 by Robbins & Monro.[STO51-52]
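A minimal sketch of SGD in the Robbins-Monro spirit (invented data; the decaying step size is one typical choice satisfying the classic conditions that the step sizes sum to infinity while their squares sum to a finite value):

    import numpy as np

    # Stochastic gradient descent a la Robbins & Monro (1951)[STO51-52]:
    # noisy per-sample gradients, decaying step size a_t = 0.1 / t**0.75.
    rng = np.random.default_rng(3)
    w_true = np.array([1.5, -0.7])
    w = np.zeros(2)

    for t in range(1, 10001):
        x = rng.standard_normal(2)                  # one random training example
        y = x @ w_true + 0.05 * rng.standard_normal()
        grad = 2 * (x @ w - y) * x                  # gradient of squared error
        w -= (0.1 / t**0.75) * grad                 # Robbins-Monro step
    print(np.round(w, 3))                           # close to w_true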

Amari's implementation[GD1-2a] (with his student Saito) learned internal representations in a five-layer MLP with two modifiable layers, trained to classify non-linearly separable pattern classes.

See also Iakov Zalmanovich Tsypkin's even earlier work on gradient descent-based learning for non-linear systems.

    Remarkably, as mentioned above, Amari also published learning RNNs in 1972.[AMH1]

In 1970, Seppo Linnainmaa was the first to publish what is now known as backpropagation, the famous algorithm for credit assignment in networks of differentiable nodes,[BP1] also known as the reverse mode of automatic differentiation.

    In 1982, Paul Werbos proposed to use the method to train NNs,[BP2] extending ideas in his 1974 thesis.

In 1960, Henry J. Kelley already had a precursor of backpropagation in the field of control theory;[BPA] see also later work of the early 1960s by Stuart Dreyfus and Arthur E. Bryson.[BPB][BPC][R7] Unlike Linnainmaa's general method, however, this earlier work was not yet the efficient reverse mode of backpropagation.

Backpropagation is essentially an efficient way of implementing Leibniz's chain rule for deep networks. Gradient descent uses it to iteratively change the weights such that the NN behaves more and more like some teacher, which could be a human, or another NN,[UN-UN2] or something else. By the mid-1980s, relatively cheap computers had just become accessible in wealthier academic labs. An experimental analysis of the known method[BP1-2] then showed that backpropagation can yield useful internal representations in hidden layers of NNs.[RUM] At least for supervised learning, backpropagation is generally more efficient than Amari's above-mentioned deep learning through plain SGD (1967). By 2010, our team with my postdoc Dan Ciresan[MLP1-2] showed that deep NNs can be trained by plain backpropagation and do not at all require unsupervised pre-training for important applications.[MLP2]
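The chain-rule bookkeeping can be shown in a few lines. A minimal NumPy sketch of reverse-mode credit assignment in a two-layer net (all weights and data invented): the forward pass stores intermediates, and the backward pass reuses them to obtain all weight gradients in one sweep.

    import numpy as np

    rng = np.random.default_rng(4)
    # Tiny two-layer net: x -> tanh(W1 x) -> W2 h, with squared error loss.
    W1 = 0.5 * rng.standard_normal((5, 3))
    W2 = 0.5 * rng.standard_normal((2, 5))
    x, target = rng.standard_normal(3), rng.standard_normal(2)

    # Forward pass: store the intermediates needed by the chain rule.
    a = W1 @ x
    h = np.tanh(a)
    y = W2 @ h
    loss = 0.5 * np.sum((y - target) ** 2)

    # Backward pass (reverse mode): apply the chain rule once per layer,
    # from the output back toward the input.
    dy = y - target                 # dloss/dy
    dW2 = np.outer(dy, h)           # dloss/dW2
    dh = W2.T @ dy                  # chain rule through y = W2 h
    da = dh * (1 - h ** 2)          # chain rule through h = tanh(a)
    dW1 = np.outer(da, x)           # dloss/dW1

    # One gradient descent step on both layers.
    W1 -= 0.1 * dW1
    W2 -= 0.1 * dW2
    print(round(loss, 4))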

Our system set a new performance record[MLP1] on the back then famous MNIST image recognition benchmark, using deep MLPs on GPUs (the first to use GPUs for NNs were Jung & Oh in 2004[GPUNN]). A reviewer called this a "wake-up call to the machine learning community." A revisionist narrative claims that NN research was abandoned until "researchers took a fresh look at the problem in the 1980s."[S20] However, the 1969 book[M69] addressed a "problem" of Gauss & Legendre's shallow learning (circa 1800) that had already been solved by the deep learning of Ivakhnenko & Lapa (1965) and then also by Amari's SGD for MLPs (1967). Later, certain methods were republished (such as the Boltzmann machine[BM][HIN][SK75][G63][T22]) without relating them to the original work,[DLC][S20][T22] although the true history is well-known: much of NN research happened in the 1960s-70s, especially outside of the Anglosphere.[DEEP1-2][GD1-3][CNN1][DL1-2][T22] Blatant misattribution and unintentional[PLAG1][CONN21] or intentional[FAKE2] plagiarism are still tainting the entire field of deep learning.[T22] Scientific journals "need to make clearer and firmer commitments to self-correction,"[SV20] as is already the standard in other scientific fields. The basic deep CNN architecture with convolutional and downsampling layers is due to Fukushima (1979), who called it the Neocognitron.[CNN1] He had also introduced rectified linear units (ReLUs) for NNs (1969).[RELU1] They are now widely used in CNNs and other NNs. The popular downsampling variant called max-pooling was introduced by Yamaguchi et al. for TDNNs in 1990[CNN3a] and by Juan Weng et al. for higher-dimensional CNNs in 1993.[CNN3] Since 1989, Yann LeCun's team has contributed improvements of CNNs, especially through backpropagation-based training. Baldi and Chauvin (1993) had the first application of CNNs with backpropagation to biomedical/biometric images.[BA93] CNNs achieved superior computer vision through GPUs (Dan Ciresan et al., 2011),[GPUCNN1,3,5] building on the GPU-based[GPUNN][GPUCNN5] CNNs of 2006.[GPUCNN] In 2011, DanNet became the first pure deep CNN to win computer vision contests.[GPUCNN2-3,5]
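A minimal NumPy sketch of the three CNN ingredients just mentioned: weight-sharing convolution, ReLU, and max-pooling (sizes and data invented; no training loop):

    import numpy as np

    # Convolution, Fukushima's ReLU (1969)[RELU1], and 2x2 max-pooling
    # [CNN3a][CNN3], chained as in a basic CNN layer. Illustrative only.
    rng = np.random.default_rng(5)
    image = rng.standard_normal((8, 8))
    kernel = rng.standard_normal((3, 3))

    def conv2d_valid(img, k):
        kh, kw = k.shape
        out = np.empty((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(img[i:i+kh, j:j+kw] * k)  # shared weights
        return out

    relu = lambda z: np.maximum(z, 0)                        # rectification

    def max_pool2x2(z):
        h, w = z.shape[0] // 2 * 2, z.shape[1] // 2 * 2
        z = z[:h, :w].reshape(h // 2, 2, w // 2, 2)
        return z.max(axis=(1, 3))                            # downsampling

    feature_map = max_pool2x2(relu(conv2d_valid(image, kernel)))
    print(feature_map.shape)                                 # (3, 3)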

Competition[GPUCNN5] and winner:
IJCNN 2011 traffic sign recognition: DanNet[DAN,DAN1][R6]
ISBI 2012 brain image segmentation: DanNet[GPUCNN3a]
ICPR 2012 mitosis detection: DanNet[GPUCNN8]
ImageNet 2012: AlexNet[GPUCNN4]
MICCAI 2013 mitosis detection: DanNet[GPUCNN8]
ImageNet 2014: VGG Net[GPUCNN9]
Twitter: @SchmidhuberAI This is a point-for-point critique of ACM's justification of the ACM A. M. Turing Award for deep learning, as well as a critique of the Turing Lecture published by ACM in July 2021 (see Executive Summary and Sec. I, V, II, XII, XIX, XXI, XIII, XIV, XX, XVII). The deep learning applications mentioned by ACM include (A) speech recognition, (B) natural language processing, (C) robotics, and (D) computer vision, plus (VII) medicine, astronomy, materials science (see Sec. A, B, C, D, VII, XVII, VI, XVI). Foundational claims are addressed in Sec. II, V, XX, XVIII; priority disputes with Dr. Bengio & Dr. Hinton in Sec. XVII, I. I also respond to LBH's Turing Lecture. Structure: Abstract & Outline (~300 words), Introduction (~300 words), Critique of LBH's Turing Lecture, Executive summary of what's wrong with ACM's laudation, 21 comments on 21 claims by ACM (~8,000 words), Conclusion (~2,000 words). All backed up by over 300 references (over 10,000 words). As ACM emphasizes, "science is self-correcting."[SV20] I correct errors, whether they are mine or other people's, and try to fight plagiarism,[FAKE2] collusion rings,[LIT21] and systemic academic corruption in all of their more and less subtle forms.[FAKE] Sec. 2 of this post[T20a][R12] concerns LBH's ACM 2018 A.M. Turing Award.[R1] After the Executive Summary in Sec. 3, Sec. 4 will split ACM's laudation into 21 parts: I, II, III, IV, V, VI, VII, VIII, IX, X, XI, XII, XIII, XIV, XV, XVI, XVII, XVIII, XIX, XX, XXI.
In 2021, ACM published yet another misleading overview of the field, this time based on LBH's Turing Lecture.[DL3a] LBH claim to "briefly describe the origins of deep learning"[DL3a] without even mentioning the world's first working deep learning nets of Ivakhnenko and Lapa in 1965[DEEP1-2][R8] (see Sec. II), nor the first really deep feedforward NN[HW1-3] (see Sec. D, VI). LSTM brought essentially unlimited depth to gradient-based supervised recurrent NNs[LSTM0-17] and was applied to speech from 2007.[LSTM4,14] LBH cite Hinton (2012) for "dropout" without mentioning that dropout is just a variant of Hanson's much earlier stochastic delta rule (see Sec. XIV). They fail to cite Amari, who trained deep perceptrons through stochastic gradient descent[GD1-3] (without reverse mode backpropagation[BP1]), and Fukushima, who introduced ReLUs in 1969[RELU1-2] (see Sec. XIV) and the basic CNN architecture in 1979 (see Sec. XVIII), not to mention the first deep learning nets already in 1965[DEEP1-2][R8] (see Sec. II) and the earlier fast weights of von der Malsburg (1981) and Feldman (1982).[FAST,FASTa-b][FWP] They dedicate an extra section to attention-based Transformers,[TR1-6] citing Bengio's team rather than the original 1991 work (see Sec. XVI), and claim priority for Bengio on neural language models, ignoring our earlier neural probabilistic models of text compression[SNT] (see Sec. XVI, XVII-1). In summation, LBH have repeatedly chosen to ignore the previous well-known critiques[DLC][HIN][T20a] and deep learning surveys,[DL1-2] and ACM's laudation perpetuates this.
Executive summary: while lauding LBH for NNs and deep learning (e.g., Sec. I), ACM fails to credit the pioneers. Numerous references can be found under the relevant section links I-XXI, which adhere to the sequential order of ACM's claims. Sec. I contains 4 subsections A, B, C, D:
Sec. A: Speech Recognition (see also Sec. VI & XI & XV): the first superior end-to-end neural speech recognition came long before Hinton (2012) and Bengio (XV).
Sec. B: Natural Language Processing (see also Sec. VI & XI & XVI).
Sec. C: Robotics.
Sec. D: Computer Vision (see also Sec. XVIII & XIV & XI & VI): deep CNNs on GPUs won contests without unsupervised pre-training (in contrast to Hinton's claims) and were applied widely, all before LeCun's later successes.
Sec. XIV & XI: ACM mentions GPU-accelerated NNs, pioneered by others.
Sec. XVIII: the CNN foundations are due to Fukushima and Waibel (see Sec. D). The first application of CNNs with backpropagation to biomedical/biometric images is due to Baldi and Chauvin.[BA93]
Sec. VII: ACM explicitly mentions medicine, where our DanNet was first.
Sec. XII & XIX & XXI: modern backpropagation and its precursors are due to others (see also Sec. XIII & II & V & III & IX & X & XX).
Sec. XX: ACM credits LeCun for work on hierarchical feature representation whose origins lie elsewhere.
Sec. XXI: ACM credits LeCun for work on complex networks of modules whose origins lie elsewhere.
Sec. XV: ACM credits Bengio for hybrids of NNs and probabilistic models of sequences (compare Sec. A & B).
Sec. XVI & XVII: attention, GANs, and other topics[R2-R6] have roots in earlier work.
Critique of LBH's Turing Lecture & Conclusion: see Sec. II & III & V & XII & XIII & XVII & XIV & XIX & XX & XXI.
In what follows, ACM's claims are addressed in their original order: I, II, III, IV, V, VI, VII, VIII, IX, X, XI, XII, XIII, XIV, XV, XVI, XVII, XVIII, XIX, XX, XXI.
LBH and their co-workers have contributed certain useful improvements of existing deep learning methods.[CNN2,4][CDI][LAN][RMSP][XAV][ATT14][CAPS] However, the foundations were laid by others: deep learning multilayer perceptrons (1965),[DEEP1-2][R8] stochastic gradient descent for multilayer perceptrons (1967),[GD1-3] backpropagation (1970),[BP1-2][R7] architectures of recurrent NNs (1925-56)[I25][MC43][K56] and convolutional NNs (1979),[CNN1] GPU-accelerated NNs (2004),[GPUNN][DAN][DAN1][GPUCNN5] transformer-like[TR1-6][FWP] attention[FWP][ATT] (1991), and more.[DL1-2][R2-R8] This may explain some of ACM's misattributions; see Sec. II & III & V & XIII & X & XVII & XII & XVIII & XX. The most important applications of deep learning in academia and industry[DL4] build on work mentioned by ACM (labeled as A, B, C, D) below:
A. Speech Recognition. (A1) The crucial component was our LSTM (1990s-2005),[LSTM0-6] based on the vanishing gradient analysis by my student Sepp Hochreiter in 1991.[VAN1] This happened long before the similar work of Bengio (see Sec. XVII).[MIR] LSTM was refined with my student Felix Gers.[LSTM2] (A2) Connectionist Temporal Classification (CTC) is due to my student Alex Graves et al. (2006).[CTC] Our team successfully applied CTC-trained LSTM to speech in 2007[LSTM4] (also with hierarchical LSTM stacks[LSTM14]), unlike the older hybrids with hidden Markov models (HMMs)[BW][BRI][BOU] (Sec. XV). Hinton et al. (2012) still used the old hybrid approach[HYB12] and did not compare it to CTC-LSTM. Graves later reused our end-to-end neural speech recognizer[LSTM4][LSTM14] as a postdoc in Hinton's lab. CTC-LSTM dramatically improved Google's on-device speech recognition[GSR19] (no longer on the server); see Sec. VI & XI & XV.
B. Natural Language Processing. In 1995, we already had an excellent neural probabilistic model of text[SNT] (see Sec. XVI). In 2001, we showed that LSTM can learn languages unlearnable by traditional models such as HMMs.[LSTM13] See also Sec. VI & XI & XV. The first superior end-to-end neural machine translation was also based on LSTM; compare the later work of Bengio's team, see Sec. XVI.
C. Robotics & RL etc. Since 2003, our team has used LSTM for Reinforcement Learning (RL) and robotics.[LSTM-RL][RPG][LSTMPG] For example, in 2018, a PG-trained LSTM was the core of OpenAI's system for dexterous in-hand manipulation. An LSTM-based system beat a pro player in the game of Starcraft, which is theoretically harder than Chess or Go[DM2] in many ways, and OpenAI Five learned to defeat human experts in the Dota 2 video game (2018).[OAI2]
Apart from A, B, C above, LSTM is used for chemistry, molecular design, lip reading, speech synthesis,[AM16] and much more. By 2017, much of the inference workload in Google's datacenters was being used for LSTM (only 5% for the CNNs of Sec. D).[JOU17] Apparently the first LSTM journal paper[LSTM1][R5] is now the most cited NN paper of the 20th century.
D. Computer Vision was revolutionized in the 2010s by a particular feedforward neural net (NN) called the convolutional NN (CNN).[CNN1-4] The basic CNN architecture with convolutional and downsampling layers is due to Fukushima (1979),[CNN1] who also introduced the now widely used rectified linear units (ReLUs) in 1969.[RELU1] The popular downsampling variant called max-pooling was introduced by Yamaguchi et al. for TDNNs in 1990[CNN3a] and by Weng et al. for higher-dimensional CNNs in 1993.[CNN3] Since 1989, LeCun's team has contributed improvements of CNNs. Finally, my own team showed in 2010[MLP1] that plain backpropagation suffices to train deep NNs, contrary to claims by Hinton[VID1] who said that "nobody in their right mind would ever suggest" this. Then our GPU-based DanNet won four important computer vision competitions in a row (15 May 2011, 6 Aug 2011, 1 Mar 2012, 10 Sep 2012),[GPUCNN5] including the contest described in our CVPR paper on DanNet,[GPUCNN3] all before the similar AlexNet of Hinton's team won ImageNet 2012. Our CNN image scanners were 1000 times faster than previous methods.[SCAN] The VGG network (ImageNet 2014 winner)[GPUCNN9] and other highly cited CNNs[RCNN1-3] built on this, as did ResNet, the ImageNet 2015 winner[HW2] (Dec 2015) and currently the most cited NN of the 21st century. See also Sec. XVIII & XIV & XI & VI.
The first non-learning recurrent NN (RNN) architecture (the Lenz-Ising model) was analyzed by physicists in the 1920s.[L20][I25][K41][W45] RNN-like elements were also discussed in 1943 by McCulloch and Pitts[MC43] and formally analyzed in 1956 by Kleene.[K56] In 1972, Amari reused the Lenz-Ising model to build a learning RNN, later sometimes called the Hopfield network or Amari-Hopfield Network.[AMH1-3] Turing wrote up related ideas on artificial evolution[TUR1] and learning RNNs, published only decades later. A single adaptive layer learned in 1958[R58] (see also Joseph[R61] and Widrow & Hoff); linear regression and the method of least squares[DL1-2] are much older. Multilayer perceptrons (MLPs) were discussed by Steinbuch[ST61-95] (1961), Joseph[R61] (1961), and Rosenblatt[R62] (1962), who wrote about "back-propagating errors" in an MLP with a hidden layer,[R62] but did not yet have a general deep learning algorithm for deep MLPs (what is now called backpropagation is quite different and was first published in 1970). Compare also Selfridge's Pandemonium (1959). Fukushima's deep convolutional NN architecture was first introduced in the 1970s;[CNN1] his very popular ReLU appeared already in 1969.[RELU1-2] See also Sec. XIII, III, V, VIII, IX, and X. A misleading "history of deep learning" is promulgated by LBH & co-authors, e.g., Sejnowski[S20] (see Sec. XIII). It goes more or less like this: "In 1969, Minsky & Papert[M69] showed that shallow NNs without hidden layers are very limited and the field was abandoned until researchers took a fresh look at the problem in the 1980s."[S20] However, as mentioned above, the 1969 book[M69] addressed a "problem" of Gauss & Legendre's shallow learning (circa 1800) that had already been solved by Ivakhnenko & Lapa's deep learning (1965) and then also by Amari's SGD for MLPs (1967); deep learning research was alive in the 1960s-70s, especially outside of the Anglosphere (but see a 1989 paper[MOZ]). See Sec. 1 of the overview:[MIR] our LSTM-related work solved "Very Deep Learning" tasks of depth > 1000.[UN2][DL1][UN] (By 2003, LSTM variants successfully dealt with language problems of depth up to 30,000.[LSTM17]) See Sec. III. Note that the foundations cited there[DLC][DEEP1-2][BP1][DL1-2][R7-R8][R2-R4] include deep learning multilayer perceptrons (1965),[DEEP1-2][R8] stochastic gradient descent for multilayer perceptrons (1967),[GD1-3] backpropagation (1970),[BP1,2][R7] architectures of recurrent NNs (1925-56)[I25][MC43][K56] and convolutional NNs (1979),[CNN1] GPU-accelerated NNs (2004),[GPUNN][GPUCNN5] and more.[DL1-2][R2-R8] See Sec. II & V & XIII & IX & X & XVII & XII & XVIII & XX & I. Until 2019, the web site deeplearning.net of Bengio's lab advertised deep learning as "moving beyond shallow machine learning since 2006",[DL7] referring to Hinton's unsupervised pre-training of 2006, although deep learning is much older; see Sec. II & XVII (5). Not to mention Ivakhnenko's nets, which Hinton,[UN4] Bengio,[UN5] and LBH[DL3,DL3a] did not cite either. See Sec. X. My comments systematically track the sequential order of ACM's claims.

ACM's account of AI history is misleading. Much of early AI in the 1940s-70s was actually about theorem proving.[ZU48][NS56] In 1936, Turing introduced the Turing Machine[TUR] and rederived the above-mentioned result of Gödel and Church.[CHU][TUR][HIN][GOD21,21a][TUR21][LEI21,21a] In the same year of 1936, Emil Post published yet another independent universal model of computing.[POS] (Hinton has not responded by suggesting any fact-based corrections.[HIN]) Gödel also identified the famous open problem "P=NP?" in his famous letter to John von Neumann (1956).[GOD56][URQ10] Konrad Zuse's patent application of 1936[ZU36-38][Z36][RO98][ZUS21] described program-controlled computers, predating Claude Shannon's 1937 thesis on relay circuits. Zuse also created the first high-level programming language in the early 1940s.[BAU][KNU] His early machines lacked only an explicit conditional jump instruction.[RO98] The foundations of deep learning are likewise older than ACM suggests: multilayer perceptrons that learn internal representations (1965),[DEEP1-2][R8] stochastic gradient descent for multilayer perceptrons (1967),[GD1-3] backpropagation (1970),[BP1,2][R7] architectures of recurrent NNs (1925-56)[I25][MC43][K56] and convolutional NNs (1979),[CNN1] GPU-accelerated NNs (2004),[GPUNN][GPUCNN5] record-setting deep MLPs (2010),[MLP1-2] transformer-like[TR1-6][FWP] attention[FWP][ATT] (1991), and more.[DL1-2][R2-R8] See Sec. II & I & III & XIII & X & XVII & XII & XVIII & XX. The first superhuman visual pattern recognition was achieved by our group 2010-2011.[MLP1-2][DAN][DAN1][GPUCNN5][R6] Baldi and Chauvin (1993) had the first application of CNNs with backpropagation to biomedical/biometric images.[BA93] Our DanNet won a medical imaging contest (Sept 2012, on cancer detection),[GPUCNN5,8] and we were able to greatly improve steel defect detection.[ST] All of this happened before the similar GPU-accelerated AlexNet of Hinton's team won ImageNet 2012, and before deep learning entered medical imaging through mitosis detection.[MGC][GPUCNN5,8] (See Sec. D & XI.) LBH built on such earlier work without citing it,[DL1][DLC][HIN][R2-R4][R7-R8] as detailed in Sec. V & XII & XIX & II & III & XIII & XVII & X & I; the same holds for more recent work.[HIN][DLC][DL1-2][DEEP1-2][RELU1-2][R7-R8] See Sec. II & III & XIII & V & X & XIV & I. The terms "deep learning" were first introduced to Machine Learning by Dechter (1986), and to NNs by Aizenberg et al (2000).[DL2] To my knowledge, LBH have never cited them. (Margin note: our 2005 paper on deep RL[DL6,6a] was published before LBH started talking about "deep learning ... moving beyond shallow machine learning since 2006",[DL7] referring to their unsupervised pre-training methods of 2006. See Sec. III.) See Sec. II & III & XIII & V & I; all of this is ignored by LBH (see Sec. V & II & III & I & XIII & XII & XIX & X & XVII).

ACM correctly mentions advancements through GPUs, but the first to use GPUs for NNs were Jung & Oh (2004).[GPUNN][GPUCNN5] In 2010, our deep GPU-trained MLPs set an important benchmark record,[MLP1-2] showing that plain backpropagation suffices to train deep NNs, contrary to Hinton's claims. In 2011, our DanNet achieved superhuman computer vision (explicitly mentioned by ACM) for the first time[R6] (see Sec. D). The speech recognition successes (explicitly mentioned by ACM) were actually dominated by LSTM and CTC of our team.[LSTM1-4][CTC] In particular, as mentioned in Sec. A, CTC-LSTM is end-to-end neural, unlike older hybrids with models such as HMMs.[BW][BOU][BRI][HYB12] As mentioned in Sec. B and XVI, the first superior end-to-end neural machine translation was also based on LSTM. ACM credits backpropagation to Rumelhart et al. (1985-86),[RUM] without mentioning Werbos, who proposed it for NNs in 1982.[BP2] And the article[RUM] even failed to mention Linnainmaa, the inventor of this famous algorithm for credit assignment in networks (1970).[BP1] In 1960, Kelley already had a precursor thereof in the field of control theory;[BPA] see also later work of the early 1960s.[BPB][BPC][R7] Rumelhart et al. showed experimentally that the method can yield useful internal representations in hidden layers of NNs.[RUM] But this was essentially just an experimental analysis of a known method.[BP1-2] A history of backpropagation can be found at Scholarpedia[DL2] and in my award-winning survey.[DL1] Also see Sec. XIX, II.

    Some claim that "backpropagation is just the chain rule of Leibniz (1676) & L Hinton[AOI] Rumelhart[RUM] with the "invention" of backpropagation. for "creating" the method and for other things he didn Neither in a popular book[AOI] nor in other recent work[DL3,DL3a] did he cite Linnainmaa (1970),[BP1] the true creator.[BP4-5] that his 2015 survey[DL3] does cite Werbos (1974) who however described the method correctly only later in 1982[BP2] and also failed to cite Linnainmaa.[BP1] Compare the 1967-68 work of Amari:[GD1-3] to my knowledge the first to propose and implement stochastic gradient descent[STO51-52] reverse mode gradient descent method now known as backpropagation[BP1]); see also Tsypkin Linnainmaa It wasn one person who published first[BP1] and therefore should get the credit. Boltzmann Machine (BM)[BM] a learning.[HIN] Recently, however, I learnt through a reader that even the BM paper[BM] did not cite prior relevant work by Sherrington & Kirkpatrick[SK75] and Glauber.[G63] (Compare related work.[H86][H88][S93]) multilayer perceptrons with arbitrarily many layers.[DEEP1-2][HIN] Sec. II V &

As mentioned in Sec. II, Sejnowski's version of deep learning history claims that researchers "took a fresh look at the problem in the 1980s."[S20] However, the 1969 book[M69] addressed a "deep learning problem" (a limitation of Gauss & Legendre's shallow learning from around 1800) that had already been solved, and deep learning research continued also in the 1970s, especially outside of the Anglosphere.[DEEP2][GD1-3][CNN1][DL1-2] Dropout is actually a variant of Hanson's stochastic delta rule. And unsupervised pre-training was unnecessary for winning vision contests, as we showed already in 2011 in a contest where LeCun's team participated with much worse results; see Sec. D above. Back then, the only really decisive improvement was the tremendous speedup of deep CNNs through GPUs.[GPUCNN1,3,5][R6] Already before ImageNet 2012,[R6] our DanNet had a monopoly on winning computer vision competitions.[GPUCNN5] It more than "halved the error rate for object recognition" (ACM's words) well before AlexNet; see Sec. D. Speech recognition relied on hybrid approaches since the late 1980s.[BW][BRI][BOU] Our alternative was LSTM (1990s-2005)[LSTM0-6] and CTC[CTC] (2006), which were applied to speech in 2007.[LSTM4][LSTM14] CTC-LSTM is end-to-end-neural and thus very different from (and superior to) the hybrid methods since the late 1980s.[BW][BRI][BOU][HYB12] See also Sec. A. ACM credits Bengio for a neural probabilistic language model; but 5 years earlier, in 1995, we already had a similar, excellent neural probabilistic text model,[SNT] which Bengio[NPM] characterizes only briefly as "related" (see also Pollack's earlier work). Bengio's attention work helped to further improve Facebook's machine translation; however, attention-based Transformers[TR1-6] are rooted in my FWP of 1991,[FWP0-1] with fast weight changes through additive outer products of self-invented activation patterns (now often called keys and values for self-attention).[TR1-6][FWP] Transformers[TR1-2] now dominate a traditional LSTM domain (see Sec. B), although LSTM can rapidly learn to solve certain quickly solvable problems[LSTM13,17] that plain Transformers cannot; compare the linear Transformers or Performers[TR5-6] which are formally equivalent to my 1991 FWPs (apart from normalization).[FWP6][FWP] In 1993, I introduced the attention terminology now used in this context,[ATT] and RNNs that program themselves. Bengio was the reviewer of my 1990 paper[ATT2] yet later published related ideas as his own work.[ATT3] GANs[GAN0-1] (2010-2014) are actually instances of my Artificial Curiosity principle from 1990[AC90,90b][AC20] (see also surveys[AC09-10]). This principle is now widely used for exploration in RL (e.g., Sec. C) and for image synthesis[GAN1] (also mentioned by ACM in Sec. XVIII). The predictor NN minimizes its error, while the generator NN tries to make outputs that maximize this error: one net's loss is the other net's gain. (The early adversarial machine learning settings[S59][H90] neither involved unsupervised NNs nor were about modeling data nor used gradient descent.[AC20]) Bengio et al. neither cited the original work[AC90,90b][AC20] nor corrected their erroneous claims[GAN1] about my Predictability Minimization (PM). As reported by Bloomberg,[AV1] I asked them to correct their NIPS 2014 paper[GAN1] and some of the erroneous claims it made about my prior work.[AC20] Goodfellow eventually admitted that PM is adversarial (his paper[GAN1] still claims the opposite), but emphasized differences. When the authors[GAN1] did not publish a correction, I published one myself in the hopes of correcting the annals of history:[AC20] GANs are instances of my earlier work.[R2][AC20] The vanishing gradient priority dispute was settled in favor of Sepp.[VAN1] However, even after a common publication,[VAN3] Bengio published papers[VAN4][XAV] that fail to cite it. Citation counts are poor indicators of truly pioneering work.[NAT1] (Margin note: Bengio makes related claims about his citation impact.[YB20]) If one unintentionally rederives something, one must at least clarify it later.[DLC] Bengio also claims[YB20] that in 1995 he pioneered ideas that actually date back to 1991-93,[UN0-2][UN] and similar claims concern metalearning, which I started in 1987[META1][META] long before Bengio, who asserted at NeurIPS 2019 that he did it before me.[R3] Bengio also writes[YB20] in this vein about other topics. Bengio has also heavily used our LSTM (see Sec. A-C), renaming a variant of our vanilla LSTM architecture[LSTM2] (2000) "gated recurrent units (GRU)"[LSTMGRU] without citing our work,[LSTM2] which was the one that introduced gated recurrent units. In addition, our team automatically evolved lots of additional LSTM variants and topologies already in 2009[LSTM7] without changing the name of the basic method.
(GRUs can neither learn to count[LSTMGRU2] nor learn simple non-regular languages;[LSTMGRU2] they also perform worse than LSTM according to Google Brain.[LSTMGRU3]) Hinton's unsupervised pre-training of deep NNs (2006) was preceded by my own work on this[UN0-2] (see Sec. II above).[UN] It was published in 1991-92,[UN1] when compute was about 1000 times more expensive than in 2006; his survey (2015)[DL3][DLC] did not cite it either. See also Sec. II & III. Hinton's related work[DIST2] (2006) did not cite my much earlier original work on this (1991),[UN1][UN] not even in his later patent application. And when Hinton published attention-like mechanisms[ATT3] (2010), he did not cite mine, although he was both reviewer and editor of my summary[ATT2] (1990; see Sec. XVI above).

The ten priority disputes mentioned in the present Sec. XVII are not the only ones.[R4] Remarkably, three of them are related to the 1991 paper[UN1][UN] which in many ways started what people now call deep learning. Most of them go back to work of 1990-91.[MIR] See Sec. I for additional related issues of credit assignment. As for LeCun: all of this happened before LeCun's team entered one of the 2011 contests (with three times worse performance).[DAN1] Again see Sec. D. Baldi and Chauvin (1993) had the first application of CNNs with backpropagation to biomedical/biometric images.[BA93] Our DanNet won the medical imaging contest (Sept 2012, on detection of mitosis/cancer)[GPUCNN5,7,8] (before the similar AlexNet won ImageNet 2012[GPUCNN5][R6] and the similar VGG network[GPUCNN9] won ImageNet 2014). Deep learning entered medical imaging through mitosis detection;[MGC][GPUCNN5,7,8] many major companies are using it now. See Sec. D & VII. ACM also explicitly mentions speech recognition and speech synthesis,[AM16][DL1] addressed in Sec. A, B, VI, XI. LeCun's contributions rely on backpropagation, whose origins he failed to cite even in recent work.[DL3,DL3a][DLC] In 1960, Kelley already had a precursor of the algorithm.[BPA] Furthermore, many besides LeCun have worked "to speed up backpropagation algorithms"[DL1] (ACM's words). And "hierarchical feature representation" in deep learning networks is what Ivakhnenko & Lapa (1965)[DEEP1-2] and Amari[GD1-2] (and also Fukushima[CNN1][DL2]) had long before LeCun. See Sec. D & II & XIII & V. LeCun et al. neither cited the origins[BP1] (1970) of this widely used type of automatic differentiation for differentiable networks of modules[DL2][BP4-5][DLC] nor earlier work on such systems.[S80] See also Sec. XIX & XII. Others published related ideas before LeCun, who did not cite them. See also Pollack's earlier work.

    (Furthermore, "complex networks of modules where backpropagation is performed" were the central theme of my much earlier habilitation thesis (1993).[UN2] For example, our see "100 Authors against Einstein."[AH1] "If you cannot dispute a fact-based message, attack the messenger himself."[HIN] Science has a well-established way of dealing with plagiarism (which may be unintentional[PLAG1][CONN21] or not[FAKE2]) award can ever change that.[HIN] and their co-workers have contributed useful improvements of deep learning methods.[CNN2,4][CDI][LAN][RMSP][XAV][ATT14][CAPS] whom they did not cite, in contrast to ACM II, V, XII, XIX, XXI, XIII, XIV, XI, and XX, and 2). Sec. I, A, B, C, D, XVII, VI, and XVI). to self-correction,"[SV20] as is already the standard in other scientific fields. in popular science venues without peer review? For example, the narrator of a popular 2018 Bloomberg video[VID2] Germany and Switzerland (LSTM & CTC; see Sec. A) long before Hinton Google on Google Translate[WU] mentions LSTM over 50 times (see Sec. B). In ad hominem style,[AH2-3] claiming credit he doesn LeCun also called the GANs of Bengio of my work in 1990.[AC90,90b][AC20][R2] According to Bloomberg,[AV2] Bengio has simply "denied my claims" without backing up his denial by any facts; see Sec. XVII. and forcefully contradict public figures who promote it."[FAKE] Our LSTM paper[LSTM1] has got more citations than any paper by Bengio or LeCun,[R5] Hinton deep NNs (2010)[MLP1] [UN][UN0-3] and later championed by Hinton;[UN4][VID1] see Sec. D). Hinton (2012)[GPUCNN4] characterizes AlexNet won one;[R6] see Sec. D, XIV. The highly cited VGG network (2014)[GPUCNN9] Hinton of Hinton for a book by Rumelhart & McClelland[R5]). method[BP1] whose origins of Ivakhnenko whom he has never cited;[DEEP1-2][R7-R8] see Sec. II, XIII. Bengio (1990)[AC90,90b][AC20][R2] which he did not cite; see Sec. XVII. Hinton were preceded by Hanson As recently as of 2021, ACM published yet another misleading deep learning "survey" by LBH,[DL3a] again heavily citing LBH without Consult the Executive Summary and Sec. I-XXI of this critique for more. have their conceptual and technical roots in my labs in Munich and Lugano,[MOST] of deep learning MLPs since 1965[DEEP1-2][GD1-2a] (see Sec. II, XX) and backpropagation (1960-70)[BPA][BP1] (see Sec. XIX, XII) and convolutional NNs since 1979[CNN1-4] (see Sec. XVIII, D). Our LSTM (1990s, see Sec. A, B; also for RL, 2003-, see Sec. C) → our Highway Net (May 2015) → ResNet (Dec 2015, see Sec. D). Our adversarial Artificial Curiosity (1990) → GANs (2010s, see Sec. XVII). our own unsupervised pre-training of deep NNs (1991, see Sec. II & III) for recurrent NNs in the 1990s → our LSTM (see Sec. A-C) and for feedforward NNs in 2010 → our DanNet (2011) → AlexNet (2012); VGG Net (2014) (see Sec. D). superior computer vision (2011, see Sec. D, XVIII), speech recognition (with our CTC, 2007-15, see Sec. A), machine translation (2016, see Sec. B), robotics & video game players (2018-19, see Sec. C), Fast Weight Programmers (1991, see Sec. XVI) are formally equivalent to linear Transformers (now popular in NLP). I, A, B, C, D, VII, XVIII. depth that really learned.[DEEP1-2][R8] Soon afterwards, multilayer perceptrons learned internal representations through stochastic gradient descent in Japan.[GD1-2a] A few years later, modern unintentional[PLAG1][CONN21] or intentional.[FAKE2]

    Yes, this critique is also an implicit critique of certain other awards to LBH.[HIN] reddit.com/r/MachineLearning[R1-R12] (the largest machine learning forum with back then over 800k subscribers), many of them influenced by my overview.[MIR]

    Dr. LeCun himself is well aware of the challenges to scientific integrity in our field:[LECP] "... else cites."[LECP] weights and an adaptive output layer.[R62] So Rosenblatt basically had what much later was rebranded as Extreme Learning Machines (ELMs)[ELM1] revisionist narrative of ELMs[ELM2][CONN21] self-proclaimed "deep learning conspiracy"[DLC1-2]

    Note that I am insisting on proper credit assignment not only in my own research field but also in quite disconnected areas,[HIN] as demonstrated by my numerous letters in this regard published in Science and Nature, e.g., on the history of aviation,[NASC1-2] the telephone,[NASC3] the computer,[NASC4-7] resilient robots,[NASC8] and scientists of the 19th century.[NASC9]

    Creative Commons LicenseThanks arXiv page. This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. PDF. PDF. PDF. IEEE link. With a brief summary of the generative adversarial neural networks of 1990[AC90,90b][AC20] Preprint arXiv/1906.04493. ACM Code of Ethics and Professional Conduct. Association for Computing Machinery (ACM), 2018. Quote: Link. Link. Blog of Werner Vogels, CTO of Amazon (Nov 2016): First publication of what was later sometimes called the Hopfield network[AMH2] or Amari-Hopfield Network.[AMH3] The Hopfield network or Amari-Hopfield Network was published in 1972 by Amari.[AMH1] PDF. arXiv/1409.0473, 2014-16. Bloomberg, May 15, 2018. Bloomberg, May 17, 2018. PDF. PDF. Link. PDF. First application of backpropagation[BP1] to NNs (concretizing thoughts in his 1974 thesis). More.[DL2] English version: [CNN1+]. More in Scholarpedia. Link. [CNN1a] A. Waibel. Phoneme Recognition Using Time-Delay Neural Networks. Meeting of IEICE, Tokyo, Japan, 1987. First application of backpropagation[BP1][BP2] and weight-sharing PDF. Spatial Averaging.[CNN1] Spatial Averaging.[CNN1] Since November 2021: Comments on version 1 of the present report[T21v1] in the Connectionists Mailing List, perhaps the oldest mailing list on artificial neural networks. Link to the archive. PDF. Beijing, 2014. Preprint arXiv:1402.3511 [cs.NE]. 1st superhuman result in 2011.[DAN1] [DIST1] J. Schmidhuber, 1991.[UN-UN2] Deep Learning. HTML. [DL3a] Y. Bengio, Y. LeCun, G. Hinton (2021). Turing Lecture: Deep Learning for AI. Communications of the ACM, July 2021. HTML. greatly improved (CTC-based) on-device speech recognition (on the phone, not the server) PDF. Web site deeplearning.net of Y. Bengio Internet Archive), referring to Hinton unsupervised pre-training for deep NNs[UN4] (2006) although II & XVII & III. arxiv:1312.5602. Link. arXiv:1808.03578, 2018. In fact, the ELM concept goes back to Rosenblatt over 4 billion automatic translations per day (The Verge, August 4, 2017); Facebook blog by J.M. Pino, A. Sidorov, N.F. Ayan (August 3, 2017) alternative[FWP0-1] to recurrent NNs. the fast weights[FAST,FASTa] of Such Fast Weight Programmers[FWP0-6,FWPMETA1-7] can learn to memorize past data, e.g., by computing fast weight changes through additive outer products of self-invented activation patterns[FWP0-1] (now often called keys and values for self-attention[TR1-6]). The similar Transformers[TR1-2] combine this with projections linear Transformers or Performers[TR5-6] In 1993, I introduced in this context,[ATT] and RNNs that program themselves. PDF. PDF. Preprint: arXiv:1811.12143. PDF. PDF. Like [FWP0-2]. Preprint: arXiv:2003.08165. PDF. Linear Transformers Are Secretly Fast Weight Programmers. ICML 2021. Preprint: arXiv:2102.11174. Preprint: arXiv:2106.06295 (June 2021). PDF. An introspective network that can learn to run its own weight change algorithm. In Proc. of the Intl. Conf. on Artificial Neural Networks, J. Schmidhuber. Habilitation thesis, TUM, 1993. PDF. Preprint arXiv:2012.14905 [cs.LG], 2020. Report arXiv:2011.07831 [cs.AI], 2020. Probably the first paper on using stochastic gradient descent[STO51-52] reverse mode of automatic differentiation or backpropagation[BP1]). Implementation of Amari Google Research Blog, Sep 2015, see also Aug 2015 Google Alphr Technology, Jul 2015, or 9to5google, Jul 2015 WIRED, Sep 2016, siliconANGLE, Sep 2016 Blog post, Internet Archive, 2010. A blog post describing the basic ideas[AC][AC90, AC90b][AC20] of GANs. 
Description of GANs that does not cite the original work of 1990[AC][AC90,AC90b][AC20][R2] (also containing wrong claims about Predictability Minimization[PM0-2][AC20]). Link. This was number 1 on Hacker News. Frankfurter Allgemeine Zeitung, 16/6/2021. Preprint arXiv/2005.14165. for Image Classification. International Joint Conference on Artificial Intelligence (IJCAI-2011, Barcelona), 2011. PDF. ArXiv preprint. competitor.[DAN1] This led to massive interest from industry. PDF. PDF. North-Holland, 1991. PDF. Extending TR FKI-129-90, TUM, 1990. PDF. PDF. Preprints arXiv:1505.00387 (May 2015) and arXiv:1507.06228 (July 2015). Also at NIPS 2015. The LSTM with forget gates[LSTM2] for RNNs.) Resnets[HW2] are a version of this where the gates are always open: g(x)=t(x)=const=1. Highway Nets perform roughly as well as ResNets[HW2] on ImageNet.[HW3] Highway layers are also often used for natural language processing, where the simpler residual layers do not work as well.[HW3] Link. arXiv:1512.03385 (Dec 2015). Residual nets are a version of Highway Nets[HW1] arxiv:1612.07771 (2016). Also at ICLR 2017. Preprint arXiv:1704.04760 PDF. PDF. arXiv:1607.06450, 2016. A New Publishing Model in Computer Science. 19/5/2021. [LSTM1] S. Hochreiter, J. Schmidhuber. Long Short-Term Memory. Neural Computation, 9(8):1735-1780, 1997. PDF. Based on [LSTM0]. More. PDF. PDF. PDF. PDF. PDF. PDF. PDF. PDF. Preprint: arxiv:1506.07452. PDF. PDF. Preprint arXiv:1805.04908. Architectures. Preprint arXiv:1703.03906 arXiv:2005.05744, 2020. Computation 22(12): 3207-3220, 2010. ArXiv Preprint. By 2010, when compute was 100 times more expensive than today, both our feedforward NNs[MLP1] Preprint arXiv:1611.01578 (PDF), 2017. Correspondence, Nature, vol 483, p 541, March 2012, doi:10.1038/483541b. Letter, Science, vol 336, p 1639, June 2012. See also comment on response by A. Hodges (DOI:10.1126/science.336.6089.1639-a) NY Times article NY Times article Learning Dexterous In-Hand Manipulation. arxiv:1312.5602 (PDF). arxiv:1912.06680. PDF. Link. Based on TR FKI-126-90 (1990).[AC90] PDF. Partially based on TR FKI-126-90 (1990).[AC90] Report arXiv:1210.0118 [cs.AI], 2015. One Big Net For Everything. Preprint arXiv:1802.08864 [cs.AI], Feb 2018. Preprint: arXiv:1809.01999. Github: World Models. minimization. TR CU-CS-565-91, Univ. Colorado at Boulder, 1991. PDF. 1991. PDF. arXiv:1112.5309 [cs.AI] First Experiments with PowerPlay. arXiv:1210.8385 [cs.AI]. [R1] Reddit/ML, 2019. Hinton, LeCun, Bengio receive ACM Turing Award. [R2] Reddit/ML, 2019. J. Schmidhuber really had GANs in 1990. [R3] Reddit/ML, 2019. NeurIPS 2019 Bengio Schmidhuber Meta-Learning Fiasco. [R4] Reddit/ML, 2019. Five major deep learning papers by G. Hinton did not cite similar earlier work by J. Schmidhuber. [R5] Reddit/ML, 2019. The 1997 LSTM paper by Hochreiter & Schmidhuber has become the most cited deep learning research paper of the 20th century. [R6] Reddit/ML, 2019. DanNet, the CUDA CNN of Dan Ciresan in J. Schmidhuber [R7] Reddit/ML, 2019. J. Schmidhuber on Seppo Linnainmaa, inventor of backpropagation in 1970. [R8] Reddit/ML, 2019. J. Schmidhuber on Alexey Ivakhnenko, godfather of deep learning 1965. [R9] Reddit/ML, 2019. We [R11] Reddit/ML, 2020. Schmidhuber: Critique of Honda Prize for Dr. Hinton [R12] Reddit/ML, 2020. J. Schmidhuber: Critique of Turing Award for Drs. Bengio & Hinton & LeCun [R15] Reddit/ML, 2021. J. Schmidhuber Preprint arXiv/1311.2524, Nov 2013. Preprint arXiv/1703.06870, 2017. Link. 
The Past, Present and Future of Artificial Intelligence. PDF. ACM Link. Link. 1992. Based on TR FKI-148-91, TUM, 1991.[UN0] PDF. [UN2] J. Schmidhuber. Habilitation thesis, TUM, 1993. PDF. can be found here (depth > 1000). 2006. PDF. Link. [VAN1] S. Hochreiter. Untersuchungen zu dynamischen neuronalen Netzen. Diploma thesis, TUM, 1991 (advisor J. Schmidhuber). PDF. PDF. [VAN4] Y. Bengio. Neural net language models. Scholarpedia, 3(1):3881, 2008. Link. Link. Youtube video [see 28:16]. But in 2010, our team showed[MLP1-2] Youtube video, 2018. Preprint arXiv:1609.08144 (PDF), 2016. Based on LSTM which it mentions at least 50 times. WWW link (retrieved 15 May 2020). PDF. Menu


    @SchmidhuberAI: This is a point-for-point critique of ACM's laudation of the 2018 A.M. Turing Award[R1] (see the Executive Summary and Sec. I, V, II, XII, XIX, XXI, XIII, XIV, XX, XVII). ACM lauds LBH for contributions to (A) speech recognition, (B) natural language processing, (C) robotics, and (D) computer vision, plus (VII) medicine, astronomy, materials science (see Sec. A, B, C, D, VII, XVII, VI, XVI). The critique also addresses priority disputes with Dr. LeCun (see Sec. II, V, XX, XVIII) and with Dr. Bengio & Dr. Hinton (see Sec. XVII, I). I respond to LBH's claims as follows: Abstract & Outline (~300 words), Introduction (~300 words), Critique of LBH's Turing Lecture, Executive Summary of what's wrong with ACM's laudation, 21 comments on 21 claims by ACM (~8,000 words), Conclusion and Acknowledgments (~2,000 words). All backed up by over 250 references (~9,000 words). The goal is to ensure that "science is self-correcting,"[SV20] no matter whether the corrected claims are mine or other people's, and to fight plagiarism, collusion rings,[LIT21] and systemic academic corruption in all of their more and less subtle forms.[FAKE] Sec. 2 provides the background of this post.[T20a][R12] After the Executive Summary in Sec. 3, Sec. 4 will split ACM's laudation into 21 parts: I, II, III, IV, V, VI, VII, VIII, IX, X, XI, XII, XIII, XIV, XV, XVI, XVII, XVIII, XIX, XX, XXI.

    In 2021, ACM compounded the problem by publishing yet another misleading overview of the field, this time based on LBH's Turing Lecture.[DL3a] LBH claim to "briefly describe the origins of deep learning"[DL3a] without even mentioning the world's first working deep learning networks of Ivakhnenko and Lapa in 1965[DEEP1-2][R8] (see Sec. II), nor our Highway Net, the first really deep feedforward NN[HW1-3] (see Sec. D, VI), nor our LSTM, which brought essentially unlimited depth to gradient-based supervised recurrent NNs[LSTM0-17] and was applied to speech from 2007.[LSTM4,14] LBH cite Hinton (2012) for "dropout" without mentioning that dropout is just a variant of Hanson's much earlier method, and without citing von der Malsburg, who introduced ReLUs in 1973[CMB] (see Sec. XIV, XVIII). They ignore that "hierarchical feature representation" existed already in 1965[DEEP1-2][R8] (see Sec. II), as well as the earlier fast weights of von der Malsburg (1981) and Feldman (1982).[FAST,FASTa-b][FWP] They dedicate an extra section to attention-based Transformers,[TR1-6] citing Bengio's team but not the closely related prior work discussed in Sec. XVI, and LBH's claims about Bengio's contributions ignore our much earlier neural probabilistic model of text[SNT] (see Sec. XVI, XVII-1). In summation, LBH have repeatedly chosen to ignore the previous well-known critiques[DLC][HIN][T20a] and deep learning surveys,[DL1-2] and ACM lauds them regardless (e.g., Sec. I).

    Numerous references can be found under the relevant section links I-XXI, which adhere to the sequential order of ACM's laudation. In brief: Sec. I contains 4 subsections A, B, C, D. Sec. A: Speech Recognition (see also Sec. VI & XI & XV): the first superior end-to-end neural speech recognition predates the cited work of Hinton (2012) and Bengio (see Sec. XV). Sec. B: Natural Language Processing (see also Sec. VI & XI & XVI). Sec. C: Robotics. Sec. D: Computer Vision (see also Sec. XVIII & XIV & XI & VI), with methods developed and applied to speech, all before LeCun's cited work and in contrast to Hinton's pre-training. Sec. XI: ACM mentions GPU-accelerated NNs. Sec. XVIII: the CNNs of Fukushima and Waibel (see Sec. D). Sec. VII: ACM explicitly mentions medicine. Sec. XII & XIX & XXI: modern backpropagation (see also Sec. XIII & II & V and III & IX & X & XX). Sec. XX & XXI: ACM credits LeCun for work whose origins lie elsewhere. Sec. XV: ACM credits Bengio for hybrids of NNs and probabilistic models of sequences (see Sec. A & B). Sec. XVI & XVII: attention, GANs, and other topics.[R2-R6] Then follow the critique of LBH's Turing Lecture and the Conclusion (see also Sec. II & III & V & XII & XIII & XVII & XIV & XIX & XX & XXI). In what follows, my comments track ACM's claims in their sequential order: I, II, III, IV, V, VI, VII, VIII, IX, X, XI, XII, XIII, XIV, XV, XVI, XVII, XVIII, XIX, XX, XXI.
    LBH and their co-workers have contributed certain useful improvements of existing deep learning methods.[CNN2,4][CDI][LAN][RMSP][XAV][ATT14][CAPS] However, the foundations they are credited for were laid by others: deep learning multilayer perceptrons that learned internal representations (1965),[DEEP1-2][R8] backpropagation (1970),[BP1-2][R7] architectures of recurrent NNs (1943-56)[MC43][K56] and convolutional NNs (1979),[CNN1] GPU-accelerated NNs (2004),[GPUNN][DAN][DAN1][GPUCNN5] and transformer-like[TR1-6][FWP] attention.[FWP][ATT] Their failure to cite the deep learning surveys[DL1-2][R2-R8] may explain some of ACM's errors; see Sec. II & III & V & XIII & X & XVII & XII & XVIII & XX. Consider the four application fields of deep learning in academia and industry[DL4] mentioned by ACM (labeled as A, B, C, D) below.

    A. Speech Recognition. (A1) Our LSTM (1990s-2005)[LSTM0-6] overcame the vanishing gradient problem identified and analyzed by my student Sepp Hochreiter in 1991.[VAN1] This happened long before the similar work of Bengio (see Sec. XVII).[MIR] LSTM was refined with my student Felix Gers.[LSTM2] (A2) Connectionist Temporal Classification was developed by my student Alex Graves et al. (2006).[CTC] Our team successfully applied CTC-trained LSTM to speech in 2007[LSTM4] (also with hierarchical LSTM stacks[LSTM14]), unlike the older hybrids of NNs and Hidden Markov Models (HMMs)[BW][BRI][BOU] (Sec. XV). Hinton et al. (2012) still used the old hybrid approach[HYB12] and did not compare it to CTC-LSTM. One of the authors later reused our end-to-end neural speech recognizer[LSTM4][LSTM14] as a postdoc in Hinton's lab. By 2019, CTC-LSTM had dramatically improved Google's on-device speech recognition[GSR19] (on the phone, not any longer on the server). See Sec. VI & XI & XV.

    B. Natural Language Processing. Already in 1995 we had an excellent neural probabilistic model of text[SNT] (see Sec. XVI). In 2001, we showed that LSTM can learn languages unlearnable by traditional models such as HMMs.[LSTM13] See also Sec. VI & XI & XV, and compare the later machine translation work of Bengio's team, see Sec. XVI.

    C. Robotics & RL etc. Since 2003, our team has used LSTM for Reinforcement Learning (RL) and robotics.[LSTM-RL][RPG][LSTMPG] For example, in 2018, a PG-trained LSTM was the core of OpenAI's system for dexterous in-hand robot manipulation. An LSTM was also at the heart of the system that beat a pro player in the game of Starcraft, which is theoretically harder than Chess or Go[DM2] in many ways, and of OpenAI Five, which learned to defeat human experts in the Dota 2 video game (2018).[OAI2] Apart from A, B, C above, LSTM is used for chemistry, molecular design, lip reading, speech synthesis,[AM16] and much more; according to Google's TPU paper,[JOU17] a large fraction of datacenter inference compute was being used for LSTM (only 5% for the CNNs of Sec. D). Apparently the first LSTM journal paper[LSTM1][R5] is now the most frequently cited deep learning research paper of the 20th century.

    D. Computer Vision. ACM credits LeCun for a particular feedforward NN called the convolutional NN (CNN).[CNN1-4] But the basic CNN architecture with convolutional and downsampling layers is due to Fukushima (1979),[CNN1] and the popular downsampling variant called max-pooling was introduced by Weng et al. (1993)[CNN3] (a toy sketch of these two building blocks follows below). Finally, my own team showed in 2010[MLP1] that plain backpropagation suffices to train deep NNs, contrary to claims by Hinton[VID1] who said that "nobody in their right mind would ever suggest" this. Then we built GPU-based CNNs far faster than the GPU-accelerated CNNs of 2006.[GPUCNN] Our DanNet won 4 important computer vision competitions in a row (15 May 2011, 6 Aug 2011, 1 Mar 2012, 10 Sep 2012)[GPUCNN5] before the similar GPU-CNN of Hinton's team won one; compare our CVPR paper on DanNet.[GPUCNN3] Our CNN image scanners were 1000 times faster than previous methods.[SCAN] The VGG network (ImageNet 2014 winner)[GPUCNN9] and other highly cited CNNs[RCNN1-3] built on this work, as did ResNet, the ImageNet 2015 winner[HW2] (Dec 2015), a version of our Highway Net. See also Sec. XVIII & XIV & XI & VI.

    Remember that NNs as such were proposed already in the 1940s/50s[MC43][K56] (but those early nets didn't learn), that a deep convolutional NN architecture was proposed in the 1970s,[CNN1] and that NNs without hidden layers learned in 1958[R58] (essentially linear regression; the method of least squares goes back to Gauss & Legendre[DL1-2]), with ideas about deeper adaptive NNs appearing in the early 1960s.[R61,R62] See Sec. XIII & III & V & VIII & IX & X. A frequently repeated but misleading narrative of this history has been promulgated by LBH & co-authors, e.g., Sejnowski[S20] (see Sec. XIII).
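    To make the architectural point of Sec. D concrete, here is a minimal illustrative sketch of the two building blocks discussed above: a convolutional layer (the architecture credited to Fukushima, 1979[CNN1]) followed by max-pooling downsampling (the variant credited to Weng et al., 1993[CNN3]). This is a toy NumPy illustration, not any of the cited systems; all names, shapes, and values are made up.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution: slide the kernel over the image."""
    H, W = image.shape
    kH, kW = kernel.shape
    out = np.zeros((H - kH + 1, W - kW + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kH, j:j+kW] * kernel)
    return out

def max_pool(fmap, size=2):
    """Max-pooling downsampling: keep the strongest response per patch."""
    H, W = fmap.shape
    out = np.zeros((H // size, W // size))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = fmap[i*size:(i+1)*size, j*size:(j+1)*size].max()
    return out

# Toy usage: one feature map from a random "image" and an edge-like kernel.
rng = np.random.default_rng(0)
img = rng.random((8, 8))
kernel = np.array([[1.0, -1.0], [1.0, -1.0]])  # crude vertical-edge detector
fmap = np.maximum(conv2d(img, kernel), 0.0)    # rectification of responses
pooled = max_pool(fmap, 2)
print(pooled.shape)  # (3, 3): convolution, then downsampling
```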
    Sejnowski's version[S20] goes more or less like this: "In 1969, Minsky & Papert[M69] showed that shallow NNs without hidden layers are very limited, and the field was abandoned until a new generation of neural network researchers took a fresh look at the problem in the 1980s."[S20] However, as mentioned above, the 1969 book[M69] addressed a "problem" of Gauss & Legendre's shallow learning that had already been overcome by the deep learning networks of Ivakhnenko & Lapa (but see a 1989 paper[MOZ]). See Sec. 1 of the overview.[MIR] By the early 1990s, our methods could handle "Very Deep Learning" tasks of depth > 1000[UN2][DL1][UN] (by 2003, LSTM variants successfully dealt with language problems of depth up to 30,000[LSTM17]); a toy illustration of why such depth is hard follows below. See Sec. III. Note that none of this is acknowledged in ACM's laudation (see Sec. III),[DLC][DEEP1-2][BP1][DL1-2][R7-R8][R2-R4] which also ignores deep learning multilayer perceptrons (1965),[DEEP1-2][R8] backpropagation (1970),[BP1,2][R7] architectures of recurrent NNs (1943-56)[MC43][K56] and convolutional NNs (1979),[CNN1] GPU-accelerated NNs (2004),[GPUNN][GPUCNN5] and other foundations.[DL1-2][R2-R8] See Sec. II & V & XIII & IX & X & XVII & XII & XVIII & XX & I. Compare the web site deeplearning.net, which until 2019 advertised deep learning as "moving beyond shallow machine learning since 2006",[DL7] referring to Hinton's and Bengio's unsupervised pre-training methods of 2006, not to mention Ivakhnenko's much earlier work, which Hinton,[UN4] Bengio,[UN5] and LBH[DL3,DL3a] did not cite either. See Sec. X. Below, my comments systematically track the sequential order of ACM's claims.
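    The "depth" figures above can be made tangible. Hochreiter's 1991 analysis[VAN1] showed that error signals backpropagated through many nonlinear stages are repeatedly multiplied by Jacobians and therefore tend to vanish (or explode) exponentially with depth. Below is a hedged toy demonstration of that effect, not the 1991 analysis itself; network size, weight scale, and step count are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20
# Small recurrent weights: backpropagated signals shrink geometrically.
# With substantially larger weights they would explode instead.
W = rng.normal(0, 0.5 / np.sqrt(n), size=(n, n))

def backprop_norms(W, steps=100):
    """Norm of an error signal pushed back through `steps` tanh time steps."""
    h = np.zeros(n)
    delta = np.ones(n)                    # initial error signal
    norms = []
    for _ in range(steps):
        h = np.tanh(W @ h + rng.normal(0, 1, n))
        # Jacobian of one tanh step is diag(1 - h^2) @ W; backprop uses its
        # transpose: delta <- W^T diag(1 - h^2) delta.
        delta = (W.T * (1 - h**2)) @ delta
        norms.append(np.linalg.norm(delta))
    return norms

norms = backprop_norms(W)
print(norms[0], norms[9], norms[-1])      # decays toward 0 exponentially fast
```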

    ACM's account of the field's history is also misleading. Much of early AI in the 1940s-70s was actually about theorem proving,[ZU48][NS56] and the formal foundations predate the field. In 1936, Turing introduced the Turing Machine[TUR] and rederived the above-mentioned result.[CHU][TUR][HIN][GOD21,21a][TUR21][LEI21,21a] (ACM has not reacted to such corrections, without suggesting any fact-based corrections of its own.[HIN]) In the same year of 1936, Emil Post published yet another independent universal model of computing.[POS] Gödel himself identified the now famous open problem "P=NP?" in his letter to John von Neumann (1956).[GOD56][URQ10] Zuse's patent application of 1936[ZU36-38][Z36][RO98][ZUS21] predates Claude Shannon's 1937 thesis on digital circuit design; Zuse also created the first high-level programming language in the early 1940s,[BAU][KNU] although his early hardware lacked an explicit conditional jump instruction.[RO98]

    As mentioned above, ACM credits LBH for foundations laid by others: deep learning multilayer perceptrons that learn internal representations (1965),[DEEP1-2][R8] backpropagation (1970),[BP1,2][R7] architectures of recurrent NNs (1943-56)[MC43][K56] and convolutional NNs (1979),[CNN1] GPU-accelerated NNs (2004),[GPUNN][GPUCNN5] deep-learning records through plain backpropagation (2010),[MLP1-2] transformer-like[TR1-6][FWP] attention,[FWP][ATT] and more.[DL1-2][R2-R8] See Sec. II & I & III & XIII & X & XVII & XII & XVIII & XX. Superior computer vision results were achieved by our group in 2010-2011,[MLP1-2][DAN][DAN1][GPUCNN5][R6] and our DanNet won the first medical imaging contest won by a deep net (Sept 2012, on cancer detection).[GPUCNN5,8] We were also able to greatly improve steel defect detection.[ST] All of this happened before the similar GPU-accelerated AlexNet of Hinton's team, and our approach to mitosis detection[MGC][GPUCNN5,8] is now widely applied (see Sec. D & XI). ACM nevertheless repeats LBH's claims without citing the originators.[DL1][DLC][HIN][R2-R4][R7-R8] See Sec. V & XII & XIX & II & III & XIII & XVII & X & I, and compare the earlier work.[HIN][DLC][DL1-2][DEEP1-2][CMB][R7-R8] See Sec. II & III & XIII & V & X & XIV & I.

    The expression "deep learning" itself was first introduced to Machine Learning by Dechter (1986), and to NNs by Aizenberg et al (2000).[DL2] To my knowledge, LBH have never cited them. (Margin note: our 2005 paper on deep RL[DL6,6a] was perhaps the first machine learning publication with the phrase "learn deep" in the title.) Only after 2006 did LBH start talking about "deep learning ... moving beyond shallow machine learning since 2006",[DL7] referring to their unsupervised pre-training methods of 2006. See Sec. III, and Sec. II & III & XIII & V & I. All of this earlier work was ignored by LBH (see Sec. V & II & III & I & XIII & XII & XIX & X & XVII).

    ACM correctly mentions advancements through GPUs. But the first to use GPUs for NNs were Jung & Oh (2004),[GPUNN][GPUCNN5] not LBH. By 2010, our GPU-trained deep MLPs had set an important benchmark record,[MLP1-2] showing that plain backpropagation suffices to train deep NNs, contrary to Hinton's claims. In 2011, our GPU-based DanNet dominated computer vision (explicitly mentioned by ACM) for the first time[R6] (see Sec. D). The speech recognition applications (explicitly mentioned by ACM) were actually dominated by the LSTM and CTC of our team.[LSTM1-4][CTC] In particular, as mentioned in Sec. A, such end-to-end systems were superior to hybrids of NNs and traditional models such as HMMs.[BW][BOU][BRI][HYB12] As mentioned in Sec. B and XVI, the first superior end-to-end neural machine translation was also based on LSTM.

    ACM credits backpropagation to Rumelhart et al. (1985-86),[RUM] although Werbos had already applied it to NNs (1982),[BP2] and the article[RUM] even failed to mention Linnainmaa, the inventor of this famous algorithm for credit assignment in networks (1970).[BP1] Kelley already had a precursor thereof in the field of control theory (1960);[BPA] see also later work of the early 1960s.[BPB][BPC][R7] Rumelhart et al. demonstrated that backpropagation can yield useful internal representations in hidden layers of NNs,[RUM] but this was essentially just an experimental analysis of a known method.[BP1-2] A detailed history of backpropagation can be found at Scholarpedia[DL2] and in my award-winning survey.[DL1] Also see Sec. XIX, II.

    Some claim that "backpropagation is just the chain rule of Leibniz (1676) & L'Hôpital (1696)." No: it is the efficient way of applying the chain rule to big networks of differentiable nodes; there are also many inefficient ways of doing this (a sketch of the efficient reverse-mode scheme follows below). Hinton's popular accounts[AOI] credit Rumelhart[RUM] with the "invention" of backpropagation, praising him for "creating" the method and for other things he didn't do. Neither in a popular book[AOI] nor in other recent work[DL3,DL3a] did he cite Linnainmaa (1970),[BP1] the true creator.[BP4-5] It is true that his 2015 survey[DL3] does cite Werbos (1974), who however described the method correctly only later in 1982[BP2] and also failed to cite Linnainmaa[BP1] (compare Amari's related work). It wasn't a collective achievement without clear priority: one person published first[BP1] and therefore should get the credit. Hinton is also frequently credited for the Boltzmann Machine (BM)[BM] and its learning procedure.[HIN] Recently, however, I learnt through a reader that even the BM paper[BM] did not cite prior relevant work by Sherrington & Kirkpatrick[SK75] and Glauber[G63] (compare related work[H86][H88][S93]), nor the earlier multilayer perceptrons with arbitrarily many layers.[DEEP1-2][HIN] See Sec. II & V.
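    The distinction drawn above can be illustrated in code. Reverse-mode automatic differentiation (Linnainmaa, 1970[BP1]) propagates a single gradient sweep backwards through the computation graph, obtaining all partial derivatives at roughly the cost of one forward pass. This is a minimal illustrative sketch of the idea, not Linnainmaa's original formulation; class and variable names are made up.

```python
import math

class Var:
    """Scalar node in a computation graph with reverse-mode autodiff."""
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents      # list of (parent_node, local_gradient)
        self.grad = 0.0

    def __mul__(self, other):
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def __add__(self, other):
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def tanh(self):
        t = math.tanh(self.value)
        return Var(t, [(self, 1.0 - t * t)])

    def backward(self, seed=1.0):
        # One backward sweep distributes the output gradient, via the chain
        # rule, to every input (for big shared graphs a topological ordering
        # is preferable, but this suffices for a toy example).
        self.grad += seed
        for parent, local in self.parents:
            parent.backward(seed * local)

# y = tanh(w*x + b); a single reverse sweep yields dy/dw, dy/dx, dy/db at once.
w, x, b = Var(0.5), Var(2.0), Var(-1.0)
y = (w * x + b).tanh()
y.backward()
print(y.value, w.grad, x.grad, b.grad)   # 0.0 2.0 0.5 1.0
```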

    As mentioned in Sec. II, Sejnowski's history claims that researchers only "took a fresh look at the problem in the 1980s."[S20] However, the 1969 book[M69] addressed a "deep learning problem" (a limitation of Gauss & Legendre's shallow learning) that had already been overcome, and deep learning research was alive and kicking also in the 1970s, especially outside of the Anglosphere.[DEEP2][BP6][CNN1][DL1-2]

    Dropout is actually a variant of Hanson's earlier method, and it was not what made deep CNNs win, as we showed already in 2011 in a contest where LeCun's team also competed; see Sec. D above. Back then, the only really decisive factor was the speed of deep CNNs on GPUs.[GPUCNN1,3,5][R6] Already before ImageNet 2012,[R6] our DanNet had a monopoly on winning computer vision competitions.[GPUCNN5] It more than "halved the error rate for object recognition" (ACM's words) long before the competition mentioned by ACM; see Sec. D. Hybrids of NNs and HMMs for speech, in turn, date back to the late 1980s,[BW][BRI][BOU] while the breakthrough came with our LSTM (1990s-2005)[LSTM0-6] and CTC[CTC] (2006), which were applied to speech in 2007.[LSTM4][LSTM14] CTC-LSTM is end-to-end-neural and thus very different from (and superior to) the hybrid methods of the late 1980s.[BW][BRI][BOU][HYB12] See also Sec. A.

    ACM credits Bengio for a neural probabilistic language model, but 5 years earlier, in 1995, we already had a similar, excellent neural probabilistic text model,[SNT] which Bengio[NPM] characterizes only briefly as "related" (see also Pollack's even earlier work). Our LSTM, for example, helped to further improve Facebook's machine translation (see Sec. B). ACM also associates attention-based Transformers[TR1-6] with Bengio's lab. However, my FWP of 1991[FWP0-1] already computed fast weight changes through additive outer products of activation patterns (now often called keys and values for self-attention).[TR1-6][FWP] Transformers[TR1-2] combine this with projections and softmax and have entered natural language processing, a traditional LSTM domain (see Sec. B), although certain tasks that LSTM can rapidly learn to solve remain hard for them.[LSTM13,17] The linear Transformers or Performers[TR5-6] are formally equivalent to my 1991 FWPs, apart from normalization[FWP6][FWP] (see the code sketch below). In 1993, I already introduced attention terminology in this context,[ATT] and RNNs that program themselves. Hinton was the reviewer of my 1990 paper on this[ATT2] and soon afterwards published his own work on the topic.[ATT3]

    GANs[GAN0-1] (2010-2014) are actually instances of my adversarial Artificial Curiosity principle from 1990[AC90,90b][AC20] (see also surveys[AC09-10]). This principle is now widely used for exploration in RL (e.g., Sec. C) and for image synthesis[GAN1] (also mentioned by ACM in Sec. XVIII). The predictor NN minimizes its error, while the generator NN tries to make outputs that maximize this error: one net's loss is the other net's gain. (The much earlier adversarial machine learning settings[S59][H90] neither involved unsupervised NNs nor were about modeling data nor used gradient descent.[AC20]) Bengio et al. neither cited the original work[AC90,90b][AC20] nor corrected their erroneous claims[GAN1] about my Predictability Minimization. According to Bloomberg,[AV1] their NIPS 2014 paper[GAN1] and some of the erroneous claims it made about my prior work[AC20] became a public issue. Goodfellow eventually admitted that PM is adversarial (his paper[GAN1] still claims the opposite), but emphasized supposed differences. When the authors[GAN1] published no correction, I published one myself in the hopes of correcting the annals of history,[AC20] for GANs and their variants are instances of my earlier work.[R2][AC20]

    The priority dispute over the vanishing gradient analysis was settled in favor of Sepp.[VAN1] However, even after a common publication,[VAN3] Bengio published papers[VAN4][XAV] that ignore it, and citation counts of such papers are poor indicators of truly pioneering work.[NAT1] (Margin note: Bengio makes additional claims[YB20] about insights of 2018 and pioneering work of 1995; if one unintentionally rederives what was published long before, one must at least clarify it later.[DLC] The antecedents of the 1995 work date back to 1991-93,[UN0-2][UN] part of the meta-learning line of research which I started in 1987,[META1][META] long before Bengio, who nevertheless asserted at NeurIPS 2019 that he did it before me.[R3]) Bengio has also heavily used our LSTM (see Sec. A-C), adopting the name "gated recurrent units (GRU)"[LSTMGRU] for a variant of our vanilla LSTM architecture[LSTM2] (2000), which he did not cite, although our work[LSTM2] was the one that introduced gated recurrent units. In addition, our team automatically evolved lots of additional LSTM variants and topologies already in 2009[LSTM7] without changing the name of the basic method.
    (However, GRU variants can neither learn to count[LSTMGRU2] nor learn simple non-regular languages;[LSTMGRU2] they also perform worse in machine translation according to Google Brain.[LSTMGRU3])

    ACM credits Hinton for unsupervised pre-training of deep NNs, ignoring my own original work on this[UN0-2] (see Sec. II above).[UN] It was published in 1991-92,[UN1] when compute was about 1000 times more expensive than in 2006, and his survey (2015)[DL3][DLC] does not clarify this; see also Sec. II & III. Similarly, Hinton's work related to distillation[DIST2] (2006) did not cite my much earlier original work on this (1991),[UN1][UN] not even in his later patent application. And regarding Hinton's work on attention[ATT3] (2010): he was both reviewer and editor of my summary[ATT2] (1990; see Sec. XVI above).
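    The formal equivalence claimed above[FWP6] between the 1991 Fast Weight Programmers[FWP0-1] and (unnormalized) linear Transformers[TR5-6] is easy to state in code: "key" and "value" patterns program another network's weights via additive outer products, and retrieval is a matrix-vector product. The following is a hedged toy sketch with made-up dimensions and names, not the 1991 system itself.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 8                                  # dimensionality of patterns
W_fast = np.zeros((d, d))              # fast weights, programmed at run time

def write(key, value):
    """Fast weight change: additive outer product of two patterns."""
    global W_fast
    W_fast += np.outer(value, key)

def read(query):
    """Retrieval: apply the fast weight matrix to a query pattern."""
    return W_fast @ query

# Store two key->value associations, then retrieve by key.
k1, v1 = rng.normal(size=d), rng.normal(size=d)
k2, v2 = rng.normal(size=d), rng.normal(size=d)
write(k1, v1)
write(k2, v2)
# Linear-attention-style readout: v1 scaled by (k1.k1), plus crosstalk from k2.
print(np.allclose(read(k1), v1 * (k1 @ k1) + v2 * (k2 @ k1)))  # True
```

With roughly orthogonal keys the crosstalk term is small, which is why this outer-product memory behaves like unnormalized self-attention.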

    The ten priority disputes mentioned in the present Sec. XVII are not the only ones.[R4] Remarkably, three of them are related to the 1991 paper[UN1][UN] which in many ways started what people now call deep learning, going beyond prior art. Most of the disputes go back to work of 1990-91.[MIR] See Sec. I for additional related issues of credit assignment.

    Similar points apply to ACM's laudation of LeCun. All of the computer vision results above happened before comparable results of LeCun's team, whose GPU-CNN showed three times worse performance in our 2011 competition.[DAN1] Again see Sec. D. Baldi and Chauvin (1993) had the first application of CNNs with backpropagation to biomedical/biometric images.[BA93] Our DanNet won the first deep learning medical imaging contest (Sept 2012, on detection of mitosis/cancer)[GPUCNN5,7,8] before the similar AlexNet won ImageNet 2012[GPUCNN5][R6] and the similar VGG network[GPUCNN9] won ImageNet 2014. This approach to mitosis detection[MGC][GPUCNN5,7,8] has spread widely; many major companies are using it now. See Sec. D & VII. ACM also explicitly mentions speech recognition and speech synthesis,[AM16][DL1] discussed in Sec. A, B, VI, XI, whose origins are not acknowledged in LBH's recent work.[DL3,DL3a][DLC] In 1960, Kelley already had a precursor of the backpropagation algorithm,[BPA] and many besides LeCun have worked "to speed up backpropagation algorithms"[DL1] (ACM's words). However, "hierarchical feature representation" in deep learning networks is what Ivakhnenko & Lapa (1965)[DEEP1-2] and Amari[GD1-2] (and also Fukushima[CNN1][DL2]) had long before LeCun. See Sec. D & II & XIII & V. LeCun et al. neither cited the origins[BP1] (1970) of this widely used type of automatic differentiation for differentiable networks of modules[DL2][BP4-5][DLC] nor early work on such systems.[S80] See also Sec. XIX & XII. Such systems existed before LeCun, who did not cite them. See also Pollack's closely related earlier work.

    (Furthermore, "complex networks of modules where backpropagation is performed" were the central theme of my much earlier habilitation thesis (1993).[UN2])

    Attempts to dismiss this critique by the sheer number of its critics recall "100 Authors against Einstein"[AH1] and the old adage: "If you cannot dispute a fact-based message, attack the messenger himself."[HIN] Science has a well-established way of dealing with plagiarism, which may be unintentional[PLAG1][CONN21] or not,[FAKE2] and the facts will remain; no award can ever change that.[HIN] LBH and their co-workers have contributed useful improvements of deep learning methods,[CNN2,4][CDI][LAN][RMSP][XAV][ATT14][CAPS] but they built on the work of pioneers whom they did not cite, in contrast to ACM's own Code of Ethics (see 1) Sec. II, V, XII, XIX, XXI, XIII, XIV, XI, and XX, and 2) Sec. I, A, B, C, D, XVII, VI, and XVI). Our field must commit "to self-correction,"[SV20] as is already the standard in other scientific fields. And how much of the popular narrative rests on claims made in popular science venues without peer review? For example, the narrator of a popular 2018 Bloomberg video[VID2] credited Hinton with a speech recognition revolution that was actually achieved in Germany and Switzerland (LSTM & CTC; see Sec. A) long before Hinton's work; Google's paper on Google Translate[WU] mentions LSTM over 50 times (see Sec. B). Some responses have come in ad hominem style,[AH2-3] claiming credit that isn't deserved. LeCun also praised the GANs of Bengio's team without mentioning that GANs are instances of my work of 1990.[AC90,90b][AC20][R2] According to Bloomberg,[AV2] Bengio has simply "denied my claims" without backing up his denial by any facts; see Sec. XVII. But one should expose such false narratives "and forcefully contradict public figures who promote it."[FAKE]

    Our LSTM paper[LSTM1] has received more citations than any paper by Bengio or LeCun.[R5] (Margin notes: our plain GPU-based deep NNs (2010)[MLP1] showed that the unsupervised pre-training[UN][UN0-3] pioneered in my lab and later championed by Hinton[UN4][VID1] is not necessary; see Sec. D. Hinton's AlexNet paper (2012)[GPUCNN4] characterizes its result as a breakthrough, although our DanNet had already won four contests before AlexNet won one;[R6] see Sec. D, XIV. The highly cited VGG network (2014)[GPUCNN9] followed the same path. One of Hinton's most cited works is a chapter for a book by Rumelhart & McClelland,[R5] on a method[BP1] whose origins he did not cite, just as he has never cited Ivakhnenko;[DEEP1-2][R7-R8] see Sec. II, XIII. The GANs of Bengio's team are instances of my Artificial Curiosity (1990),[AC90,90b][AC20][R2] which he did not cite; see Sec. XVII. And Hinton's dropout variants were preceded by Hanson's work.) As recently as 2021, ACM published yet another misleading deep learning "survey" by LBH,[DL3a] again heavily citing LBH without citing the pioneers. Consult the Executive Summary and Sec. I-XXI of this critique for more.

    The most popular NNs of today have their conceptual and technical roots in my labs in Munich and Lugano,[MOST] building on the foundations of deep learning MLPs since 1965[DEEP1-2][GD1-2a] (see Sec. II, XX), backpropagation (1960-70)[BPA][BP1] (see Sec. XIX, XII), and convolutional NNs since 1979[CNN1-4] (see Sec. XVIII, D). Our LSTM (1990s, see Sec. A, B; also for RL, 2003-, see Sec. C) → our Highway Net (May 2015) → ResNet (Dec 2015, see Sec. D). Our adversarial Artificial Curiosity (1990) → GANs (2010s, see Sec. XVII; a minimal sketch of the adversarial principle follows below). Our own unsupervised pre-training of deep NNs (1991, see Sec. II & III) was superseded: for recurrent NNs in the 1990s → our LSTM (see Sec. A-C), and for feedforward NNs in 2010 → our DanNet (2011) → AlexNet (2012) and VGG Net (2014) (see Sec. D). These lines of work enabled superior computer vision (2011, see Sec. D, XVIII), speech recognition (with our CTC, 2007-15, see Sec. A), machine translation (2016, see Sec. B), and robotics & video game players (2018-19, see Sec. C). Our Fast Weight Programmers (1991, see Sec. XVI) are formally equivalent to linear Transformers (now popular in NLP). See Sec. I, A, B, C, D, VII, XVIII. Deep learning did not start with LBH: already in 1965, Ivakhnenko & Lapa had the first working networks of arbitrary depth that really learned.[DEEP1-2][R8] Soon afterwards, multilayer perceptrons learned internal representations through stochastic gradient descent in Japan.[GD1-2a] A few years later, modern backpropagation was published.[BP1]
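    As promised above, here is a deliberately minimal sketch of the adversarial principle of 1990:[AC90,90b][AC20] a generator and a predictor are trained on the same error with opposite signs, so the predictor learns a model of the environment while the generator is rewarded for surprising it. The one-parameter "networks", the toy environment, and the learning rates below are all made up for illustration; this is not the original 1990 system.

```python
import numpy as np

w_gen, w_pred = 0.1, 0.0              # one-parameter "networks"
env = lambda a: np.sin(3.0 * a)       # unknown environment response

for step in range(2000):
    a = np.tanh(w_gen)                # generator proposes an output
    y = env(a)                        # environment reacts
    y_hat = w_pred * a                # predictor guesses the reaction
    err = (y_hat - y) ** 2            # shared objective, zero-sum roles
    # Predictor descends on err; generator ascends on the very same err,
    # i.e., it is rewarded for making the predictor's error large.
    g_pred = 2 * (y_hat - y) * a
    d_err_da = 2 * (y_hat - y) * (w_pred - 3.0 * np.cos(3.0 * a))
    g_gen = d_err_da * (1 - np.tanh(w_gen) ** 2)   # chain rule through tanh
    w_pred -= 0.05 * g_pred
    w_gen += 0.05 * g_gen

print(round(w_pred, 3), round(float(np.tanh(w_gen)), 3))
```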

    Yes, this critique is also an implicit critique of certain other awards to LBH.[HIN] Compare the numerous related threads at reddit.com/r/MachineLearning[R1-R12] (the largest machine learning forum, with over 800k subscribers back then), many of them influenced by my overview.[MIR]

    Dr. LeCun himself is well aware of the challenges to scientific integrity in our field. He wrote:[LECP] "... else cites."[LECP] Recall in this context Rosenblatt's early multilayer perceptrons with a hidden layer of fixed random weights and an adaptive output layer.[R62] So Rosenblatt basically had what much later was rebranded as Extreme Learning Machines (ELMs),[ELM1] the revisionist narrative of ELMs[ELM2][CONN21] and of the self-proclaimed "deep learning conspiracy"[DLC1-2] notwithstanding. (A minimal sketch of Rosenblatt's arrangement follows below.)
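    The arrangement attributed to Rosenblatt above, a hidden layer with fixed random weights plus an adaptive output layer,[R62] is simple enough to reproduce in a few lines, which is also why the later "ELM" rebranding[ELM1] required no new machinery. A hedged toy sketch; the data, layer sizes, and activation are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(4)
# Toy regression data: learn y = sin(x) from samples.
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0])

n_hidden = 50
W = rng.normal(size=(1, n_hidden))    # hidden weights: random and FIXED
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W + b)                # random nonlinear features

# Only the output layer adapts: linear least squares, as in ELMs.
beta, *_ = np.linalg.lstsq(H, y, rcond=None)

X_test = np.linspace(-3, 3, 5).reshape(-1, 1)
pred = np.tanh(X_test @ W + b) @ beta
print(np.round(pred, 2))              # close to sin at the test points
print(np.round(np.sin(X_test[:, 0]), 2))
```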

    Note that I am insisting on proper credit assignment not only in my own research field but also in quite disconnected areas,[HIN] as demonstrated by my numerous letters in this regard published in Science and Nature, e.g., on the history of aviation,[NASC1-2] the telephone,[NASC3] the computer,[NASC4-7] resilient robots,[NASC8] and scientists of the 19th century.[NASC9]

    Thanks to many expert reviewers for useful comments. Since science is about self-correction, let me know under juergen@idsia.ch if you can spot any remaining error. Many additional relevant publications can be found on my arXiv page. This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

References (selection; some entries abridged):
ACM Code of Ethics and Professional Conduct. Association for Computing Machinery (ACM), 2018.
With a brief summary of the generative adversarial neural networks of 1990.[AC90,90b][AC20] Preprint arXiv/1906.04493.
Blog of Werner Vogels, CTO of Amazon (Nov 2016).
First publication of what was later sometimes called the Hopfield network[AMH2] or Amari-Hopfield Network:[AMH3] published in 1972 by Amari.[AMH1]
Preprint arXiv/1409.0473, 2014-16.
Bloomberg, May 15, 2018.
Bloomberg, May 17, 2018.
First application of backpropagation[BP1] to NNs (concretizing thoughts in his 1974 thesis); see also [DL2].
English version: [CNN1+]. More in Scholarpedia.
[CNN1a] A. Waibel. Phoneme Recognition Using Time-Delay Neural Networks. Meeting of IEICE, Tokyo, Japan, 1987. First application of backpropagation[BP1][BP2] and weight-sharing.
Spatial Averaging.[CNN1]
[CONN21] Comments on version 1 of the present report[T21v1] in the Connectionists Mailing List (since November 2021), perhaps the oldest mailing list on artificial neural networks; link to the archive.
Beijing, 2014. Preprint arXiv:1402.3511 [cs.NE].
1st superhuman result in 2011.[DAN1]
[DIST1] J. Schmidhuber, 1991.[UN-UN2]
Deep Learning (HTML).
[DL3a] Y. Bengio, Y. LeCun, G. Hinton. Turing Lecture: Deep Learning for AI. Communications of the ACM, July 2021.
Greatly improved (CTC-based) on-device speech recognition (on the phone, not the server).
Web site deeplearning.net of Y. Bengio's lab (Internet Archive), referring to Hinton's unsupervised pre-training for deep NNs[UN4] (2006). See Sec. II & XVII & III.
arxiv:1312.5602.
arXiv:1808.03578, 2018.
In fact, the ELM concept goes back to Rosenblatt.[R62]
Over 4 billion automatic translations per day (The Verge, August 4, 2017); Facebook blog by J.M. Pino, A. Sidorov, N.F. Ayan (August 3, 2017).
Fast Weight Programmers[FWP0-6,FWPMETA1-7] as an alternative[FWP0-1] to recurrent NNs, using the fast weights[FAST,FASTa] of von der Malsburg and Feldman: they can learn to memorize past data, e.g., by computing fast weight changes through additive outer products of self-invented activation patterns[FWP0-1] (now often called keys and values for self-attention[TR1-6]). The similar Transformers[TR1-2] combine this with projections; linear Transformers or Performers[TR5-6] are formally closely related. In 1993, I introduced attention terminology in this context,[ATT] and RNNs that program themselves.
Preprint: arXiv:1811.12143.
Like [FWP0-2]. Preprint: arXiv:2003.08165.
Linear Transformers Are Secretly Fast Weight Programmers. ICML 2021. Preprint: arXiv:2102.11174.
Preprint: arXiv:2106.06295 (June 2021).
An introspective network that can learn to run its own weight change algorithm. In Proc. of the Intl. Conf. on Artificial Neural Networks.
Preprint arXiv:2012.14905 [cs.LG], 2020.
Report arXiv:2011.07831 [cs.AI], 2020.
Probably the first paper on using stochastic gradient descent[STO51-52] for NNs, an implementation of Amari's approach; compare the reverse mode of automatic differentiation, i.e., backpropagation.[BP1]
Google Research Blog, Sep 2015; see also Aug 2015; Alphr Technology, Jul 2015; 9to5google, Jul 2015; WIRED, Sep 2016; siliconANGLE, Sep 2016.
Blog post, Internet Archive, 2010: a blog post describing the basic ideas[AC][AC90,AC90b][AC20] of GANs.
Description of GANs that does not cite the original work of 1990[AC][AC90,AC90b][AC20][R2] (also containing wrong claims about Predictability Minimization[PM0-2][AC20]). This was number 1 on Hacker News.
Frankfurter Allgemeine Zeitung, 16/6/2021.
Preprint arXiv/2005.14165.
For Image Classification. International Joint Conference on Artificial Intelligence (IJCAI-2011, Barcelona), 2011. (Far ahead of the competitor;[DAN1] this led to massive interest from industry.)
North-Holland, 1991.
Extending TR FKI-129-90, TUM, 1990.
Preprints arXiv:1505.00387 (May 2015) and arXiv:1507.06228 (July 2015). Also at NIPS 2015. (The feedforward version of the LSTM with forget gates[LSTM2] for RNNs.) ResNets[HW2] are a version of this where the gates are always open: g(x)=t(x)=const=1. Highway Nets perform roughly as well as ResNets[HW2] on ImageNet.[HW3] Highway layers are also often used for natural language processing, where the simpler residual layers do not work as well.[HW3]
arXiv:1512.03385 (Dec 2015). Residual nets are a version of Highway Nets.[HW1]
arxiv:1612.07771 (2016). Also at ICLR 2017.
Preprint arXiv:1704.04760.
arXiv:1607.06450, 2016.
A New Publishing Model in Computer Science. 19/5/2021.
[LSTM1] S. Hochreiter, J. Schmidhuber. Long Short-Term Memory. Neural Computation, 9(8):1735-1780, 1997. Based on [LSTM0].
Preprint: arxiv:1506.07452.
Preprint arXiv:1805.04908.
Architectures. Preprint arXiv:1703.03906.
arXiv:2005.05744, 2020.
Neural Computation 22(12):3207-3220, 2010. (By 2010, when compute was 100 times more expensive than today, our feedforward NNs[MLP1] were already setting benchmark records.)
Preprint arXiv:1611.01578, 2017.
Correspondence, Nature, vol 483, p 541, March 2012, doi:10.1038/483541b.
Letter, Science, vol 336, p 1639, June 2012. See also the comment on the response by A. Hodges (DOI:10.1126/science.336.6089.1639-a).
NY Times article.
Learning Dexterous In-Hand Manipulation.
arxiv:1312.5602.
arxiv:1912.06680.
Based on TR FKI-126-90 (1990).[AC90]
Partially based on TR FKI-126-90 (1990).[AC90]
Report arXiv:1210.0118 [cs.AI], 2015.
One Big Net For Everything. Preprint arXiv:1802.08864 [cs.AI], Feb 2018.
Preprint: arXiv:1809.01999. Github: World Models.
Predictability minimization. TR CU-CS-565-91, Univ. Colorado at Boulder, 1991.
arXiv:1112.5309 [cs.AI].
First Experiments with PowerPlay. arXiv:1210.8385 [cs.AI].
[R1] Reddit/ML, 2019. Hinton, LeCun, Bengio receive ACM Turing Award.
[R2] Reddit/ML, 2019. J. Schmidhuber really had GANs in 1990.
[R3] Reddit/ML, 2019. NeurIPS 2019 Bengio Schmidhuber Meta-Learning Fiasco.
[R4] Reddit/ML, 2019. Five major deep learning papers by G. Hinton did not cite similar earlier work by J. Schmidhuber.
[R5] Reddit/ML, 2019. The 1997 LSTM paper by Hochreiter & Schmidhuber has become the most cited deep learning research paper of the 20th century.
[R6] Reddit/ML, 2019. DanNet, the CUDA CNN of Dan Ciresan in J. Schmidhuber's team.
[R7] Reddit/ML, 2019. J. Schmidhuber on Seppo Linnainmaa, inventor of backpropagation in 1970.
[R8] Reddit/ML, 2019. J. Schmidhuber on Alexey Ivakhnenko, godfather of deep learning 1965.
[R9] Reddit/ML, 2019.
[R11] Reddit/ML, 2020. Schmidhuber: Critique of Honda Prize for Dr. Hinton.
[R12] Reddit/ML, 2020. J. Schmidhuber: Critique of Turing Award for Drs. Bengio & Hinton & LeCun.
[R15] Reddit/ML, 2021. J. Schmidhuber.
Preprint arXiv/1311.2524, Nov 2013.
Preprint arXiv/1703.06870, 2017.
The Past, Present and Future of Artificial Intelligence.
1992. Based on TR FKI-148-91, TUM, 1991.[UN0]
[UN2] J. Schmidhuber. Habilitation thesis, TUM, 1993. (Examples of "very deep learning" tasks of depth > 1000 can be found here.)
[VAN1] S. Hochreiter. Untersuchungen zu dynamischen neuronalen Netzen (Investigations of dynamic neural networks). Diploma thesis, TUM, 1991 (advisor J. Schmidhuber).
[VAN4] Y. Bengio. Neural net language models. Scholarpedia, 3(1):3881, 2008.
Youtube video [see 28:16]. (But in 2010, our team showed[MLP1-2] that deep NNs can be trained by plain backpropagation.)
Youtube video, 2018.
Preprint arXiv:1609.08144, 2016. Based on LSTM, which it mentions at least 50 times.
WWW link (retrieved 15 May 2020).
