
Howell: Introduction

Introduction to this webSite

Table of Contents

Questions: Grossberg's c-ART, Transformer NNs, and consciousness?

The questions driving this "webSite" (collection of webPages) are:
  1. Do "Large Language Models" (LLMs) (such as ChatGPT, LaMDA, etc.) already exhibit a [protero, incipient] consciousness, in particular given the rough similarity of the basic unit of "Transformer Neural Networks" (TrNNs) to one of Grossberg's general "modules"? The latter are proposed as a small number of units that are readily recombined, with slight modifications, as a basis for much of brain architecture, much as the small number of concepts in physics can be applied across a broad range of themes.
  2. How difficult would it be to augment TrNNs with Grossberg's [concept, architecture]s, including the emergent systems for consciousness? Perhaps this would combine the scalability of the former with the [robust, extendable] foundations of the latter, which are supported by [broad, diverse, deep] evidence.
  3. Are current (semi-manual) "controls" of LLMs going in the direction of machine consciousness, without those involved being aware of this? Will "controls" ultimately require machine consciousness as one of their components, in particular for [learning, evolution] in a stable and robust manner?
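For readers unfamiliar with the TrNN basic unit mentioned in question 1, here is a minimal sketch of the scaled dot-product attention from Vaswani et al 2017 "Attention is all you need". The function name, shapes, and toy data are illustrative only:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Basic unit of a Transformer NN (Vaswani et al 2017):
    softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                               # attention-weighted values

# toy example: 3 tokens, feature dimension 4
rng = np.random.default_rng(0)
Q = rng.standard_normal((3, 4))
K = rng.standard_normal((3, 4))
V = rng.standard_normal((3, 4))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # → (3, 4)
```

Full Transformers stack many of these units, with learned projections for Q, K, V and multiple attention heads, but this single operation is the unit whose rough similarity to a Grossberg "module" is at issue.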

Grossberg: why ART is relevant to consciousness in Transformer NNs

Stephen Grossberg 08Apr2023
Subject: relevance of Grossberg's conscious-ART concept to Transformer NNs?

Grossberg has shown that ART is the UNIQUE class of neural networks that can AUTONOMOUSLY learn to attend, classify, and correct predictive errors in a changing world that is filled with unexpected events.

Grossberg derived ART using a THOUGHT EXPERIMENT in his 1980 article in Psychological Review, "How Does a Brain Build a Cognitive Code?". The 2021 book reviews this thought experiment, which uses only a few familiar facts from daily life as the hypotheses that lead to ART. The conclusion that ART is unique is thus hard to contradict.

Many discoveries using ART have continued to be made to the present, including how ART dynamics are realized in laminar neocortical circuits with identified neurons using spiking dynamics, also reviewed in the 2021 book.

Why is this relevant to the Vaswani et al 2017 "Attention is all you need" paper?

It is because ART is currently the most advanced cognitive and neural theory of how humans and machines can learn to PAY ATTENTION, and of how attention dynamically stabilizes recognition learning in ART, thereby solving the CATASTROPHIC FORGETTING problem that afflicts backpropagation and Deep Learning.
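The stability-plasticity mechanism behind that claim can be sketched very roughly. The following is a highly simplified ART-1-style fragment (binary inputs, fast learning), not Grossberg's actual equations: it omits the choice function, bottom-up weight dynamics, and complement coding, and all names and parameter values are illustrative. It shows only the key idea that a vigilance test gates learning, so a novel input recruits a new category instead of overwriting old ones:

```python
import numpy as np

def art1_learn(inputs, vigilance=0.7):
    """Simplified ART-1-style sketch: each input either resonates with an
    existing category prototype (and refines it) or, on mismatch with all
    categories, recruits a new one. This gating is what prevents the
    catastrophic forgetting seen in backpropagation-trained networks."""
    categories = []   # learned binary prototypes
    labels = []       # category chosen for each input
    for x in inputs:
        x = np.asarray(x, dtype=bool)
        # search existing categories, best overlap first
        order = sorted(range(len(categories)),
                       key=lambda j: -(categories[j] & x).sum())
        for j in order:
            match = (categories[j] & x).sum() / max(x.sum(), 1)
            if match >= vigilance:       # vigilance test passed: resonance
                categories[j] &= x       # refine prototype (fast learning)
                labels.append(j)
                break
        else:                            # mismatch everywhere: new category
            categories.append(x.copy())
            labels.append(len(categories) - 1)
    return categories, labels

patterns = [[1, 1, 1, 0, 0], [1, 1, 0, 0, 0], [0, 0, 1, 1, 1]]
cats, labs = art1_learn(patterns)
print(labs)  # → [0, 0, 1]
```

The third pattern shares no features with the first category, so instead of eroding it (as gradient updates would), the vigilance mismatch triggers a memory search and a fresh category is created.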

The fact that ART generates feature-category resonances that explain and simulate a wide range of psychological and neurobiological data about CONSCIOUS recognition also makes it highly relevant to AI efforts to design "sentient" algorithms.

Grossberg has also shown how other resonances support conscious seeing, hearing, and feeling, and has characterized the kinds of attention that occur during these events.

Every AI practitioner who is interested in attention and consciousness should thus study ART, if only to avoid reinventing the wheel.

A workable [definition, context, model] for consciousness

A few common definitions of consciousness are provided on my webPage [definitions, models] of [consciousness, sentience]. However, for reasons given on that webPage, only Stephen Grossberg's concepts provide a workable basis, one that is directly tied to quantitative models of [neuron, psychology] data.

A few models of consciousness are summarized on my webPage A quick comparison of Consciousness Theories. Only a few concepts are listed (almost randomly selected, except for [Grossberg, Taylor]'s), as there are a huge [number, diversity] of concepts.

Stephen Grossberg may have the ONLY definition of consciousness that is directly tied to quantitative models for lower-level [neuron, general neurology, psychology] data. Foundational models, similar in nature to the small number of general theories in physics that describe a vast range of phenomena, were derived over a period of ?4-5? decades BEFORE they were found to apply to consciousness. That paralleled their use in very widespread applications in [science, engineering, etc]. As such, this is the only solidly-based EMERGENT theory of consciousness that I know of. Grossberg's book provides a wonderful description.
For a variety of reasons, many of which are listed in the link above, I have selected Grossberg's concepts as a focus.

non-[Grossberg, TrNN] topics

The topics below are linked in the light blue (bottom) row of the Menu at the top of every webPage of this project (see above). Only a very brief description is provided here.
I also really like [Juergen Schmidhuber 2021?]'s awesome historical account of Deep Learning NNs, and [Lucio Russo ?date?]'s fascinating history of science. The latter doesn't touch on NNs, but holds very important lessons from science history since ancient Greece. Also fun was [James Burke ?date? "Connections"].