+-----+
olde texte
from "$d_web"'/ProjMini/TrNNs_ART/[definitions, models] of consciousness.html' :
Howell's foot-in-mouth : As yet another great example of why I self-impose "Multiple Conflicting Hypothesis" (MCH), quantum consciousness may have an alternative lease on life. I've always had the feeling that :
which are harsh words for concepts that work so well and are foundations of modern physics. In ~2008-2010 I took the time to look more closely to seek corrections to astronomical models and their relation to climate. That's when I became far more hard-nosed, but MCH still forces me to retain [GR, QM, all other historical mainstream-accepted theories].
As I am writing this, I am also working through [Kaal etal 2021], which provides yet-another-more-than-complete-theory to completely replace quantum mechanics, and then some. It looks very good, but then all other alternates have failed over the last ~120 years even though several might have been somewhat-right (?). Although I've thrown out consideration for [neutrinos, [strong, weak] nuclear forces, etc] (and [GR, black holes, etc] from the GR perspective), Kaal etal possibly open the door to OTHER nuclear processes in [history, biology, geology] (I have read claims of these types over the last 10 or so years). I'm certainly not convinced about entanglement, especially as I see too much of its "host concepts" in tatters, but even without entanglement, maybe very small-scale nuclear "spooky" processes do have a role in [biology, neuron, brain]s? We'll just have to wait as physicists dream up another batch.
Phillips 2023 The Cooperative Neuron
W.A. Phillips 20Jul2023 "The Cooperative Neuron: Cellular Foundations of Mental Life" Oxford, 384pp, ISBN: 9780198876984 https://global.oup.com/academic/product/the-cooperative-neuron-9780198876984
"...
- Relates mental life to the intracellular processes from which it arises
- Introduces the new field of cellular psychology that is arising from those discoveries
- Describes the discoveries in cellular neurophysiology for a wide audience of those working in or intrigued by the behavioral and brain sciences
- Provides new insights into the similarities and differences between a wide range of neurodevelopmental and other psychopathologies
Description - The Cooperative Neuron is part of a revolution that is occurring in the sciences of brain and mind. It explores the new field of cellular psychology, a field built upon the recent discovery that many neurons in the brain cooperate to seek agreement in deciding what's relevant in the current context. This cooperative context-sensitivity provides the cellular foundations for knowledge, doubt, imagination, self-development, and the search for purpose in life. This emerging field has far-reaching and fundamental implications for psychology, neuroscience, psychiatry, neurology, and the philosophy of mind.
In a clear and accessible style, the book explains the neuroscience to psychologists, the psychology to neuroscientists, and both to philosophers, students of the behavioral and brain sciences, and to anyone intrigued by the enduring mystery of how brains can be minds.
..."
Howell 16Jul2023: This is a recent example of the endless stream of new ideas. It is included on this webPage of consciousness theories (it was on this list before publication) mostly because it reminds me of the book :
- Bryan Kolb, Ian Whishaw 2001 “An introduction to brain and behaviour” www.WorthPublishers.com, 3rd printing 2002, 601pages plus appendices, ISBN: 0-7167-5169-0
If Phillips' book is anything like Kolb & Whishaw's, it will be well worth reading.
#08********08
#] 12Jul2023 What is Consciousness - some of Grossberg's insights
povrL_pStrPL_replace 1 "$povrL" "$pStrPL"
>> doesn't do global string replacement 's|||g' ???
povr_strP_replace()
change :
sed "s|$strOld448|$strNew448|" "$povr448" >"$ptmp448"
to :
sed "s|$strOld448|$strNew448|g" "$povr448" >"$ptmp448"
13Jul2023 rerun
$ bash "$d_web"'ProjMini/TrNNs_ART/1_TrNNS_ART run bash scripts.sh'
using function TrNNs_ART_pStrPL
#08********08
#] 10Jul2023 class lists for grep
animal
tree shrew,
#08********08
#] 07Jul2023 convert to FULL links!! - can copy and paste html to work anywhere
will be "http://www.BillHowell.ca/..." when webPage posted
search
r1c1 | r1c2 | r1c3 |
r2c1 | r2c2 | r2c3 |
r3c1 | r3c2 | r3c3 |
r4c1 | r4c2 | r4c3 |
r5c1 | r5c2 | r5c3 |
+-----+
problem with p080fib02.33
$ find "$d_web"'ProjMini/TrNNs_ART/images- Grossberg 2021/' || grep "p\([0-9]\{3\}\)fib\([0-9]\{2\}\)\."
>> YIKES!!! piping uses | NOT || !!!
$ find "$d_web"'ProjMini/TrNNs_ART/images- Grossberg 2021/' | grep "p\([0-9]\{3\}\)fib\([0-9]\{2\}\)\."
>>OK
$ find "$d_web"'ProjMini/TrNNs_ART/images- Grossberg 2021/' | grep "p\([0-9]\{3\}\)fib\([0-9]\{2\}\)\." | sed "s|p\([0-9]\{3\}\)fib\([0-9]\{2\}\)\.|p\1fig\2.|g"
create fileops.sh [script, run]
>> see "$d_bin"'fileops.sh' -> dir_renameFileL()
>> and "$d_bin"'fileops run.sh' :
dir_renameFileL "$d_web"'ProjMini/TrNNs_ART/images- Grossberg 2021/' 1 "p\([0-9]\{3\}\)fib\([0-9]\{2\}\)\." "s|p\([0-9]\{3\}\)fib\([0-9]\{2\}\)\.|p\1fig\2.|g"
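a minimal sketch of what such a rename loop can look like (the function name and argument layout here are hypothetical - dir_renameFileL's actual implementation in fileops.sh may differ) :
dir_renameFile_sketch() {
	pDir448="$1"    # directory holding the files
	grepStr="$2"    # pattern selecting files to rename
	sedStr="$3"     # sed expression producing the new fName
	find "$pDir448" -type f | grep "$grepStr" | while IFS= read -r fOld; do
		fNew=$(echo "$fOld" | sed "$sedStr")
		if [ "$fOld" != "$fNew" ]; then mv "$fOld" "$fNew"; fi
	done
}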
#08********08
#] 03Jul2023 in-line display of tables in html lists (preserves same-line [grep, sed, etc])
Example :
image p011fig01.07 The choice of signal function f determines how an initial activity pattern will be transformed and stored in short-term memory (STM). Among [same, slower, faster]-than-linear signal functions, only the last one can suppress noise. It does so as it chooses the population that receives the largest input for storage, while suppressing the activities of all other populations, thereby giving rise to a winner-take-all choice. || initial pattern (xi(0) vs i):
f | Xi(∞)= xi(∞)/sum[j: xj(∞)] | X(∞)= sum[j: xj(∞)] | linear | perfect storage of any pattern | amplifies noise (or no storage) | slower-than-linear | saturates | amplifies noise | faster-than-linear | chooses max [winner-take-all, Bayesian], categorical perception | suppresses noise, [normalizes, quantizes] total activity, finite state machine |
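the payoff of the one-line layout: a cell value and its caption land on the same grep hit, eg (assumption: the caption above sits in 'Grossbergs list of [figure, table]s.html') :
$ grep 'winner-take-all' "$d_web"'ProjMini/TrNNs_ART/Grossbergs list of [figure, table]s.html' | grep -o 'p[0-9]\{3\}fig[0-9]\{2\}\.[0-9]\{2\}'
p011fig01.07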
#08********08
#] 03Jul2023 NYET use html tables!! : play with byRow tables using Qnial
()   | parenthesis               | replace with (' and ')
;    | separates items of a row  | replace with ' ' (space between 2 apos)
null | no separator between rows | no replacement
byRows must have :
[(#, #)] for byRows [open, close] parens
no spaces [before, after] parens in expression, example :
good: (f; Xi(∞)= xi(∞)/sum[j: xj(∞)]; X(∞)= sum[j: xj(∞)])
bad: (f; Xi(∞) = xi (∞)/sum[j: xj(∞)]; X(∞) = sum[j: xj(∞)])
pinn='(#f; Xi(∞)= xi(∞)/sum[j: xj(∞)]; X(∞)= sum[j: xj(∞)]#) (#linear; perfect storage of any pattern; amplifies noise (or no storage)#) (#slower-than-linear; saturates; amplifies noise#) (#faster-than-linear; chooses max [winner-take-all, Bayesian], categorical perception; suppresses noise, [normalizes, quantizes] total activity, finite state machine#)'
$ echo "$pinn" | sed "s|; |' '|g;s|(#|('|g;s|#)|')|g"
('f' 'Xi(∞)= xi(∞)/sum[j: xj(∞)]' 'X(∞)= sum[j: xj(∞)]') ('linear' 'perfect storage of any pattern' 'amplifies noise (or no storage)') ('slower-than-linear' 'saturates' 'amplifies noise') ('faster-than-linear' 'chooses max [winner-take-all, Bayesian], categorical perception' 'suppresses noise, [normalizes, quantizes] total activity, finite state machine')
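small wrapper in case the conversion gets reused (function name byRows_to_qnial is hypothetical) :
byRows_to_qnial() {
	# convert '(#a; b; c#) (#d; e; f#)' byRow markup to Qnial nested-list syntax
	echo "$1" | sed "s|; |' '|g;s|(#|('|g;s|#)|')|g"
}
$ byRows_to_qnial "$pinn"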
qnial> preTbl := ('f' 'Xi(∞)= xi(∞)/sum[j: xj(∞)]' 'X(∞)= sum[j: xj(∞)]') ('linear' 'perfect storage of any pattern' 'amplifies noise (or no storage)') ('slower-than-linear' 'saturates' 'amplifies noise') ('faster-than-linear' 'chooses max [winner-take-all, Bayesian], categorical perception' 'suppresses noise, [normalizes, quantizes] total activity, finite state machine')
qnial> table := mix preTbl
+------------------+---------------------------------------------------------------+------------------------------------------------------------------------------+
|f                 |Xi(∞)= xi(∞)/sum[j: xj(∞)]                                     |X(∞)= sum[j: xj(∞)]                                                           |
+------------------+---------------------------------------------------------------+------------------------------------------------------------------------------+
|linear            |perfect storage of any pattern                                 |amplifies noise (or no storage)                                               |
+------------------+---------------------------------------------------------------+------------------------------------------------------------------------------+
|slower-than-linear|saturates                                                      |amplifies noise                                                               |
+------------------+---------------------------------------------------------------+------------------------------------------------------------------------------+
|faster-than-linear|chooses max [winner-take-all, Bayesian], categorical perception|suppresses noise, [normalizes, quantizes] total activity, finite state machine|
+------------------+---------------------------------------------------------------+------------------------------------------------------------------------------+
#08********08
#] 01Jul2023 cull list of tables - check captions
$ grep " p[0-9]\{3\}tbl[0-9]\{2\}\.[0-9]\{2\}" "$d_web"'ProjMini/TrNNs_ART/Grossbergs list of [figure, table]s.html'
>> looks OK, 5 tables found :
[p029tbl01.01, p030tbl01.02, p039tbl01.03, p042tbl01.04, p627tbl17.01]
Now, have the images been captured from Kindle?
$ find "$d_web"'ProjMini/TrNNs_ART/images- Grossberg 2021/' -name "^p[0-9]\{3\}tbl[0-9]\{2\}\.[0-9]\{2\}*"
$ find "$d_web"'ProjMini/TrNNs_ART/images- Grossberg 2021/' -name "*tbl*"
+--+
/home/bill/web/ProjMini/TrNNs_ART/images- Grossberg 2021/ :
p042tbl01.04 six main resonances which support different kinds of conscious awareness.png
p627tbl17.01 Homologs between [reaction-diffusion, recurrent shunting cellular network] models of development.png
p039tbl01.03 [consciousness, movement] links: visual, auditory, emotional.png
p030tbl01.02 complementary streams: What- [rapid, stable] learn invariant object categories, Where- [labile spatial, action] actions.png
p029tbl01.01 complementary streams[visual boundary, what-where, perception & recognition, object tracking, motor target].png
p029tbl01.01 complementary streams[visual boundary, what-where, perception & recognition, object tracking, motor target] title.png
+--+
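note on the first find above: -name takes glob patterns, not regex, so the anchored expression matches nothing; a hedged alternative using GNU find's -regex (which matches against the whole path) :
$ find "$d_web"'ProjMini/TrNNs_ART/images- Grossberg 2021/' -regextype posix-basic -regex '.*p[0-9]\{3\}tbl[0-9]\{2\}\.[0-9]\{2\}.*'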
worked - So do links to tables work? yes except one:
>> p029tbl01.01 : need to gimp title row to table!!, link doesn't work!!
link : p029tbl01.01 complementary streams [visual boundary, what-where, perception & recognition, object tracking, motor target].png
image : p029tbl01.01 complementary streams[visual boundary, what-where, perception & recognition, object tracking, motor target].png
image2: p029tbl01.01 complementary streams[visual boundary, what-where, perception & recognition, object tracking, motor target] title.png
>> I inserted spaces into image fnames -> works now, but still have to combine [title, body] images of table
#08********08
#] 30Jun2023 NRCan documents : consciousness sections
for retrieval from backups, see : "$d_bin""backup notes.txt"
"SPINE" Social media files were copied in with TrNN files.
"$d_web"'ProjMini/TrNNs_ART/Social media/Howell 111230 – Social graphs, social sets, and social media.doc'
sections :
Part VI - Far beyond current toolsets (for STAR GAZERS)
Machine and Hybrid Consciousness
Where are my comments regarding machine organization of teams and taking actions?
#08********08
#] 21Jun2023 save Kindle images from Grossberg's book
fNames from "$d_web"'ProjMini/TrNNs_ART/Grossbergs list of figures.html'
Kindle file: access online via Kindle software? I bought through Amazon
/home/bill/Calibre Library
Calibre - reads Kindle
Grossberg's [core, fun, strange] concepts - where is this?
(it's just link to one of my html files)
Amazon email 14May2022 : All Kindle content, including books and Kindle active content, that you've purchased from the Kindle Store is stored in your Kindle library on Amazon.ca.
https://www.amazon.ca/gp/f.html?C=VPELIYMUNHI5&K=1AS3NR2TXJE0U&M=urn:rtn:msg:202305150046037b699e135fb2453cb7ff57e7a240p0na&R=2HBQF2BIEZZAP&T=C&U=https%3A%2F%2Fread.amazon.ca%3Fref_%3Dpe_47689480_613027100_kfw_digital_order_confirmation_email&H=JAY4TQ1YHDHOYV6Z8YXYKUZAJKMA&ref_=pe_47689480_613027100_kfw_digital_order_confirmation_email
#08********08
#] 21Jun2023 create: convert_list_of_figures() in
#] "$d_web"'ProjMini/TrNNs_ART/1_TrNNS-ART run bash scripts.sh'
# convert_list_of_figures() - convert old list of figures to link-with-caption format
# 21Jun2023 initial, for one-time-use only, but handy for adaptation etc
works well, it seems, tests to come with images
# once done, backup pFnam_762, then move pTmp2_762 to pFnam_762
change fNames of existing images (from Kindle version)
+-----+
debug code
# echo 'p087fib03.01 A macrocircuit of key visual processes.png' | sed "||images- Grossberg 2021/|"
# oops - `/ in path?
# echo 'p087fib03.01 A macrocircuit of key visual processes.png' | sed "||images- Grossberg 2021|"
# sed: -e expression #1, char 1: unknown command: `|'
# still a problem, why? MISSING s!!!
# echo 'p087fib03.01 A macrocircuit of key visual processes.png' | sed "s||images- Grossberg 2021|"
# images- Grossberg 2021p087fib03.01 A macrocircuit of key visual processes.png
# >> OK!!
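# a more explicit variant that avoids the empty-pattern quirk: anchor at start-of-line and keep the trailing slash (assumption: the goal is to prefix the image subDir)
# echo 'p087fib03.01 A macrocircuit of key visual processes.png' | sed 's|^|images- Grossberg 2021/|'
# images- Grossberg 2021/p087fib03.01 A macrocircuit of key visual processes.png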
https://www.amazon.ca/gp/f.html?C=VPELIYMUNHI5&K=1AS3NR2TXJE0U&M=urn:rtn:msg:202305150046037b699e135fb2453cb7ff57e7a240p0na&R=2HBQF2BIEZZAP&T=C&U=https%3A%2F%2Fread.amazon.ca%3Fref_%3Dpe_47689480_613027100_kfw_digital_order_confirmation_email&H=JAY4TQ1YHDHOYV6Z8YXYKUZAJKMA&ref_=pe_47689480_613027100_kfw_digital_order_confirmation_email
#08********08
#] 10Jun2023 search "Self-Organizing Map software"
+-----+
https://www.sciencedirect.com/science/article/pii/S266596382200032X
Álvaro José García-Tejedor, Alberto Nogales,
An open-source Python library for self-organizing-maps,
Software Impacts,
Volume 12,
2022,
100280,
ISSN 2665-9638,
https://doi.org/10.1016/j.simpa.2022.100280.
(https://www.sciencedirect.com/science/article/pii/S266596382200032X)
Abstract: Organizations have realized the importance of data analysis and its benefits. This in combination with Machine Learning algorithms has allowed us to solve problems more easily, making these processes less time-consuming. Neural networks are the Machine Learning technique that is recently obtaining very good best results. This paper describes an open-source Python library called GEMA developed to work with a type of neural network model called Self-Organizing-Maps. GEMA is freely available under GNU General Public License at GitHub (https://github.com/ufvceiec/GEMA). The library has been evaluated in different particular use cases obtaining accurate results.
Keywords: Machine learning; Neural networks; Self-organizing maps
https://github.com/ufvceiec/GEMA
GEMA is a Python library which can be used to develop and train Self-Organizing Maps (SOMs). It also allows users to classify new individuals, obtain reports and visualize the information with interactive graphs. mailing-list: gema-som@googlegroups.com NOTE: GEMA has only been implemented in Python 3.0
see "$d_SysMaint"'make, cmake, git etc/git notes.txt'
>> I simply downloaded the zip file!
+-----+
https://www.xlstat.com/en/solutions/features/self-organizing-maps
Self-Organizing Maps (SOM)
Self-Organizing Maps are an unsupervised Machine Learning method used to reduce the dimensionality of multivariate data
The som function developed in XLSTAT-R calls the som function from the kohonen package in R (Ron Wehrens and Johannes Kruisselbrink).
>> must pay for xlstat, R is of course free
#08********08
#] 10Jun2023 search "chatGPT comparison of scientific models"
Howell : For now - I can't find relevant papers but maybe they are there but buried?
+-----+
https://www.nature.com/articles/s41746-023-00819-6
Comparing scientific abstracts generated by ChatGPT to real abstracts with detectors and blinded human reviewers
Catherine A. Gao, Frederick M. Howard, Nikolay S. Markov, Emma C. Dyer, Siddhi Ramesh, Yuan Luo & Alexander T. Pearson
npj Digital Medicine volume 6, Article number: 75 (2023) Cite this article
Gao, C.A., Howard, F.M., Markov, N.S. et al. Comparing scientific abstracts generated by ChatGPT to real abstracts with detectors and blinded human reviewers. npj Digit. Med. 6, 75 (2023). https://doi.org/10.1038/s41746-023-00819-6
... ChatGPT writes believable scientific abstracts, though with completely generated data. Depending on publisher-specific guidelines, AI output detectors may serve as an editorial tool to help maintain scientific standards. The boundaries of ethical and acceptable use of large language models to help scientific writing are still being discussed, and different journals and conferences are adopting varying policies.
+-----+
https://scholar.harvard.edu/saghafian/blog/analytics-science-behind-chatgpt-human-algorithm-or-human-algorithm-centaur
Soroush Saghafian, Associate Professor, Harvard University
The Analytics Science Behind ChatGPT: Human, Algorithm, or a Human-Algorithm Centaur?
January 10, 2023
+-----+
https://www.theverge.com/2023/3/24/23653377/ai-chatbots-comparison-bard-bing-chatgpt-gpt-4
AI chatbots compared: Bard vs. Bing vs. ChatGPT
The web is full of chattering bots, but which is the most useful and for what? We compare Bard, Bing, and ChatGPT.
By James Vincent, Jacob Kastrenakes, Adi Robertson, Tom Warren, Jay Peters, and Antonio G. Di Benedetto
Mar 24, 2023, 8:19 AM MDT|
You can (and indeed should) scroll through our questions, evaluations, and conclusion below, but to save you time and get to the punch quickly: ChatGPT is the most verbally dextrous, Bing is best for getting information from the web, and Bard is... doing its best. (It’s genuinely quite surprising how limited Google’s chatbot is compared to the other two.)
+-----+
search "chatGPT comparison of scientific papers"
>> nada
search "chatGPT classification of scientific concepts"
>> blah
#08********08
#] 07Jun2023 search "THOMY NILSSON, consciousness"
Thomy Nilsson 2020 "What Came Out of Visual Memory: Inferences from Decay of Difference-Thresholds" Attention, Perception & Psychophysics, https://www.islandscholar.ca/islandora/object/ir%3A23199/datastream/PDF/download https://doi.org/10.3758/s13414-020-02032-z
This is a pre-print of an article published in Attention, Perception & Psychophysics, 2020.
The final authenticated version is available online at:
https://doi.org/10.3758/s13414-020-02032-z
Nilsson 2020 What Came Out of Visual Memory, Inferences from Decay of Difference-Thresholds.pdf
Thomy Nilsson. Emeritus Prof. Uof PEI. Canada
#08********08
#] 06Jun2023 KEEP survey on chatGPT use in education (see email)
https://docs.google.com/forms/d/1M9qOziU4C3TsPQKCcdIeeIP180ssS1UlS-eUF3Q-bEA/viewform?edit_requested=true
"$d_web"'ProjMini/Transformer NNs/230604 KEEP survey ChatGPT and AI Usage (Students).html'
"$d_web"'ProjMini/Transformer NNs/230604 KEEP survey ChatGPT and AI Usage (Teachers).html'
not me - I'm neither [teacher, student] :
Which of the following AI tools have you used? (Select all that apply) *
ChatGPT
Bing AI
Google Bard
Notion AI
Poe
HuggingChat
Jasper
Midjourney
DALL-E or DALL-E 2
GitHub Copilot
Runway
Other:
#08********08
#] 01Jun2023 search "Richard Sutton and consciousness"
https://cifar.ca/research-programs/brain-mind-consciousness/#topskipToContent
Exploring theories of consciousness
Program Co-Director Anil Seth (University of Sussex) and Fellow Tim Bayne (Monash University) have been putting theories about the biological and physical basis of consciousness to the test to explore how they relate to each other and whether they can be empirically distinguished. They are reviewing four prominent theoretical approaches to consciousness: higher-order theories, global workspace theories, re-entry and predictive processing theories, and integrated information theory. As part of this process, they are identifying which aspects of consciousness each approach proposes to explain, what their neurobiological commitments are and what empirical data are cited in their support. They are also considering how some prominent empirical debates might distinguish among these theories and are outlining three ways in which new theories need to be developed to deliver a mature regimen of theory-testing in the neuroscience of consciousness.
>> interesting : four prominent theoretical approaches to consciousness:
higher-order theories
global workspace theories
re-entry and predictive processing theories
integrated information theory
#08********08
#] 31May2023 Krichmar, Connectionists: Sentient AI Survey Results
saveEmTo Evolution email :
1_Newsgoups slow/neural-Connectionist themes/230531 Krichmar: Sentient AI Survey Results
-------- Forwarded Message --------
From: Jeffrey L Krichmar
To: connectionists@cs.cmu.edu
Subject: Connectionists: Sentient AI Survey Results
Date: Tue, 30 May 2023 14:30:24 -0700
Dear Connectionists,
I am teaching an undergraduate course on “AI in Culture and Media”. Most students are in our Cognitive Sciences and Psychology programs. Last week we had a discussion and debate on AI, Consciousness, and Machine Ethics. After the debate, around 70 students filled out a survey responding to these questions.
Q1: Do you think it is possible to build conscious or sentient AI? 65% answered yes.
Q2: Do you think we should build conscious or sentient AI? 22% answered yes
Q3: Do you think AI should have rights? 54% answered yes
I thought many of you would find this interesting. And my students would like to hear your views on the topic.
Best regards,
Jeff Krichmar
Department of Cognitive Sciences
2328 Social & Behavioral Sciences Gateway
University of California, Irvine
Irvine, CA 92697-5100
jkrichma@uci.edu
http://www.socsci.uci.edu/~jkrichma
https://www.penguinrandomhouse.com/books/716394/neurorobotics-by-tiffany-j-hwu-and-jeffrey-l-krichmar/
#08********08
#] 24May2023 da Silva, Elnabarawy, Wunsch: survey of ART NN models for engineering applications
Leonardo Enzo Brito da Silva, Islam Elnabarawy, Donald C. Wunsch,
A survey of adaptive resonance theory neural network models for engineering applications,
Neural Networks, Volume 120, 2019, Pages 167-203, https://www.sciencedirect.com/science/article/abs/pii/S0893608019302734
Abstract: This survey samples from the ever-growing family of adaptive resonance theory (ART) neural network models used to perform the three primary machine learning modalities, namely, unsupervised, supervised and reinforcement learning. It comprises a representative list from classic to contemporary ART models, thereby painting a general picture of the architectures developed by researchers over the past 30 years. The learning dynamics of these ART models are briefly described, and their distinctive characteristics such as code representation, long-term memory, and corresponding geometric interpretation are discussed. Useful engineering properties of ART (speed, configurability, explainability, parallelization and hardware implementation) are examined along with current challenges. Finally, a compilation of online software libraries is provided. It is expected that this overview will be helpful to new and seasoned ART researchers.
Keywords: Adaptive resonance theory; Clustering; Classification; Regression; Reinforcement learning; Survey
September 2019, Version of Record 4 December 2019.
Code repositories
A list of publicly available online source code/repositories is provided below:
y http://github.com/ACIL-Group
Missouri S&T Applied Computational Intelligence Laboratory
https://github.com/orgs/ACIL-Group/repositories?type=all
>> Howell: download all later...
x http://techlab.bu.edu/main/article/software
x https://www.ntu.edu.sg/home/asahtan/downloads.htm
y http://www2.imse-cnm.csic.es/%7Ebernabe
software link not obvious, try download another day...
x http://ee.bgu.ac.il/%7Eboaz/software.html
The connection has timed out
y https://libtopoart.eu/
A library of lifelong learning neural networks based on the Adaptive Resonance Theory
LibTopoART is a software library providing platform independent C# implementations of several neural networks based on the TopoART architecture. This architecture has been developed as a unified machine learning approach tackling frequent problems arising in cognitive robotics and advanced machine learning, such as online-learning, lifelong learning from data streams, as well as incremental learning and prediction from non-stationary data, noisy data, imbalanced data, and incomplete data.
The base neural network TopoART (TA) is an incremental neural network combining elements of several other approaches, in particular, Adaptive Resonance Theory (ART) and topology-learning networks. It is capable of parallel stable on-line clustering of stationary or non-stationary data at multiple levels of detail. These capabilities are complemented by derived neural networks dedicated to tasks such as classification, episodic clustering, and regression.
>> Howell: downloaded LibTopoART v0.97 txz file
"$d_web"'CompLangs/ART/LibTopoART_v0.97.0/'
#08********08
#] 22May2023 Phillips 20Jul2023 "The Cooperative Neuron: Cellular Foundations of Mental Life"
W.A. Phillips 20Jul2023 "The Cooperative Neuron: Cellular Foundations of Mental Life" Oxford, 384pp, ISBN: 9780198876984 https://global.oup.com/academic/product/the-cooperative-neuron-9780198876984
The Cooperative Neuron
Cellular Foundations of Mental Life
William A. Phillips
Relates mental life to the intracellular processes from which it arises
Introduces the new field of cellular psychology that is arising from those discoveries
Describes the discoveries in cellular neurophysiology for a wide audience of those working in or intrigued by the behavioral and brain sciences
Provides new insights into the similarities and differences between a wide range of neurodevelopmental and other psychopathologies
Description
The Cooperative Neuron is part of a revolution that is occurring in the sciences of brain and mind. It explores the new field of cellular psychology, a field built upon the recent discovery that many neurons in the brain cooperate to seek agreement in deciding what's relevant in the current context. This cooperative context-sensitivity provides the cellular foundations for knowledge, doubt, imagination, self-development, and the search for purpose in life. This emerging field has far-reaching and fundamental implications for psychology, neuroscience, psychiatry, neurology, and the philosophy of mind.
In a clear and accessible style, the book explains the neuroscience to psychologists, the psychology to neuroscientists, and both to philosophers, students of the behavioral and brain sciences, and to anyone intrigued by the enduring mystery of how brains can be minds.
#08********08
#] 16May2023 TrNNs_ART_TOC_cleanHtml()
remove :
s|$| |;
s||<#LI>|;
19May2023 replaced in 'Grossbergs list of figures.html' :
p100fib03.14 Boundaries are completed
#08********08
#] 16May2023 TrNNs_ART_TOC_grepL_bag() - keep on-hand themes in [chapter, section] headings
+-----+
olde code
+--+
TrNNs_ART_TOC_grep() :
# pTmp1="$d_temp"'TrNNs_ART_TOC_grep tmp1.html'
# pTmp2="$d_temp"'TrNNs_ART_TOC_grep tmp2.html'
# pTmp3="$d_temp"'TrNNs_ART_TOC_grep tmp3.html'
echo "pTmp1 = $pTmp1"
echo "pTmp2 = $pTmp2"
echo "pTmp3 = $pTmp3"
+--+
TrNNs_ART_TOC_grepL_bag() :
# Core themes of the book
title='ART'
grepStr='ART'
TrNNs_ART_TOC_grep ' ' "$title" "$grepStr"
title='Consciousness'
grepStr='Consciousness'
TrNNs_ART_TOC_grep '-i' "$title" "$grepStr"
title='conscious vs non-conscious'
grepStr='non-conscious'
TrNNs_ART_TOC_grep '-i' "$title" "$grepStr"
# Successful [datamodelling, applications] showing effectiveness of cART etc
title='[bio, neuro, psycho]logy data'
grepStr='data\|monkey\|sea urchin\|Slime mold\|slug\|biology\|Psychological\|neurophysiological\|perceptual'
TrNNs_ART_TOC_grep '-i' "$title" "$grepStr"
# this list was retyped manually from a paragraph in the book
# there are no section headings for this, too diverse!
title='Credibility from non-[bio, psycho]logical applications of Grossbergs ART'
grepStr='controller'
TrNNs_ART_TOC_grep '-i' "$title" "$grepStr"
# Fun themes
title='art (painting etc)'
grepStr='DaVinci\|Monet\|Matisse\|Seurat'
TrNNs_ART_TOC_grep '-i' "$title" "$grepStr"
title='Brain disorders and disease'
grepStr='autism\|amnesia\|Alzheimer\|sleep\|Agnosia\|schizophrenia\|ADHD\|Theory of Mind\|HeLa cancer cells\|Xenopus oocytes\|cardiac myocytes\|helplessness\|self-punitive\|fetish\|hallucinate'
TrNNs_ART_TOC_grep '-i' "$title" "$grepStr"
title='hippocampus (Grossberg: The hippocampus IS a cognitive map!)'
grepStr='hippocampus\|hippocampal'
TrNNs_ART_TOC_grep '-i' "$title" "$grepStr"
# Strange themes
title='AI, machine intelligence, etc'
grepStr='artificial intelligence\|autonomous adaptive mobile intelligence\|machine intelligence'
TrNNs_ART_TOC_grep '-i' "$title" "$grepStr"
title='Auditory continuity illusion'
grepStr='auditory continuity'
TrNNs_ART_TOC_grep '-i' "$title" "$grepStr"
title='Brain is NOT Bayesian!'
grepStr='Bayesian'
TrNNs_ART_TOC_grep '-i' "$title" "$grepStr"
title='brain rhythms (Howell: = Schumann resonances in atmosphere down to Earths surface)'
grepStr='theta\|gamma\|beta\|delta'
TrNNs_ART_TOC_grep '-i' "$title" "$grepStr"
title='Explainable AI'
grepStr='Explainable AI'
TrNNs_ART_TOC_grep '-i' "$title" "$grepStr"
title='informational noise suppression'
grepStr='noise suppression\|rate spectra'
TrNNs_ART_TOC_grep '-i' "$title" "$grepStr"
title='learning and development '
grepStr='Infant development\|adult learning\|cortical development'
TrNNs_ART_TOC_grep '-i' "$title" "$grepStr"
title='logic vs connectionist'
grepStr='logic\|connectionist'
TrNNs_ART_TOC_grep '-i' "$title" "$grepStr"
title='Marcus, Pinker, Chomsky'
grepStr='Marcus\|Pinker\|Chomsky'
TrNNs_ART_TOC_grep '-i' "$title" "$grepStr"
title='neurotransmitter'
grepStr='neurotransmitter\|dopamine\|serotonin\|noradrenaline\|acetylcholine'
TrNNs_ART_TOC_grep '-i' "$title" "$grepStr"
title='Why are there hexagonal grid cell receptive fields?'
grepStr='hexagonal'
TrNNs_ART_TOC_grep '-i' "$title" "$grepStr"
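the helper TrNNs_ART_TOC_grep itself isn't copied above; a minimal sketch of what it might do (the input fName and output layout are assumptions, not the real implementation) :
TrNNs_ART_TOC_grep_sketch() {
	caseFlag="$1"   # '-i' for case-insensitive, ' ' for exact case
	title="$2"      # heading printed above the hits
	grepStr="$3"    # \|-separated alternation of search terms
	pTOC="$d_web"'ProjMini/TrNNs_ART/Grossbergs TableOfContents.html'   # assumed input file
	echo "$title"
	grep $caseFlag "$grepStr" "$pTOC"
	echo ''
}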
#08********08
#] 14May2023 *** PCA, IaCA, fMRI - how relevant?
Calibre - I downloaded via LMDE Software manager, says can read Kindle
https://read.amazon.ca/?asin=B094W6BBKN&ref_=dbs_t_r_kcr
#08********08
#] 11May2023 emto Stephen Grossberg: status, and permissions will be required
I may need your permission at some time to post a number of [module, modular architecture] images that I can compare to "Attention is all you need" TrNN images. I may also need permission because I have retyped (many typos to correct) the [chapter title, section heading]s of your book so that it is easier for me to search back and forth as I read :
"http://www.BillHowell.ca/ProjMini/TrNNs_ART/Grossbergs TableOfContents.html"
This may also make it easier for others to see the [breadth, depth, data support, etc] of your work.
I have also done a number of theme searches (very incomplete, initial) as listed in :
"http://www.BillHowell.ca/ProjMini/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html"
Of course each reader of your book will have their own interests, but they can do their own searches after downloading the "Table of Contents" webPage.
I liked your reworking of my initial email very much, and it appears in two of the webPages.
You've probably already posted the section headings to one of your webPages, and hopefully the [index, reference]s as well (they are far too large for me to re-type!). If so, please send the links to me.
Don't waste time going through the webSite, as it is very incomplete. I am merely sending this email to let you know that I continue to work towards a connectionist posting. My own [summary, commentary] would not help me with the work I would like to do, and I quickly realized it wouldn't help others, as the best service that I can do at this time is simply to bring out details of the book for others who have not yet purchased it.
I have spent the last month on [income taxes, staying with my mother, catch-up reading on an awesome replacement theory for quantum mechanics], and "going backwards" on getting background information ready for a Connectionist posting about cART and the Transformer NNs. As such, I haven't been working on checking into the potential consciousness of LLMs, nor reading further into your "Conscious Mind, Resonant Brain", but I need to do the background work anyways. I was stunned at the [breadth, depth] of your book, and it will take months to go carefully through it.
#08********08
#] 11May2023 sequential [webReady, lftp update] of TrNNs&ART
open 'fileops run.sh' and ensure that "active" command @ end-of-file is :
dWeb_addHdrFtr_dTmp "$d_web"'ProjMini/TrNNs_ART/' # prepare files in dWeb for upload
$ bash "$d_bin"'fileops run.sh'
>> works great, no errors
open 'lftp update specified dir.sh' and ensure that "active" command @ end-of-file is :
dWeb_update_all '/media/bill/ramdisk/dWeb2/' '/billhowell.ca/ProjMini/TrNNs_ART/'
$ bash "$d_PROJECTS"'bin - secure/lftp update specified dir.sh'
>> works great, no errors
browser load "http://www.billhowell.ca/ProjMini/TrNNs_ART/Introduction.html"
check various links etc
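minimal wrapper chaining the two steps, so the upload only runs if the webReady prep succeeds (assumes the "active" commands at the end of both scripts are already set as above) :
$ bash "$d_bin"'fileops run.sh' && bash "$d_PROJECTS"'bin - secure/lftp update specified dir.sh'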
#08********08
#] 11May2023 upload "$d_web"'ProjMini/TrNNs_ART/', special treatment of html files
[finish, test] "$d_PROJECTS""bin - secure/lftp update specified dir.sh"
>> works now, after some tweaking, change of online dirNames
#08********08
#] 11May2023 bash auto-generation of html TableOfContents
$ grep ' ||;s|["]||g;s|\(.*\)|\n\t\t\t\t \1|'
Explainable AI
logic vs connectionist
Marcus, Pinker, Chomsky
Why are there hexagonal grid cell receptive fields?
11May2023 Howell: This webPage won't be drafted for a few months, given the need to focus on project priorities.
11May2023 Howell: This webPage is only just being drafted. It is not a core priority (related to the 3 questions of the overview), so progress will be slow.
11May2023 Howell: This webPage is only just being drafted, with problems of finding old [reports, references]. It is not a core priority (related to the 3 questions of the overview), so progress will be slow.
11May2023 Howell: This webPage won't be drafted for a few months, given the need to focus on project priorities, and my sense that it's best to apply the concepts myself to get a feel for the subject. Or better yet, to search from related comments by others.
11May2023 Howell: This webPage is very incomplete. It won't be drafted for a few months, given the need to focus on project priorities.
#08********08
#] 10May2023 Table of Contents cleanup
continued ...->
popular vs. profound science
mis-attribution of research
???
new ideas are always needed
For whom the bell tolls (Sejnowski)
cART & Transformer NNs
#08********08
#] 08May2023 [page numbers, missing] section headings
#08********08
#] 05May2023 TblOfContents, Why is ART unknown? chatGPT [as-is, Grossberg’s book],
#] pictures of [module, modal architecture]s, Sejnowski, use chatGPT
... later...
#08********08
#] 03May2023 list of [bio, psych] data
the whole book provides this; best for now :
pxixh07 list of topics
#08********08
#] 02May2023 Table of consciousness concepts
+-----+
olde code
#08********08
#] 01May2023 search "google lamda download"
+-----+
https://github.com/conceptofmind/LaMDA-rlhf-pytorch
LaMDA-pytorch
Open-source pre-training implementation of Google's LaMDA research paper in PyTorch. The totally not sentient AI. This repository will cover the 2B parameter implementation of the pre-training architecture as that is likely what most can afford to train. You can review Google's latest blog post from 2022 which details LaMDA here. You can also view their previous blog post from 2021 on the model here.
/home/bill/web/ProjMini/TrNNs_ART/LaMDA-rlhf-pytorch-0.0.2/
-> source code (tar.gz)
>> OK, it did download, how do I use it?
seems that I may have to learn
from lamda_pytorch.lamda_pytorch import lamda_model
>> how do I get that source code?
+--+
https://github.com/conceptofmind/LaMDA-rlhf-pytorch
Basic Usage - Pre-training
# usage snippet from the repo README; the imports are an assumption - check the repo for the exact module paths
import torch
from lamda_pytorch.lamda_pytorch import LaMDA                # assumed import path
from lamda_pytorch.utils.utils import AutoregressiveWrapper  # assumed import path
lamda_base = LaMDA(
    num_tokens = 20000,   # vocabulary size
    dim = 512,            # embedding / model dimension
    dim_head = 64,        # dimension per attention head
    depth = 12,           # number of transformer layers
    heads = 8             # attention heads per layer
)
lamda = AutoregressiveWrapper(lamda_base, max_seq_len = 512)
tokens = torch.randint(0, 20000, (1, 512)) # mock token data
logits = lamda(tokens)
print(logits)
+--+
https://github.com/conceptofmind/LaMDA-rlhf-pytorch
Notes on training at scale:
There may be issues with NaN for fp16 training.
Pipeline parallelism should be used with ZeRO 1, not ZeRO 2.
+-----+
search "how do I download a git"
+--+
https://stackoverflow.com/questions/110205/want-to-download-a-git-repository-what-do-i-need-windows-machine
Download Git on Msys. Then:
git clone git://project.url.here
answered Sep 21, 2008 at 3:55
Greg Hewgill
+--+
https://www.howtogeek.com/827348/how-to-download-files-from-github/
How to Download Files From GitHub
Benj Edwards
@benjedwards
Aug 23, 2022, 11:00 am EDT | 2 min read
I didn't use :
$ git clone gt:"https://github.com/conceptofmind/LaMDA-rlhf-pytorch"
+--+
https://stackoverflow.com/questions/3697707/how-do-i-download-a-specific-git-commit-from-a-repository
How do I download a specific git commit from a repository?
Asked 12 years, 7 months ago
Modified 9 months ago
Viewed 44k times
+--+
https://blog.hubspot.com/website/download-from-github
How to Download From GitHub: A Beginner's Guide
Download Now: Free Coding Templates
Jamie Juviler
Updated: April 27, 2023
Published: November 15, 2022
How to Download a Release From GitHub
Repositories may also put out releases, which are packaged versions of the project. To download a release:
1. Navigate to the GitHub repository page. If it’s a public repository, you can visit the page without logging in. If it’s a private repository, you’ll need to log in and have the proper permissions to access it.
2. Click Releases, located on the right-side panel.
3. You’ll be brought to a page listing releases from newest to oldest. Under the release that you want to download, locate the Assets section. Click a file under this section to download it.
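command-line equivalent of a release download (assumption: the tag name matches the v0.0.2 release listed in the search results further below) :
$ wget 'https://github.com/conceptofmind/LaMDA-rlhf-pytorch/archive/refs/tags/v0.0.2.tar.gz'
$ tar -xzf v0.0.2.tar.gz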
+-----+
search "download lamda_pytorch"
https://sourceforge.net › projects › lamda-pytorch.mirror
LaMDA-pytorch download | SourceForge.net
Mar 25, 2023 - Download Summary Files Reviews Open-source pre-training implementation of Google's LaMDA research paper in PyTorch. The totally not sentient AI. This repository will cover the 2B parameter implementation of the pre-training architecture as that is likely what most can afford to train
https://github.com › jarlold › LaMDA-pytorch
GitHub - jarlold/LaMDA-pytorch: Open-source pre-training implementation ...
Jan 17, 2023 - LaMDA-pytorch Open-source pre-training implementation of Google's LaMDA research paper in PyTorch. The totally not sentient AI. This repository will cover the 2B parameter implementation of the pre-training architecture as that is likely what most can afford to train. You can review Google's latest blog post from 2022 which details LaMDA here.
https://sourceforge.net › projects › lamda-pytorch.mirror › files
LaMDA-pytorch - Browse Files at SourceForge.net
Jul 1, 2022 - LaMDA-pytorch Files Open-source pre-training implementation of Google's LaMDA in PyTorch. This is an exact mirror of the LaMDA-pytorch project, ... Download Latest Version v0.0.2.zip (76.7 kB) Get Updates. Home Name Modified Size Info Downloads / Week; v0.0.2: 2022-07-01: 1. v0.0.1: 2022-06-24: 0.
This is an exact mirror of the LaMDA-pytorch project, hosted at https://github.com/conceptofmind/LaMDA-rlhf-pytorch. SourceForge is not affiliated with LaMDA-pytorch. For more information, see the SourceForge Open Source Mirror Directory.
https://github.com › RichardBatka › LaMDA-pytorch-AI
RichardBatka/LaMDA-pytorch-AI - Github
LaMDA-pytorch Open-source pre-training implementation of Google's LaMDA research paper in PyTorch. The totally not sentient AI. This repository will cover the 2B parameter implementation of the pre-training architecture as that is likely what most can afford to train. You can review Google's latest blog post from 2022 which details LaMDA here.
forked from conceptofmind/LaMDA-rlhf-pytorch
>> Howell: multiple versions go back to one I gitted : conceptOfMind
#08********08
#] 01May2023 table comparing consciousness concepts
#08********08
#] 01May2023 search: http://www.BillHowell.ca/ replace: /home/bill/web/
/home/bill/web/webWork files/ /home/bill/web/webWork/
setup for more effective work
will have to do special html file upload script!!!
see TrNNs_ART_pStrPL() - replace strPL in d_TrNN_ART
manual create pStrPL (pairs above)
povrL created by script
$ bash "$d_bin"'fileops run.sh'
>> works after series of corrections
some renaming of files
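minimal sketch of the pairwise substitution over the html files (assumption: this is roughly what TrNNs_ART_pStrPL() drives; the real scripts go through [povrL, ptmp] files rather than editing in place) :
pStrPL_sketch() {
	find "$d_web"'ProjMini/TrNNs_ART/' -name '*.html' | while IFS= read -r pHtm; do
		sed -i 's|http://www.BillHowell.ca/|/home/bill/web/|g; s|/home/bill/web/webWork files/|/home/bill/web/webWork/|g' "$pHtm"
	done
}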
#08********08
#] 27Apr2023 Eye on AI: Ilya Sutskever: The Mastermind Behind GPT-4 and the Future of AI
https://www.youtube.com/watch?v=SjhIlw3Iffs
Ilya Sutskever: The Mastermind Behind GPT-4 and the Future of AI
Eye on AI
11.3K subscribers
01May2023 watch video
230,426 views Premiered Mar 15, 2023 Eye on AI
In this podcast episode, Ilya Sutskever, the co-founder and chief scientist at OpenAI, discusses his vision for the future of artificial intelligence (AI), including large language models like GPT-4.
Sutskever starts by explaining the importance of AI research and how OpenAI is working to advance the field. He shares his views on the ethical considerations of AI development and the potential impact of AI on society.
The conversation then moves on to large language models and their capabilities. Sutskever talks about the challenges of developing GPT-4 and the limitations of current models. He discusses the potential for large language models to generate a text that is indistinguishable from human writing and how this technology could be used in the future.
Sutskever also shares his views on AI-aided democracy and how AI could help solve global problems such as climate change and poverty. He emphasises the importance of building AI systems that are transparent, ethical, and aligned with human values.
Throughout the conversation, Sutskever provides insights into the current state of AI research, the challenges facing the field, and his vision for the future of AI. This podcast episode is a must-listen for anyone interested in the intersection of AI, language, and society.
Timestamps:
00:04 Introduction of Craig Smith and Ilya Sutskever.
01:00 Sutskever's AI and consciousness interests.
02:30 Sutskever's start in machine learning with Hinton.
03:45 Realization about training large neural networks.
06:33 Convolutional neural network breakthroughs and imagenet.
08:36 Predicting the next thing for unsupervised learning.
10:24 Development of GPT-3 and scaling in deep learning.
11:42 Specific scaling in deep learning and potential discovery.
13:01 Small changes can have big impact.
13:46 Limits of large language models and lack of understanding.
14:32 Difficulty in discussing limits of language models.
15:13 Statistical regularities lead to better understanding of world.
16:33 Limitations of language models and hope for reinforcement learning.
17:52 Teaching neural nets through interaction with humans.
21:44 Multimodal understanding not necessary for language models.
25:28 Autoregressive transformers and high-dimensional distributions.
26:02 Autoregressive transformers work well on images.
27:09 Pixels represented like a string of text.
29:40 Large generative models learn compressed representations of real-world processes.
31:31 Human teachers needed to guide reinforcement learning process.
35:10 Opportunity to teach AI models more skills with less data.
39:57 Desirable to have democratic process for providing information.
41:15 Impossible to understand everything in complicated situations.
Craig Smith Twitter: https://twitter.com/craigss
+-----+
Howell
01:00 Geoff Hinton said he was brains behind 2012 stunning AlexNet that started AI revolution
03:33 2003 assumed machines can't learn? bullshit!!
05:45 Alex ?? & interviewer Craig Smith discussed ???
07:42 Sutskever: Hinton & imageNet
08:15 Sutskever: Google self-attention paper - word prediction long time-range,
prediction is compression
whether it would solve unsupervised learning, unsupervised learning was mysterious
first test - clear that TrNNs were solving problem, led to GP-3
11:15 Smith: Rich Sutton said don't need new algorithms, just need to scale,
Sutskever didn't influence me
11:57 Sutskever: great breakthrough of Deep Learning was productive use of big scale, worked better
it matters what you scale
13:16 Smith: limitation of LLMs - limited to knowledge they are trained on (eg Chomsky)
objective is to satisfy statistical nature of problem
is something being done to address this?
14:30 Sutskever: difficult to discuss limitations - much has changed in 2 years,
very different now, not confident limitations will be there in 2 years
also these models just learn statistical regularities : I feel these are far more important than assumed
prediction is important - must understand underlying data & the world
!!!!*** shocking degree of understanding of the world seen through the lens of man
17:10 example: ?Sidney
18:00 hallucinations - various technical reasons, different now
reason why learns so well, but outputs not as good as want, reinforcement learning on outputs
hallucinations - true, makes things up, greatly limits value, I have confidence in reinforcement learning on outputs from human feedbacks. Let's find out
20:07 Smith: eg it told me I won a Pulitzer
20:35 Sutskever: we hire people to teach chatGPT how to behave,
right now approach is a bit different
it's not what YOU wanted, I think there is a high chance will work
21:30 Smith: Yann LeCunn
Sutskever: I reviewed Yann's proposal, multi-modal understanding
we have done a lot of work on that : clip and ?dowie?
I don't see as binary [either, or]
some things are much easier to learn, but can learn more complex more slowly
embeddings - representations of high dimension, eg color embeddings are exactly right
necessity of multi-modality - good direction to proceed
one of big challenges is high-D representation, LeCunn - already has properties
iGPT - eg Dalle_E 1, Google ?party?, definitely can predict high-D distributions
26:46 Smith: turning vectors into language
Sutskever: turn sequence into bits, doesn't matter concept - issue is effectiveness, ?speed?
27:50? Smith: intuitively doesn't seem like teaching a language model,
LeCunn says come up with algorithm
Sutskever: I claim that LLMs already have everything they need to know
large generative models learn some compressed version, situations etc
better the engine - better the quality
the tools are doing the majority of work, but still need oversight
overall, use pre-trained models, then ?improve? via teachers who also use AI
so the teachers efficiency is increasing
32:00 Smith: you're saying that through this process model will become more accurate
Sutskever: yes, in the outputs, teachers' final correction, work as efficient as possible
additional training, we will find out really soon
32:27 Smith: what are you researching now?
Sutskever: I can't talk in detail, just broad strokes
make models more reliable, controllable, learn from less data, less hallucinate
how far in the future? - near
34:23 Smith: similarities with the brain?
Geoff Hinton interesting LLms hold tremendous data with modest parameters compared to brain
is hardware or training
Sutskever: related to my previous comments on learning from data
later in training requires far less data
more generally - requires creative ideas
faster learning is very nice
36:31 Smith: do we need faster processors to scale further?
Sutskever: of course want faster processors, does this outweigh the cost? If so, then good
27:54 Smith: are you involved at all with hardware, eg ?Cerebrus?
Sutskever: no, all hardware comes from ?Azura?
38:12 Smith: you did talk at one point about democracy & impact that AI can have
model could come up with optimal solutions, help humans manage society
Sutskever: no question systems will become far more capable
ability to come up with solutions, unpredictable how governments will use this
desirable for citizens to provide machines with what they want to see
high ?diversity, bandwidth? of citizen input
Smith: world model - will machines understand
Sutskever: can read a hundred books,
or one book [slowly, carefully] and get more out of it
fundamentally impossible to understand everything in some sense, already beyond humans
can be incredibly helpful
transcript : http://www.eye-on.AI
https://www.eye-on.ai/s/Ilya-Sutskever-final-transcript.docx
/home/bill/web/References/TrNNs/Craig Smith, eye-on.AI Ilya-Sutskever Mastermind Behind GPT-4 and the Future of AI.docx
also :
episode 119 : Danny Tobey, an attorney with the global law firm DLA Piper, is at the forefront of the changing legal dynamics surrounding artificial intelligence. He talked about the current state of legislation and regulation, how regulatory bodies like the Federal Trade Commission are already tackling issues and what the law firm of the future will look like as AI transforms the economy.
/home/bill/web/References/TrNNs/Craig Smith, eye-on.AI Danny Tobey & legal profession
#08********08
#] 19Apr2023 start_TrNN_ART, txt outline, webPages
"$d_bin"'starter/start_TrNNs_ART.sh' - already exists!
/home/bill/web/ProjMini/TrNNs_ART/0_TrNNs_ART notes.txt
/home/bill/web/ProjMini/TrNNs_ART/1_outline.txt
/home/bill/web/ProjMini/TrNNs_ART/Grossberg: CLEARS-cART notes.txt
/home/bill/web/ProjMini/TrNNs_ART/Grossberg concepts.html
/home/bill/web/ProjMini/TrNNs_ART/paleontology.html
/home/bill/web/ProjMini/TrNNs_ART/[sentience, conscience] and Transformer NNs.html
/home/bill/web/ProjMini/TrNNs_ART/[sentience, conscience] and Transformer NNs notes.txt
/home/bill/web/ProjMini/TrNNs_ART/architectures: TrNNs and ART.html
/home/bill/web/ProjMini/TrNNs_ART/architectures: TrNNs and ART notes.txt
I need to modify (later, with bash work) : "$d_web"'Forms/0_form webPage.html'
#08********08
#] 19Apr2023 Consciousness and Kinkaide Zoom, my lessons learnt
PROJECTS/Personal & Events/Sony digicorder/230407 Consciousness and Kinkaide Zoom.mp3'
Kincaide Zoom [policy, hi-tech, mix] audience
[conformist, mainstream] reactions, important to a limited extent, diversity
[believe, definition] system limited
no new points at all regarding chatGPT
my points : Grossberg [biology, pyschology, neuroscience] supported
4 or 5 strong opinions - nothing novel, conformist, what heard from others
they emphasized no definition of consciousness
I commented on distiction between [, non-]conscious thinking
?Bruce Matichuk? - adamant that he is knowledgeable
doing old-guy PhD@UofA, final year
normally [great, insightful] comments, but not at all with consciousness
Kincaide worked on neurotransmitters?
Zoom worked well to see conformist thinking
I didn't handle it well, certain frustration on my part
sounding out [opinion, reaction] - better pick up on web, [, non-]scientific papers
mostly not as good as a good non-scientist who is interested
comments (Gary Marcus etc)
Zoom really not useful for my intended posting on Connectionist
wait before posting
get substantive commentary
I'm not sure what the best approach is for this
only way is to seek people on internet - touch base with targeted individuals,
but this is usually one-way sale of ideas, rare that people are good at thinking
Grossberg CLEARS [Conscious, Learning, Expectation, Attention, Resonance, Synchrony]
Large Language Models (eg chatGPT) [protero, incipient] consciousness leading to CLEAR
especially LAMINART (laminar ART), cART (consciousness ART)
LLMs may accidentally incorporate Grossberg's thinking at primitive level
will it be possible to get enough information to [show, speculate] this
some way of formally incorporating ART into LLMs, tricking LLMs
small-world chatGPT - is being done
ART is not just another machine learning cookie-cutter
Fri 07Apr2023 back to subject of what did I learn from my [failure, frustration] of the Kinkaide Zoom
like Robert Hecht-Neilson over-the-top enthusiasm at WCCI2002 Honolulu
"... Out advice to you is to start your research immediately, to run as fast as you possibly can, and never look back. In a few short months you will see the rest of the world desperately scramblong to catch ip. ..." (I forget the phrase)
RHN it a dead-pan, struck me as odd, eventually came to conclusion because they all knew how the brain worked, even though all of their ideas were different
mammalian cogition concept was probably too simple for them?
know what the truth is, shit out new
none are strong enough to see at simple level that
their thinking is totally inadequate
not capable of escaping their belief systems
certainly not capable of discussing
I should have "expected" that: my "catastrophic failure of [rational, logical, scientific] reasoning" (evolved several years later or before?)
IJCNN2007 Orlando - sold his new book
#08********08
#] 10Apr2023 search "Transformer Neural Network structure"
+-----+
https://builtin.com/artificial-intelligence/transformer-neural-network
Transformer Neural Networks: A Step-by-Step Breakdown
The transformer neural network was first proposed in a 2017 paper to solve some of the issues of a simple RNN. This guide will introduce you to its operations.
Written by Utkarsh Ankit
Published on Jun. 28, 2022
>> vastly superior to Stefania Cristina's article (below)
+-----+
https://machinelearningmastery.com/the-transformer-model/
The Transformer Model
By Stefania Cristina on September 18, 2022 in Attention
Last Updated on January 6, 2023
Jan 6, 2023 - The encoder block of the Transformer architecture, taken from "Attention Is All You Need". The encoder consists of a stack of N = 6 identical layers, where each layer is composed of two sublayers: The first sublayer implements a multi-head self-attention mechanism.
We have already familiarized ourselves with the concept of self-attention as implemented by the Transformer attention mechanism for neural machine translation. We will now be shifting our focus to the details of the Transformer architecture itself to discover how self-attention can be implemented without relying on the use of recurrence and convolutions.
#08********08
#] 10Apr2023 BARD available only in US & UK, not yet in Canada
https://bard.google.com/
#08********08
#] 08Apr2023 not sent in emto Grossberg - save for webPage
******************
Are [CLEARS, ART] Super-Turing? (as per Hava Siegelmann's past comments on Super-Turing)
"Conscious Mind, resonant Brain" page 47 :
"... Modal architectures and thus less general than the von Neumann architecture that provides the mathematical foundation of modern computers, but much more general than a traditiopnal AI algorithm. ..."
Of course, von Neuman architectures are "universal function approximators"
#08********08
#] 03Apr2023 draft emto Grossberg’s [CLEAR, ART, etc] - already a reason for LLM success?
Subject : Transformer NNs : Accidental stumble towards Grossberg's CLEAR, perhaps later towards c-ART?
I am thinking of posting a comment to Connectionists to see if anyone has been looking for indications of your work in the modern Transformer NNs (TrNN). As part of thinking this through, I would like to pass it by you in case you already have some [thinking, results] on the issue. It's far too long, but it's easy to shorten it, and it's preferable not to forget key points from the start. The draft Connectionists posting below may take time for me to develop, particularly given other deadlines I am facing over the next 2 months, but it gives an initial picture of the direction I am heading.
If you do have some public papers or presentations on the possible linkages between your past work and its potential to [reform, extend] current popular trends in TrNNs based on work by others, I would appreciate links to them.
Deja vu all over again? I suspect that you may again be in the position that others are now developing related ideas without being aware of, or crediting, the earlier work. You've mentioned this in the past, as have Paul Werbos, Juergen Schmidhuber, and probably many others.
Sensitive timing?
This may not be a good time to bring this subject up with the public, with strong pressure to slow down and entangle the TrNNs, notably Large Language Models (LLMs), especially chatGPT. (Others have commented that some of the delays will help MicroSoft by hurting their opponents even more at a critical catch-up stage). But I am more interested in the longer-term, and given problems that I feel permeate the conventional mass media in modern times, manipulation and public danger are already an issue that seems to be a challenge for society, and the Large Language Models (LLMs) are being singled out without looking at the broader context. I guess that's fair to a certain extent, the fear of LLMs seems tangible...
Bill Howell
****************
Early stage, incomplete draft of an open question (Connectionists) :
Subject : Do Transformer NNs mimic some early stage Grossberg concepts from CLEARS to LAMINART?
I am very slowly reading through Stephen Grossberg's "Conscious Mind, Resonant Brain". This really helps me because my early reading of some of his [book, paper]s and watching his (and Gail Carpenter's) presentations, left many long-forgotten gaps.
I am interested in whether the Vaswani etal 2017 "Attention is all you need" paper may accidentally mimic some of Stephen Grossberg's "Consciousness, Learning, Expectation, Attention, Resonance, Synchrony" (CLEARS) concept, but not much of ART and its [precursor, derivative]s (note 1). This is based on the presented "structure" of the Transformer Neural Networks (TrNNs), and their performance and limitations. I suspect that the authors were not very aware of Grossberg's work.
I do not have a "deep understanding" of how the LLMs work, but based primarily on the Vaswani etal 2017 "Attention is all you need" paper, to say the very least "Figure 1: The Transformer - model architecture" resembles (distantly) key structures like Grossberg's "Read Circuit" (2021, page 24, Figure 1.15), "... a recurrent shunting on-center off-surround network with habituative gates ...". How resonance may arise in TrNNs is something I still have to look into, but in a large TrNN, there may be many ways that the architecture could give rise to top-down bottom-up resonance.
Perhaps comments about "strangely effective" Large Language Models (LLMs) are a hint that more is going on than was planned? One of the Google team members publicly suggested that LaMDA's behaviour exhibited some aspects of "sentience", but was apparently terminated for being outspoken?
(Gary Marcus comment - I have to track this and other commentary down)
Can Grossberg's concepts :
- explain some of the successes of TrNNs, as incorporating very preliminary steps towards CLEARS and ART concepts?
- provide an opportunity to more completely implement ART in TrNNs? :
- newer versions of TrNNs might be difficult to influence, although that could happen as they evolve
- Advance the implementation of Cognitive [User Interface, Operating System, API]
considering the linguistic capabilities of TrNNs and their precursors as a kind of , it might be much easier to build an independent add-on ART capability that builds on these models?
- could TrNNs somehow be a pathway to rapid scale-up of ART-like concepts?
As I understand it, this may be one of the reasons that ART has not had as much uptake in the past as might otherwise have been the case. "Machine learners" are primarily focussed on statistics (including Information Theoretics) and simple architectures, which is "easier" than paying attention to fine [architectural, process] detail and the lessons of [brain, psychology].
- lead to more advanced "machine consciousness" via large-scale TrNN LLMs, and further our understanding of consciousness itself, and its benefits in NN systems?
Note (1) Grossberg : ART and its [precursor, derivative]s :
Cognitive-Emotional cycles (CogEm)
Consciousness, Learning, Expectation, Attention, Resonance, Synchrony (CLEARS)
Adaptive Resonance Theory (ART)
cortical LAMINar ART (LAMINART)
consciousness ART (cART)
Stephen Grossberg 2021 “Conscious Mind, Resonant Brain” Oxford University Press, ISBN 978-0-19-007055-7 https://www.bu.edu/eng/profile/stephen-grossberg-ph-d/
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin 12Jun2017 "Attention Is All You Need" [v5] Wed, 6 Dec 2017 03:30:32 UTC https://arxiv.org/abs/1706.03762
#08********08
#] 04Apr2023 John Seiffertt Aug2018 "Adaptive Resonance Theory in the time scales calculus"
John Seiffertt Aug2018 "Adaptive Resonance Theory in the time scales calculus" Neural Networks,
Volume 120, Pages 32-39, ISSN 0893-6080, https://doi.org/10.1016/j.neunet.2019.08.010 https://www.sciencedirect.com/science/article/pii/S0893608019302278
Abstract: Engineering applications of algorithms based on Adaptive Resonance Theory have proven to be fast, reliable, and scalable solutions to modern industrial machine learning problems. A key emerging area of research is in the combination of different kinds of inputs within a single learning architecture along with ensuring the systems have the capacity for lifelong learning. We establish a dynamic equation model of ART in the time scales calculus capable of handling inputs in such mixed domains. We prove theorems establishing that the orienting subsystem can affect learning in the long-term memory storage unit as well as that those remembered exemplars result in stable categories. Further, we contribute to the mathematics of time scales literature itself with novel takes on logic functions in the calculus as well as new representations for the action of weight matrices in generalized domains. Our work extends the core ART theory and algorithms to these important mixed input domains and provides the theoretical foundation for further extensions of ART-based learning strategies for applied engineering work.
Keywords: Machine learning; Adaptive resonance theory; Unsupervised learning; Control theory; Time scales
#24********************************24
#] Done ToDos
20Oct2023 nomenclature side pane : p346fig09.16, p532fig14.08
20Oct2023 remove "no-content" webPages of TrNNs_ART
Grossbergs cellular patterns computing.html
Grossbergs complementary computing.html
etc, etc
# enddoc