#] *********************
#] "$d_web"'My sports & clubs/neural- Marcus, Gary/0_Marcus notes.txt' - ???
# www.BillHowell.ca 8Jul2023 initial
# view in text editor, using constant-width font (eg courier), tabWidth = 3

#48************************************************48
#24************************24
# Table of Contents, generate with :
# $ grep "^#]" "$d_web"'My sports & clubs/neural- Marcus, Gary/0_Marcus notes.txt' | sed "s/^#\]/ /"
#
#24************************24
# Setup, ToDos,

#08********08
#] ??Sep2023
#08********08
#] ??Sep2023
#08********08
#] ??Sep2023
#08********08
#] ??Sep2023

#08********08
#] 13Sep2023 Cars Are the Worst Product Category We Have Ever Reviewed for Privacy
https://foundation.mozilla.org/en/privacynotincluded/articles/its-official-cars-are-the-worst-product-category-we-have-ever-reviewed-for-privacy/
It’s Official: Cars Are the Worst Product Category We Have Ever Reviewed for Privacy
By Jen Caltrider, Misha Rykov and Zoë MacDonald | Sept. 6, 2023
I didn't save it

#08********08
#] 03Sep2023 Doug Lenat Cyc, project
Doug Lenat, Gary Marcus 31Jul2023 "Getting from Generative AI to Trustworthy AI: What LLMs might learn from Cyc"
https://arxiv.org/pdf/2308.04445.pdf
/home/bill/web/My sports & clubs/neural- Marcus, Gary/Lenat, Marcus 31Jul2023 Getting from Generative AI to Trustworthy AI: What LLMs might learn from Cyc.pdf
/home/bill/web/References/Neural Nets/Lenat, Marcus 31Jul2023 Getting from Generative AI to Trustworthy AI: What LLMs might learn from Cyc.pdf

#08********08
#] 08Jul2023 Gary Marcus: Jumpstarting AI Governance
Customizable governance-in-a-box, catalyzed by philanthropy
https://garymarcus.substack.com/p/jumpstarting-ai-governance

&&&&&&&&
Howell - Perhaps we NEED conscious systems to effectively address the needs (with human oversight, of course).
I see this as a necessity, rather than a [luxury, fantasy], given the [complexity, beyond-human-or-organisation size] of current Large Language Models (LLMs) and entirely new concepts to come. [Ethics, morality, law] are perhaps what is most sought, but consciousness (eg the ability to [predict, see [error, side-effect]s, learn, evolve, adapt to changing human demands], who-am-I-the-machine, etc) may be a necessary precursor? Above all, my guess is that all such systems will operate in a [COMPETITIVE, COOPERATIVE] environment, and like [law, markets, social relationships, business, sport, etc] would probably have to [be designed, interact, "survive"] accordingly? Such systems wouldn't just interact in the AI (I prefer CI, for Computational Intelligence, as an acronym - old style); they could also be used as [measures, unknown-unknown identifiers, hypothesis spinners, human and machine global team-builders], and serve to focus on the needles in the hurricane of haystacks that would be difficult to follow for even large organisations of humans. A diversity of such systems acting in the real world, all somewhat different, hopefully not all controlled by the same powers: maybe that's a huge potential benefit of concepts like Marcus etal's "Customizable governance-in-a-box, catalyzed by philanthropy".

#08********08
#] 26Feb2023 Gary Marcus - government pause on public deployment?
>> I should have posted this on Connectionists!!??!!
I agree with Gary Marcus's concerns, but I am also concerned that [mass media, universities, etc] have already been perverted too easily for quite some time (probably since the beginning of man). I agree with many details in the comments by [Phil Lawson, Keith Teare, W. James, Phil Tanny], and my feeling is that regulation, especially in today's politically-correct and biased society, may be the worst threat.
As for Blair Mollitt's reference to the "Precautionary Principle", my warning is that this has too often been "one-sided blind", with enduring consequences. I really hope that doesn't happen here.
On the other hand, current challenges with chatGPT are perhaps anachronistic. The technology may have a 1964+ basis in history (ignoring earlier work), with several revolutions in between. Major new concepts for Transformer neural networks seem impossible to track. Perhaps one way to look at it is beyond the "Large Language Model" framework: consider that LLMs provide the start of "COGNITIVE [User Interface, Operating System, Programming Interface]s" (CUI, COS, CPI), as a great extension of [Command Line Interface (CLI), Graphical User Interface (GUI), Applications Programming Interface (API)]. "Cognitive" relates directly back to Robert Hecht-Nielsen's 2002, 2007 "Confabulation Theory", based on a concept for mammalian cognition. The excitement today resembles some of his work, and the enthusiasm that was expected, but not realized, for it. chatGPT and kin do not have to resolve all issues; they are already an interface to long-existing capabilities in Computational Intelligence (CI - I don't like the AI term, as to me that applies to the Kasparov-versus-Deep Blue era of predominantly [rational, logical, scientific] modes of thinking). It will take more than a generation to work these things out, and by then the beast will be very different.
One concern I have is the extent to which LLMs and their underlying systems provide "Multiple Conflicting Hypotheses" (MCH), and not just the [consensus, mainstream, politically correct] belief systems. Perhaps users will have the ability to delve into MCHs as the systems develop.

+-----+
Grant Castillou
It's becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a human adult level conscious machine?
My bet is on the late Gerald Edelman's Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution, and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with primary consciousness will probably have to come first.

What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990's and 2000's. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing.

I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar's lab at UC Irvine, possibly. Dr.
Edelman's roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461

Howell - Grant Castillou - Thanks for the reference to Gerald Edelman's Extended Theory of Neuronal Group Selection. After I get through Stephen Grossberg's "Conscious Mind, Resonant Brain" book, I will try to remember to get to Edelman's work. The only other theory of consciousness that I felt "somewhat comfortable" with is the late John Taylor's theory (kind of a comparison between expectations and actual results, with an understanding of the effect that one's self has had on the process).

# enddoc