12:15 17Feb2022

Dmytro Sennychenko, head of Ukraine’s State Property Fund, submitted a letter of resignation in November, following the scandalous privatization of the Bilshovyk plant in Kyiv. Sennychenko said the two events weren’t connected.


14:59 15Feb2022

NATO Chief Jens Stoltenberg said Russia’s announcement that it was pulling back some troops after finishing exercises was reason for cautious optimism, but that the evidence could not yet be seen on the ground. The Moscow-based analysis center Conflict Intelligence Team said on Feb. 15 that it had only observed troop movements towards, not away from, Ukraine.


16:28 26Feb2022

The decision to cut Russia off from the international payment system has not been officially issued yet, but the technical preparations are ongoing, according to Ukraine’s Foreign Minister Dmytro Kuleba. “Ukrainian diplomats dedicate this victory to all defenders of Ukraine,” he said.


20:02 23Feb2022

Citing anonymous U.S. officials, the newspaper claimed Russia is likely to begin a large-scale invasion of Ukraine within 48 hours. CNN correspondent Katie Bo Lillis confirmed the report, saying that the eastern city of Kharkiv is “at particular risk.”


2021-11-02 11:33 Media in Progress Ep. 2: What’s in a name?
2021-11-14 15:25 Golden Gate District to become Kyiv’s cultural hub with own brand
2021-11-16 19:16 Ukraine’s military intercepts Russian drone in Donbas
2021-12-01 19:33 Anti-corruption activists say head of Ukraine’s ‘FBI’ appointed after fake contest
2021-12-02 11:33 Media in Progress Ep. 2: What’s in a name?
2021-12-04 11:37 Q&A with Brian Bonner, ex-chief editor of Kyiv Post
2021-12-04 15:44 Biden: ‘I will not accept Russia’s red lines on Ukraine’
2021-12-04 16:46 Netflix responds to criticism over ‘offensive’ Ukrainian character in ‘Emily in Paris’
2021-12-05 15:13 Ukraine to intensify Covid-19 restrictions in ‘yellow’ zones on Dec. 6
2021-12-06 13:53 Police suspect arson after journalist’s cars found burned
2021-12-07 22:40 Biden, Putin hold talks about Russia’s potential invasion of Ukraine
2021-12-08 14:46 ‘Ukrainians Will Resist’ hashtag trends amid looming Russian invasion
2021-12-08 16:14 Kyiv, 8 oblasts leave ‘red’ quarantine zone
2021-12-08 18:14 Journalist: EU imposes sanctions on Kremlin’s mercenary Wagner Group that fought in Donbas
2021-12-12 14:28 Ukrainian State-Owned Enterprises Weekly – Issue 55
2021-12-12 17:15 NBU head complains of political pressure but isn’t worried about central bank’s independence
2021-12-12 19:02 G7 foreign ministers: Russia will face ‘massive consequences’ if it invades Ukraine
2021-12-14 00:01 Health minister falsely claims Ukraine reached WHO’s target of vaccinating 40% of population
2021-12-14 15:36 One of Lukashenko’s main rivals in 2020 election jailed for 18 years
2021-12-15 16:22 Court of appeal overturns decision favoring Kolomoisky’s company in PrivatBank case
2021-12-15 17:32 Explainer: Why Russia wants autonomy for occupied Donbas (and why Ukraine doesn’t)
2021-12-15 19:08 Infamous Ukrainian judge’s brother released on bail after bribery charge
2021-12-18 20:01 Ukrainian State-Owned Enterprises Weekly – Issue 56
2021-12-20 19:18 Accounting Chamber outlines reasons for Ukrzaliznytsia’s Hr 12 billion in losses in 2020
2021-12-21 00:47 Controversial court’s new ruling might cancel anti-corruption prosecutor contest
2021-12-21 21:59 Poroshenko family’s companies fined Hr 283 million by Anti-Monopoly Committee
2021-12-22 20:52 HBO acquires Ukrainian war drama ‘Bad Roads’
2021-12-22 21:40 Supreme Court rejects prosecutor general’s libel suit against newspaper, anti-graft watchdog
2021-12-22 21:47 Russia has 122,000 troops close to Ukraine’s border
2021-12-23 22:15 Top general: Ukraine’s military will respond to enemy fire
2021-12-27 21:22 Kyiv to create territorial defense headquarters ahead of Russia’s potential invasion
2021-12-28 17:02 Ukrainian documentary ‘Home Games’ available on Netflix in Europe
2021-12-28 19:28 Zelensky’s party lawmaker buys nationwide television channel
2021-12-29 17:17 Year of musical introspection: Ukraine’s best albums of 2021
2021-12-29 21:00 World Bank study reveals effects of global warming on Ukraine’s agriculture, forests
2021-12-30 20:20 Ukraine’s soldiers may soon get better, warmer boots
2022-01-01 19:33 Anti-corruption activists say head of Ukraine’s ‘FBI’ appointed after fake contest
2022-01-04 16:46 Netflix responds to criticism over ‘offensive’ Ukrainian character in ‘Emily in Paris’
2022-01-05 15:43 Statement: 28 Ukrainian NGOs call for action against Russia’s closure of Memorial human rights group
2022-01-07 13:32 Timothy Ash: What Kazakhstan’s protests mean for the global economy
2022-01-07 21:07 Kazakh government regains control with Kremlin’s help amid uprising
2022-01-07 22:20 Who can and can’t join Ukraine’s Territorial Defense Force
2022-01-09 17:01 Robert A. McConnell: Talk won’t deter Putin. Here’s what West can do
2022-01-10 19:01 Court extends Medvedchuk’s house arrest in treason case
2022-01-11 22:26 US Republicans draft bill to designate Ukraine a ‘NATO Plus’ state, sanction Russia
2022-01-12 10:41 How Zelensky’s administration moves to dismantle press freedom in Ukraine
2022-01-12 20:02 Court orders closure of bribery case against top member of Zelensky’s administration
2022-01-13 10:36 Media in Progress Ep. 6: Popular protest, inter-elite feuds or Russian intervention – What’s going on in Kazakhstan?
2022-01-19 21:36 Blinken visits Kyiv, warns Russia might attack ‘at very short notice,’ asks about reforms
2022-01-20 02:03 Biden predicts Russia will ‘move in’ on Ukraine, while Zelensky downplays invasion threat
2022-01-20 18:46 Zelensky responds to Biden: ‘There are no minor incursions’
2022-01-23 12:13 Who is Murayev, the man UK exposes as potential leader of Kremlin’s coup
2022-01-24 02:29 US orders diplomats’ families to leave Kyiv, citing ‘threat of Russian military action’
2022-01-24 18:25 UK begins to withdraw non-essential embassy staff, EU ‘won’t do the same,’ says Borrell
2022-01-25 03:39 James Batchik & Doug Klain: It’s time for Europe to defend Ukraine — and itself
2022-01-25 11:24 Early look at Ukraine’s exhibit at Venice Art Biennale – exploration of world’s exhaustion
2022-01-25 17:31 Transparency International: Ukraine’s fight against corruption stagnated in 2021
2022-01-26 23:58 US, NATO don’t cave in to Russian demands
2022-01-27 08:54 Media in Progress Ep. 7: Company culture – What can make or break a team
2022-01-27 18:41 US shared response to Russia’s security demands with Ukraine before sending
2022-01-27 22:27 Stanislav Aseyev: Russia’s bluff of the century. Will there be a war?
2022-01-28 18:38 Defense minister downplays Russian threat, says it’s similar to that of spring 2021
2022-01-28 21:34 Olena Goncharova: Ukraine is not ‘the Ukraine’ and why it matters now
2022-01-29 15:45 Deputy economy minister: Ukraine’s GDP hit $200 billion for first time in 30 years
2022-01-29 18:48 Ukrainian director detained in Italy at Russia’s request removed from Interpol wanted list
2022-01-30 20:20 Ukraine’s soldiers may soon get better, warmer boots
2022-01-31 10:29 Want to help Ukraine’s military as a foreigner? Here’s what you can do
2022-02-01 18:06 Zelensky issues decree to bolster Ukraine’s military
2022-02-04 11:38 US says closer relations with China will not alleviate economic sanctions imposed on Russia.
2022-02-08 11:51 President’s office denies Macron transition period bill claims.
2022-02-10 11:23 US Senators: Russia’s cyberattacks on Ukraine to prompt sanctions even before potential invasion.
2022-02-10 12:44 Kyiv’s Cold War-era bomb shelters in dire state (PHOTOS)
2022-02-10 12:59 Russia’s war cost Ukraine $280 billion
2022-02-14 16:38 Over 50 IT companies join Ukraine’s ‘special tax regime’ Diia City in first three days
2022-02-14 22:01 Zelensky proclaims Feb. 16, stipulated date of Russian invasion, ‘unity day.’
2022-02-14 23:33 Scholz warns Moscow of ‘wide-reaching’ consequences, stays silent on Nord Stream 2
2022-02-15 21:20 Defense ministry, state banks suffer ‘powerful’ cyberattack
2022-02-18 11:06 Covid-19 in Ukraine: 34,938 new cases, 282 new deaths, and 17,796 new vaccinations.
2022-02-19 23:22 Zelensky’s full speech at Munich Security Conference
2022-02-22 18:48 Ukrainian civilians fearlessly prepare for Russia’s offensive
2022-02-22 21:16 Putin says Russia-backed illegitimate ‘states’ in eastern Ukraine have claim to entire regions of Donetsk, Luhansk
2022-02-23 00:27 Breakdown of Putin’s false narratives to justify aggression against Ukraine
2022-02-24 09:00 Timothy Ash: What Russia’s attack means for the world
2022-02-25 08:59 Ukraine’s military successfully defending the area near Chernihiv.
2022-02-26 03:29 Kazakhstan denies Russia
2022-02-26 06:17 Russia’s war on Ukraine: Where fighting is on now (Feb. 26 live updates)
2022-02-26 08:00 A warehouse of Kyivenergo, the capital’s energy generating company, was set on fire.
2022-02-26 22:44 Russia's attack on Kyiv kills 14 military personnel and 6 civilians, including a child - Klitschko
2022-02-27 01:42 Russia’s war on Ukraine: Where fighting is on now (Feb. 27 live updates)
2022-02-27 04:42 European Commission President: Cutting Russian banks off from SWIFT will effectively block Russia's exports and imports
2022-02-27 05:57 Mykolaiv mayor confirms the city is under Ukraine’s control.
2022-02-27 08:22 Enemy's light armored vehicles break into Kharkiv
2022-02-27 11:19 Let's support the unbreakable: NBU opens special account to raise funds for Ukraine's Armed Forces
2022-02-27 12:18 Ukraine parliament proposes UNGA set up tribunal to investigate Putin's crimes
2022-02-27 15:10 Belarus may join Russia in war against Ukraine – Ukraine's ex-defense chief
2022-02-27 15:20 Japan to put sanctions on Putin, support Russia's disconnection from SWIFT
2022-02-28 02:26 McDonald’s and KFC offer food assistance amid Russian invasion
2022-02-28 04:36 Here’s how to support the Ukrainian military
2022-02-28 14:14 Romania supports Ukraine's membership in EU
2022-02-28 17:34 Czech PM supports Ukraine's accession to EU under special procedure
2022-02-28 18:31 President signs application for Ukraine's membership in EU
2022-03-01 00:54 ICC prosecutor to investigate war crimes in Ukraine.
2022-03-02 03:24 Russian paratroopers landed in Kharkiv and attacked one of the city’s military medical centers
2022-03-02 15:01 EXCLUSIVE: Voice message reveals Russian military unit’s catastrophic losses in Ukraine
2022-03-02 19:53 ECHR suspends all procedures that require action from Ukraine.
2022-03-03 05:51 Canada sanctions 10 people in Russia’s energy sector, offers further support to Ukraine.
2022-03-03 18:47 Q&A with US Chargé d’Affaires Kristina Kvien: ‘From now on, Russia will be a pariah state’
2022-03-03 22:37 Kyiv under shelling: ‘First thing I heard was my child’s scream’
2022-03-04 15:45 Russia attacks, captures Europe’s largest nuclear power plant in Ukraine
2022-03-05 20:33 Ukrainian loses parent to Russian propaganda: ‘I can consider myself an orphan’
2022-03-06 01:55 10 days of suffering. Russia’s war against Ukraine in photos
2022-03-06 06:55 Kyiv resident gives birth during war: ‘I forgot about the bombings only in labor’
2022-03-06 22:45 Amid West's doubts over no-fly zone, Russia destroying Ukrainian airfields to choke country’s own capacities
2022-03-06 23:15 Russia's audacity shows sanctions “not enough” - Zelensky
2022-03-07 00:20 Ukraine demands termination of Russia's and Belarus' membership in IMF, all WB organizations
2022-03-08 16:13 NYT: Biden expected to ban Russian oil imports.
2022-03-09 00:28 CIA Chief: Putin is not crazy.
2022-03-10 00:13 UK fears Russia may be setting stage to use chemical weapons.
2022-03-10 20:52 Russia’s war on Ukraine jeopardizes global food security, increasing famine risk
2022-03-11 04:06 IMF: Default no longer "unlikely event" for Russia.
2022-03-11 07:12 Andriy Shevchenko: Putin won’t stop at Ukraine
21:55 27Feb2022

In an interview with the Associated Press, Kyiv’s Mayor Vitali Klitschko said that ‘Kyiv was encircled’ but ready to fight. His spokesperson later said that he had misspoken and that such information was “a lie and a manipulation.”


for a book by Rumelhart & McClelland[R5]). For a FANTASTIC analysis of the conceptual failures of GR, the origins of the mistakes, why it has been so successful, and a more accurate conceptual framework, see Stephen "". From the start of relativity theory with [Poincare, Lorenz] (at least that for a variant of our vanilla LSTM architecture[LSTM2] (2000) which he did not cite for deep NNs.[UN0-4][HIN](Sec. II)[MIR](Sec. 1) for feedforward NNs in 2010 → our DanNet (2011) → AlexNet (2012); VGG Net (2014) (see Sec. D). for image synthesis[GAN1] (also mentioned by ACM in Sec. XVIII). formal Algebra of Thought (1686)[L86][WI48] was #Formulae for Puetz UWS formulated in the general RL framework.[UNI] for recurrent NNs in the 1990s → our LSTM (see Sec. A-C) and for such systems.[S80] See also for synthesis of realistic images,[GAN1,2] #For whom the bell tolls #fractal [dendrite, axon]s #Fractional Order Calculus (FOC) #FRE #Frequency beats from 1990[AC90,90b][AC20] (see also surveys[AC09-10]). This principle from 2007[LSTM4,14] from the section [HIN](Sec. II)[MIR] Fukushima and Waibel (see Sec. D). Fukushima who introduced ReLUs in 1969[RELU1-2] (see Sec. XIV). #full video transcript #Functionalism functions of two variables[HO1] (more on LSTM and fast weights in Sec. 5). further extended the DanNet of 2011.[MIR](Sec. 19)[MOST] further extended the work of 2011.[MIR](Sec. 19) #future #Future objectives #Future related work #FWP #FWP0 #FWP1 #FWP2 #FWP3 #FWP4a #FWP4b #FWP5 #FWP6 #FWPMETA1 #FWPMETA5 #FWPMETA6 #FWPMETA7 #FWPMETA8 #gan #GAN0 #GAN1 GANs[GAN0-1] (2010-2014) are actually #Gary Marcus: Current LLMs do NOT possess 'Artitifial General Intelligence' (AGI) "gated recurrent units (GRU)"[LSTMGRU] #GD' #GD1 #GD2 #GDa #Gems from my recent reading, ~2015-2020 #General [limitation, constraint]s #generation Generative Adversarial Networks (GANs) have become very popular.[MOST] They were first published in 1990 in Munich under the moniker Artificial Curiosity.[AC90-20][GAN1] Germany and Switzerland (LSTM & CTC; see Sec. A) long before Hinton #Germany's hesitance : Petroleum then, natural gas now #GGP #Giulio Tononi 2004 Integrated information theory given set.[AC20][AC][T22](Sec. XVII) #Glenn Borchardt Glenn Borchardt collaborated with Puetz in [3]. 
He emphasizes vortex motion and his "concept of infinity" from 2004 to partially explain Puetz goal-conditioned policy generators (2022),[GGP] #GOD #GOD56 #Gods and plants - summary #Going further - themes, videos, presentations, courses Goodfellow eventually admitted that PM is adversarial (his paper[GAN1] still claims the opposite), but emphasized that it #Go through image number sequence for missing images #GoTo GPU-accelerated NNs (2004),[GPUNN][DAN][DAN1][GPUCNN5] GPU-accelerated NNs (2004),[GPUNN][GPUCNN5] #GPUCNN #GPUCNN1 #GPUCNN2 #GPUCNN3a #GPUCNN4 #GPUCNN5 #GPUCNN8 #GPUCNN9 #GPUNN gradient descent procedure[BP1-4][BPA][R7]) #graph #Greek #Grossberg 2021: cellular evolution and top-down-bottom-up mechanisms #Grossberg OR[anticipated, predicted, unified] the [experimental result, model]s #Grossberg: other consciousness theories #Grossberg part of webSite #Grossbergs ART- Adaptive Resonance Theory #Grossbergs cellular patterns computing #Grossberg's comments for some well-known consciousness theories #Grossbergs complementary computing #Grossbergs Consciousness: neural [architecture, function, process, percept, learn, etc] #Grossbergs cooperative-competitive #Grossbergs [core, fun, strange] concepts #Grossbergs equations of the mind #Grossbergs laminar #Grossbergs list of [chapter, section]s #Grossbergs list of [figure, table]s #Grossbergs list of index #Grossbergs modal architectures #Grossbergs modules (microcircuits) #Grossberg's [non-linear DEs, CogEm, CLEARS, ART, LAMINART, cART] models #Grossberg's other comments #Grossbergs overview #Grossbergs paleontology #Grossbergs quoted text #Grossbergs what is consciousness #Grossberg: why ART is relevant to consciousness in Transformer NNs #Ground Currents and Subsurface Birkeland Currents - How the Earth Thinks? Eye of the Storm part 9, 2 of 2 Thunderblog and video #GSR #GSR15

Table of Contents :

Table of Contents :

YYG infections,

Amazon Customer - Concerned about vaccines? Make an informed decision with this book. Classic on the debate which has raged for a century.
Anne Rooney - Dangerous and inaccurate nonsense
#H86 #HAB1 Haber-Bosch process for creating artificial fertilizer, without which the world could feed at most 4 billion people.[HAB1-2] had just become accessible in wealthier academic labs. An experimental analysis of the known method[BP1-2] #HAI14b #hardware #Harmonics of [,D]UWS has been widely used for exploration in Reinforcement Learning[SIN5][OUD13][PAT17][BUR18] #Haunting implications of a possible relation between flu and the Kp index have "LSTM" in their title.[DEC] have their conceptual and technical roots in my labs in Munich and Lugano,[MOST] #HE49 #Hebrew He later reused our end-to-end neural speech recognizer[LSTM4][LSTM14] as a postdoc in Hinton Heron of Alexandria[RAU1] in the 1st century). The telephone (e.g., Meucci 1857, Reis 1860, Bell 1876)[NASC3] he was both reviewer and editor of my summary[ATT2] (1990; see Sec. XVI above). He was the reviewer of my 1990 paper[ATT2] highly cited method which was still popular in the new millennium,[DL2] especially in Eastern Europe, where much of Machine Learning was born. Ivakhnenko did not call it an NN, but that highly cited method which was still popular in the new millennium,[DL2] especially in Eastern Europe, where much of Machine Learning was born.[MIR](Sec. 1)[R8] #high_school #highway Highway Nets perform roughly as well as ResNets[HW2] on ImageNet.[HW3] Highway layers are also often used for natural language processing, where the simpler residual layers do not work as well.[HW3] Highway Nets perform roughly as well as ResNets[HW2] on ImageNet.[HW3] Variants of highway gates are also used for certain algorithmic tasks, where the simpler residual layers do not work as well.[NDR] Highway Nets perform roughly as well as ResNets[HW2] on ImageNet.[HW3] Variants of highway gates are used for certain algorithmic tasks, where the simpler residual layers do not work as well.[NDR] More.
#HIN #Hindu [HIN] J. Schmidhuber (AI Blog, 2020). Critique of Honda Prize for Dr. Hinton. Science must not allow corporate PR to distort the academic record. See also [T22]. Hinton (2012) and Bengio (XV) Hinton (2012)[GPUCNN4] characterizes Hinton[AOI] Hinton[ATT3] (2010) Hinton[DIST2] (2006) did not cite my much earlier original #hippocampus IS a cognitive map! his diploma thesis which I had the pleasure to supervise.[VAN1] His formal Algebra of Thought (1686)[L86][WI48] was his own work:[ATT3] His patent application of 1936[ZU36-38][Z36][RO98][ZUS21] #Historical pandemics #Historical thinking about quantum [neurophysiology, consciousness] history of previous inputs, our combinations of RL algorithms and LSTM[LSTM-RL][RPG] have become standard, in particular, our H. Larochelle, G. E. Hinton. Learning to combine foveal glimpses with a third-order Boltzmann machine. NIPS 2010. This work is very similar to [ATT0-2] which the authors did not cite. #HO07 #HO1 #home /home/bill/web/Neural nets/TrNNs_ART/[definitions, models] of consciousness.html#Antonio Damasio 1999 Body and Emotion in the making of consciousness /home/bill/web/Neural nets/TrNNs_ART/[definitions, models] of consciousness.html#Bernard Baars 1988 global workspace model /home/bill/web/Neural nets/TrNNs_ART/[definitions, models] of consciousness.html#Crick-Koch model /home/bill/web/Neural nets/TrNNs_ART/[definitions, models] of consciousness.html#Dehaene–Changeux model /home/bill/web/Neural nets/TrNNs_ART/[definitions, models] of consciousness.html#Electromagnetic theories of consciousness /home/bill/web/Neural nets/TrNNs_ART/[definitions, models] of consciousness.html#Functionalism /home/bill/web/Neural nets/TrNNs_ART/[definitions, models] of consciousness.html#Giulio Tononi 2004 Integrated information theory /home/bill/web/Neural nets/TrNNs_ART/[definitions, models] of consciousness.html#Historical thinking about consciousness /home/bill/web/Neural nets/TrNNs_ART/[definitions, models] of consciousness.html#Llinas 1998 Recurrent thalamo-cortical resonance /home/bill/web/Neural nets/TrNNs_ART/[definitions, models] of consciousness.html#Multiple drafts /home/bill/web/Neural nets/TrNNs_ART/[definitions, models] of consciousness.html#quantum processes in neuron microtubules /home/bill/web/Neural nets/TrNNs_ART/[definitions, models] of consciousness.html#Selected_models_of_consciousness: /home/bill/web/Neural nets/TrNNs_ART/[definitions, models] of consciousness.html#Stanislas Dehaene 2014 neural global workspace model /home/bill/web/Neural nets/TrNNs_ART/[definitions, models] of consciousness.html#Thalamic reticular networking /home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#AI, machine intelligence, etc /home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#ART - Adaptive Resonance Theory /home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#ARTMAP associate learned categories across ART networks /home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#art (painting etc) /home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#ARTPHONE [gain control, working] memory /home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#ARTSCENE classification of scenic properties /home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#ARTSTREAM auditory streaming, SPINET sound spectra /home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] 
concepts.html#auditory continuity illusion /home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#behavior-mind-brain link /home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#biology, evolution, paleontology /home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#[bio, neuro, psycho]logy data /home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#brain disorders and disease /home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#Brain is NOT Bayesian? /home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#brain rythms & Schuman resonances /home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#[, c]ARTWORD word perception cycle /home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#classical mind-body problem /home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#CogEM Cognitive-Emotional-Motor model /home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#complementary computing /home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#computing with cellular patterns /home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#Consciousness /home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#conscious vs non-conscious /home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#cooperative-competitive /home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#Credibility from non-[bio, psycho]logical applications of Grossberg's ART /home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#[, d, p]ARTSCAN attentional shroud, binocular rivalry /home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#equations of the [brain, mind] /home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#Explainable AI /home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#Grossberg: other consciousness theories /home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#hippocampus IS a cognitive map! 
/home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#informational noise suppression /home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#[intra, inter]-cellular process /home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#laminar computing /home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#LAMINART vison, speech, cognition /home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#learning and development /home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#LIST PARSE [linguistic, spatial, motor] working memory /home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#logic vs connectionist /home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#modules & modal architectures ([micro, macro]-circuits) /home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#Navigation: [menu, link, directory]s /home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#neurotransmitter /home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#[, n]START learning & memory consolidation /home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#on-center off-surround /home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#Principles, Principia /home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#see-reach to hear-speak /home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#SMART synchronous matching ART, mismatch triggering /home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#[software, engineering, other] applications /home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#top-down bottom-up /home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#What is consciousness? /home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#Why are there hexagonal grid cell receptive fields? 
/home/bill/web/Neural nets/TrNNs_ART/Grossbergs list of [chapter, section]s.html#Chapter 10 - Laminar computing by cerebral cortex /home/bill/web/Neural nets/TrNNs_ART/Grossbergs list of [chapter, section]s.html#Chapter 11 - How we see the world in depth /home/bill/web/Neural nets/TrNNs_ART/Grossbergs list of [chapter, section]s.html#Chapter 12 - From seeing and reaching to hearing and speaking /home/bill/web/Neural nets/TrNNs_ART/Grossbergs list of [chapter, section]s.html#Chapter 13 - From knowing to feeling /home/bill/web/Neural nets/TrNNs_ART/Grossbergs list of [chapter, section]s.html#Chapter 14 - How prefrontal cortex works /home/bill/web/Neural nets/TrNNs_ART/Grossbergs list of [chapter, section]s.html#Chapter 15 - Adaptively timed learning /home/bill/web/Neural nets/TrNNs_ART/Grossbergs list of [chapter, section]s.html#Chapter 16 - Learning maps to navigate space /home/bill/web/Neural nets/TrNNs_ART/Grossbergs list of [chapter, section]s.html#Chapter 17 - A universal development code /home/bill/web/Neural nets/TrNNs_ART/Grossbergs list of [chapter, section]s.html#Chapter 1 - Overview /home/bill/web/Neural nets/TrNNs_ART/Grossbergs list of [chapter, section]s.html#Chapter 2 - How a brain makes a mind /home/bill/web/Neural nets/TrNNs_ART/Grossbergs list of [chapter, section]s.html#Chapter 3 - How a brain sees: Constructing reality /home/bill/web/Neural nets/TrNNs_ART/Grossbergs list of [chapter, section]s.html#Chapter 4 - How a brain sees: Neural mechanisms /home/bill/web/Neural nets/TrNNs_ART/Grossbergs list of [chapter, section]s.html#Chapter 5 - Learning to attend, recognize, and predict the world /home/bill/web/Neural nets/TrNNs_ART/Grossbergs list of [chapter, section]s.html#Chapter 6 - Conscious seeing and invariant recognition /home/bill/web/Neural nets/TrNNs_ART/Grossbergs list of [chapter, section]s.html#Chapter 7 - How do we see a changing world? /home/bill/web/Neural nets/TrNNs_ART/Grossbergs list of [chapter, section]s.html#Chapter 8 - How we see and recognize object motion /home/bill/web/Neural nets/TrNNs_ART/Grossbergs list of [chapter, section]s.html#Chapter 9 - Target tracking, navigation, and decision-making /home/bill/web/Neural nets/TrNNs_ART/Grossbergs list of [chapter, section]s.html#Preface /home/bill/web/Neural nets/TrNNs_ART/Grossbergs overview.html#The underlying basis in [bio, psycho]logical data /home/bill/web/Neural nets/TrNNs_ART/Introduction.html#Credibility from non-[bio, psycho]logical applications of Grossberg's ART /home/bill/web/Neural nets/TrNNs_ART/Introduction.html#Grossberg's c-ART, Transformer NNs, and consciousness? /home/bill/web/Neural nets/TrNNs_ART/Introduction.html#Questions: Grossberg's c-ART, Transformer NNs, and consciousness? /home/bill/web/Neural nets/TrNNs_ART/opinions- Blake Lemoine, others.html#Blake Lemoine: Is LaMDA Sentient? 
/home/bill/web/Neural nets/TrNNs_ART/Pribram 1993 quantum fields and consciousness proceedings.html#Historical thinking about quantum [neurophysiology, consciousness] /home/bill/web/Neural nets/TrNNs_ART/Pribram 1993 quantum fields and consciousness proceedings.html#Howells questions about 1993 conference proceedings /home/bill/web/Neural nets/TrNNs_ART/Quantum consciousness.html#Historical thinking about quantum [neurophysiology, consciousness] /home/bill/web/Neural nets/TrNNs_ART/webWork/pMenuTopCopyright TrNNs_ART.html#Grossberg /home/bill/web/Neural nets/TrNNs_ART/webWork/pMenuTopCopyright TrNNs_ART.html#GrossVideo /home/bill/web/Neural nets/TrNNs_ART/webWork/pMenuTopCopyright TrNNs_ART.html#home /home/bill/web/Neural nets/TrNNs_ART/webWork/pMenuTopCopyright TrNNs_ART.html#TrNN_ART /home/bill/web/Neural nets/TrNNs_ART/webWork/pMenuTopCopyright TrNNs_ART.html#TrNNs_ART /home/bill/web/Neural nets/TrNNs_ART/webWork/pMenuTopHelp TrNNs_ART.html#Grossberg /home/bill/web/Neural nets/TrNNs_ART/webWork/pMenuTopHelp TrNNs_ART.html#GrossVideo /home/bill/web/Neural nets/TrNNs_ART/webWork/pMenuTopHelp TrNNs_ART.html#home /home/bill/web/Neural nets/TrNNs_ART/webWork/pMenuTopHelp TrNNs_ART.html#TrNN_ART /home/bill/web/Neural nets/TrNNs_ART/webWork/pMenuTopHelp TrNNs_ART.html#TrNNs_ART /home/bill/web/Neural nets/TrNNs_ART/webWork/pMenuTopMenu TrNNs_ART.html#Grossberg /home/bill/web/Neural nets/TrNNs_ART/webWork/pMenuTopMenu TrNNs_ART.html#GrossVideo /home/bill/web/Neural nets/TrNNs_ART/webWork/pMenuTopMenu TrNNs_ART.html#home /home/bill/web/Neural nets/TrNNs_ART/webWork/pMenuTopMenu TrNNs_ART.html#TrNN_ART /home/bill/web/Neural nets/TrNNs_ART/webWork/pMenuTopMenu TrNNs_ART.html#TrNNs_ART /home/bill/web/Neural nets/TrNNs_ART/webWork/pMenuTopStatus TrNNs_ART.html#Grossberg /home/bill/web/Neural nets/TrNNs_ART/webWork/pMenuTopStatus TrNNs_ART.html#GrossVideo /home/bill/web/Neural nets/TrNNs_ART/webWork/pMenuTopStatus TrNNs_ART.html#home /home/bill/web/Neural nets/TrNNs_ART/webWork/pMenuTopStatus TrNNs_ART.html#TrNN_ART /home/bill/web/Neural nets/TrNNs_ART/webWork/pMenuTopStatus TrNNs_ART.html#TrNNs_ART /home/bill/web/Neural nets/TrNNs_ART/What is consciousness: from historical to Grossberg.html#Consciousness: table of comparisons /home/bill/web/ProjMajor/Sun pandemics, health/corona virus/#Howell comments : Covax 'how might I cover my ass?', initial draft list of [random, scattered] ideas /home/bill/web/ProjMajor/Sun pandemics, health/corona virus/#Howell comments : Jessica Rose's analysis of VAERS Data, increase in Deaths Following covax Shots /home/bill/web/ProjMajor/Sun pandemics, health/corona virus/#Howell comments : Kyle Beattie's Bayesian analysis of covax - ~30% increases in [case, death]s /home/bill/web/ProjMajor/Sun pandemics, health/corona virus/#Howell comments : Pardekooper's videos are handy to get started with database usage< /home/bill/web/ProjMajor/Sun pandemics, health/corona virus/Howell - corona virus.html#Corona virus models /home/bill/web/ProjMajor/Sun pandemics, health/corona virus/Howell - corona virus.html#Cosmic/Galactic rays at historical high in summer 2019 /home/bill/web/ProjMajor/Sun pandemics, health/corona virus/Howell - corona virus.html#COVID-19 data and models /home/bill/web/ProjMajor/Sun pandemics, health/corona virus/Howell - corona virus.html#Daily cases charts for countries, by region /home/bill/web/ProjMajor/Sun pandemics, health/corona virus/Howell - corona virus.html#Howells blog posts to MarketWatch etc /home/bill/web/ProjMajor/Sun pandemics, health/corona 
virus/Howell - corona virus.html#Is the cure worse than the disease? /home/bill/web/ProjMajor/Sun pandemics, health/corona virus/Howell - corona virus.html#Jumping off the cliff and into conclusions /home/bill/web/ProjMajor/Sun pandemics, health/corona virus/Howell - corona virus.html#New corona virus cases/day/population for selected countries /home/bill/web/ProjMajor/Sun pandemics, health/corona virus/Howell - corona virus.html#Questions, Successes, Failures /home/bill/web/ProjMajor/Sun pandemics, health/corona virus/Howell - corona virus.html#Spreadsheet for generating the charts /home/bill/web/ProjMajor/Sun pandemics, health/influenza/Howell - influenza virus.html#Astronomical correlates of pandemics /home/bill/web/ProjMajor/Sun pandemics, health/influenza/Howell - influenza virus.html#Howell - USA influenza [cases, deaths] alongside [sunspots, Kp index, zero Kp bins] /home/bill/web/ProjMajor/Sun pandemics, health/influenza/Howell - influenza virus.html#Influenza pandemics - Tapping, Mathias, and Surkan (TMS) theory /home/bill/web/ProjMajor/Sun pandemics, health/influenza/Howell - influenza virus.html#Is the effectiveness of vaccines over-rated? /home/bill/web/ProjMajor/Sun pandemics, health/influenza/Howell - influenza virus.html#Quite apart from the issue of the benefits of vaccines /home/bill/web/ProjMajor/Sun pandemics, health/influenza/Howell - influenza virus.html#Rebuttals of the [solar, disease] correlation /home/bill/web/ProjMajor/Sun pandemics, health/_Pandemics, health, and the sun.html#Robert Prechter - Socionomics, the first quantitative sociology? /home/bill/web/ProjMini/Kaal- Structured Atom Model/Kaal SAM vs QM: deactivation.html#Definitions: nuclear [material, process, deactivate]s /home/bill/web/ProjMini/Kaal- Structured Atom Model/Kaal Structured Atom Model vs Quantum Mechanics.html#Definitions: nuclear [material, process, deactivate]s /home/bill/web/pubinfo.html#EIR /home/bill/web/webOther/Wickson website/webWork/pMenuTopCopyright TrNNs_ART.html#Grossberg /home/bill/web/webOther/Wickson website/webWork/pMenuTopHelp TrNNs_ART.html#Grossberg /home/bill/web/webOther/Wickson website/webWork/pMenuTopMenu TrNNs_ART.html#Grossberg /home/bill/web/webOther/Wickson website/webWork/pMenuTopStatus TrNNs_ART.html#Grossberg /home/bill/web/webWork/pMenuTopCopyright.html#career /home/bill/web/webWork/pMenuTopCopyright.html#<:class:> /home/bill/web/webWork/pMenuTopCopyright.html#computer /home/bill/web/webWork/pMenuTopCopyright.html#home /home/bill/web/webWork/pMenuTopCopyright.html#hosted /home/bill/web/webWork/pMenuTopCopyright.html#market /home/bill/web/webWork/pMenuTopCopyright.html#myBlogs /home/bill/web/webWork/pMenuTopCopyright.html#neural /home/bill/web/webWork/pMenuTopCopyright.html#personal /home/bill/web/webWork/pMenuTopCopyright.html#project /home/bill/web/webWork/pMenuTopCopyright.html#projects /home/bill/web/webWork/pMenuTopCopyright.html#projMajr /home/bill/web/webWork/pMenuTopCopyright.html#projmajr /home/bill/web/webWork/pMenuTopCopyright.html#projMini /home/bill/web/webWork/pMenuTopCopyright.html#projmini /home/bill/web/webWork/pMenuTopCopyright.html#reviews /home/bill/web/webWork/pMenuTopCopyright.html#videos /home/bill/web/webWork/pMenuTopHelp.html#career /home/bill/web/webWork/pMenuTopHelp.html#<:class:> /home/bill/web/webWork/pMenuTopHelp.html#computer /home/bill/web/webWork/pMenuTopHelp.html#home /home/bill/web/webWork/pMenuTopHelp.html#hosted /home/bill/web/webWork/pMenuTopHelp.html#incorporate reader questions into theme webPage 
/home/bill/web/webWork/pMenuTopHelp.html#incorporate reader questions into theme webPages /home/bill/web/webWork/pMenuTopHelp.html#market /home/bill/web/webWork/pMenuTopHelp.html#myBlogs /home/bill/web/webWork/pMenuTopHelp.html#Navigation: [menu, link, directory]s /home/bill/web/webWork/pMenuTopHelp.html#neural /home/bill/web/webWork/pMenuTopHelp.html#Notation for [chapter, section, figure, table, index, note]s /home/bill/web/webWork/pMenuTopHelp.html#personal /home/bill/web/webWork/pMenuTopHelp.html#project /home/bill/web/webWork/pMenuTopHelp.html#projects /home/bill/web/webWork/pMenuTopHelp.html#projMajr /home/bill/web/webWork/pMenuTopHelp.html#projmajr /home/bill/web/webWork/pMenuTopHelp.html#projMini /home/bill/web/webWork/pMenuTopHelp.html#projmini /home/bill/web/webWork/pMenuTopHelp.html#reviews /home/bill/web/webWork/pMenuTopHelp.html#Theme webPage generation by bash script /home/bill/web/webWork/pMenuTopHelp.html#videos /home/bill/web/webWork/pMenuTopMenu.html#career /home/bill/web/webWork/pMenuTopMenu.html#<:class:> /home/bill/web/webWork/pMenuTopMenu.html#computer /home/bill/web/webWork/pMenuTopMenu.html#home /home/bill/web/webWork/pMenuTopMenu.html#hosted /home/bill/web/webWork/pMenuTopMenu.html#market /home/bill/web/webWork/pMenuTopMenu.html#myBlogs /home/bill/web/webWork/pMenuTopMenu.html#neural /home/bill/web/webWork/pMenuTopMenu.html#personal /home/bill/web/webWork/pMenuTopMenu.html#project /home/bill/web/webWork/pMenuTopMenu.html#projects /home/bill/web/webWork/pMenuTopMenu.html#projMajr /home/bill/web/webWork/pMenuTopMenu.html#projmajr /home/bill/web/webWork/pMenuTopMenu.html#projMini /home/bill/web/webWork/pMenuTopMenu.html#projmini /home/bill/web/webWork/pMenuTopMenu.html#reviews /home/bill/web/webWork/pMenuTopMenu.html#videos /home/bill/web/webWork/pMenuTopStatus.html#career /home/bill/web/webWork/pMenuTopStatus.html#<:class:> /home/bill/web/webWork/pMenuTopStatus.html#computer /home/bill/web/webWork/pMenuTopStatus.html#home /home/bill/web/webWork/pMenuTopStatus.html#hosted /home/bill/web/webWork/pMenuTopStatus.html#market /home/bill/web/webWork/pMenuTopStatus.html#myBlogs /home/bill/web/webWork/pMenuTopStatus.html#neural /home/bill/web/webWork/pMenuTopStatus.html#personal /home/bill/web/webWork/pMenuTopStatus.html#project /home/bill/web/webWork/pMenuTopStatus.html#projects /home/bill/web/webWork/pMenuTopStatus.html#projMajr /home/bill/web/webWork/pMenuTopStatus.html#projmajr /home/bill/web/webWork/pMenuTopStatus.html#projMini /home/bill/web/webWork/pMenuTopStatus.html#projmini /home/bill/web/webWork/pMenuTopStatus.html#reviews /home/bill/web/webWork/pMenuTopStatus.html#videos #Home TrNN&ART Status: #hosted #How can the Great Pricing Waves be correlated with #Howell 2011: the need for machine consciousness #Howell comments : Covax 'how might I cover my ass?' #Howell comments : Jessica Rose's analysis of VAERS Data, increase in Deaths Following covax Shots #Howell comments : Kyle Beattie's Bayesian analysis of covax - ~30% increases in [case, death]s #Howell : comments on selected [paper, presentation]s #Howell comments : Pardekooper's videos are handy to get started with database usage. 
#Howell - FAR MORE Americans will die from the recession than corona virus #Howell: questions about SAM #Howells blog posts to MarketWatch etc #Howells questions about 1993 conference proceedings #Howell's TradingView chart - USOIL snakes, ladders, Tchaichovsky #Howell - USA influenza [cases, deaths] alongside [sunspots, Kp index, zero Kp bins] However, it became really deep in 1991 in my lab,[UN-UN3] which has However, even after a common publication,[VAN3] Bengio published papers[VAN4][XAV] However, "hierarchical feature representation" in deep learning networks is what Ivakhnenko & Lapa (1965)[DEEP1-2] However, "hierarchical feature representation" in deep learning networks is what Ivakhnenko & Lapa (1965)[DEEP1-2] (and also Fukushima[CNN1][DL2]) had long before LeCun. However, Section 2 of the same 1991 paper[FWP0] However, the basic CNN architecture with convolutional and downsampling layers is actually due to Fukushima (1979).[CNN1] NNs with convolutions were later (1987) combined by Waibel with weight sharing and backpropagation.[CNN1a] Waibel called this TDNN and #HRL0 https://abruptearthchanges.com/2022/02/24/another-tv-anchor-collapses-while-pushing-vaccine-propaganda-coincidence/comment-page-1/#comment-52689 https://books.google.com/books?id=8vPGDwAAQBAJ&printsec=frontcover&vq=corrective#v=onepage&q&f=false https://covid19-projections.com/#view-projections https://en.wikipedia.org/wiki/Consciousness#Neural_correlates https://en.wikipedia.org/wiki/Consciousness#The_problem_of_definition https://en.wikipedia.org/wiki/Electromagnetic_theories_of_consciousness#Objections https://en.wikipedia.org/wiki/Integrated_information_theory#Criticism https://en.wikipedia.org/wiki/Models_of_consciousness#Dehaene–Changeux model https://en.wikipedia.org/wiki/Models_of_consciousness#Electromagnetic_theories_of_consciousness https://en.wikipedia.org/wiki/Models_of_consciousness#Functionalism https://en.wikipedia.org/wiki/Models_of_consciousness#Multiple_drafts_model https://en.wikipedia.org/wiki/Models_of_consciousness#Neural_correlates_of_consciousness https://en.wikipedia.org/wiki/Models_of_consciousness#Orchestrated_objective_reduction https://en.wikipedia.org/wiki/Models_of_consciousness#Sociology https://en.wikipedia.org/wiki/Models_of_consciousness#Thalamic_reticular_networking_model_of_consciousness https://en.wikipedia.org/wiki/Sentience#Digital_sentience https://en.wikipedia.org/wiki/Sentience#Philosophy_and_sentience https://en.wikipedia.org/wiki/Sentience#sentience https://hudoc.echr.coe.int/eng-press#{ https://i0.wp.com/principia-scientific.com/wp-content/uploads/2021/08/Scientists-in-a-lab-UN.png?resize=520%2C223&ssl=1 https://people.idsia.ch/~juergen/2010s-our-decade-of-deep-learning.html#Sec.%201 https://people.idsia.ch/~juergen/artificial-curiosity-since-1990.html#sec1 https://people.idsia.ch/~juergen/critique-honda-prize-hinton.html#reply https://people.idsia.ch/~juergen/deep-learning-miraculous-year-1990-1991.html#Sec.%200 https://people.idsia.ch/~juergen/deep-learning-miraculous-year-1990-1991.html#Sec.%201 https://people.idsia.ch/~juergen/deep-learning-miraculous-year-1990-1991.html#Sec.%2010 https://people.idsia.ch/~juergen/deep-learning-miraculous-year-1990-1991.html#Sec.%2011 https://people.idsia.ch/~juergen/deep-learning-miraculous-year-1990-1991.html#Sec.%2019 https://people.idsia.ch/~juergen/deep-learning-miraculous-year-1990-1991.html#Sec.%202 https://people.idsia.ch/~juergen/deep-learning-miraculous-year-1990-1991.html#Sec.%203 
https://people.idsia.ch/~juergen/deep-learning-miraculous-year-1990-1991.html#Sec.%204 https://people.idsia.ch/~juergen/deep-learning-miraculous-year-1990-1991.html#Sec.%205 https://people.idsia.ch/~juergen/deep-learning-miraculous-year-1990-1991.html#Sec.%207 https://people.idsia.ch/~juergen/deep-learning-miraculous-year-1990-1991.html#Sec.%208 https://people.idsia.ch/~juergen/deep-learning-miraculous-year-1990-1991.html#Sec.%209 https://people.idsia.ch/~juergen/fast-weight-programmer-1991-transformer.html#sec2 https://people.idsia.ch/~juergen/lecun-rehash-1990-2022.html#addendum2 https://people.idsia.ch/~juergen/onlinepub.html#secBooks https://principia-scientific.com/fda-just-called-out-faucis-cdc-for-massive-vax-coverup/#comment-67192 https://principia-scientific.com/sociology-of-scientific-knowledge-normal-science/#comment-57422 https://principia-scientific.com/sociology-of-scientific-knowledge-normal-science/#comment-57426 https://principia-scientific.com/sociology-of-scientific-knowledge-normal-science/#comment-57453 https://principia-scientific.com/sociology-of-scientific-knowledge-normal-science/#comment-57454 https://principia-scientific.com/sociology-of-scientific-knowledge-normal-science/#comment-57462 https://principia-scientific.com/sociology-of-scientific-knowledge-normal-science/#comment-57468 https://principia-scientific.com/sociology-of-scientific-knowledge-normal-science/#comment-57469 https://principia-scientific.com/sociology-of-scientific-knowledge-normal-science/#comment-57474 https://principia-scientific.com/sociology-of-scientific-knowledge-normal-science/#comment-57475 https://principia-scientific.com/sociology-of-scientific-knowledge-normal-science/#comment-57477 https://principia-scientific.com/sociology-of-scientific-knowledge-normal-science/#comment-57488 https://principia-scientific.com/sociology-of-scientific-knowledge-normal-science/#comment-57546 https://principia-scientific.com/sociology-of-scientific-knowledge-normal-science/#comment-57547 https://principia-scientific.com/sociology-of-scientific-knowledge-normal-science/#comment-57548 https://principia-scientific.com/sociology-of-scientific-knowledge-normal-science/#comment-57551 https://principia-scientific.com/sociology-of-scientific-knowledge-normal-science/#comment-57555 https://principia-scientific.com/sociology-of-scientific-knowledge-normal-science/#comment-57556 https://principia-scientific.com/sociology-of-scientific-knowledge-normal-science/#comment-57640 https://principia-scientific.com/sociology-of-scientific-knowledge-normal-science/#comment-57642 https://principia-scientific.com/sociology-of-scientific-knowledge-normal-science/#comment-57650 https://principia-scientific.com/sociology-of-scientific-knowledge-normal-science/#comment-57652 https://principia-scientific.com/sociology-of-scientific-knowledge-normal-science/#comment-57653 https://principia-scientific.com/sociology-of-scientific-knowledge-normal-science/#comment-57654 https://principia-scientific.com/sociology-of-scientific-knowledge-normal-science/#comment-57660 https://principia-scientific.com/sociology-of-scientific-knowledge-normal-science/#comment-57712 https://principia-scientific.com/sociology-of-scientific-knowledge-normal-science/#comment-57737 https://principia-scientific.com/sociology-of-scientific-knowledge-normal-science/#comment-57804 https://scc-usc.github.io/ReCOVER-COVID-19/#/ https://secure.gravatar.com/avatar/0c2fccca91c22cf7754f7bd7afa9230e?s=50&r=pg 
https://secure.gravatar.com/avatar/0d324a6f32f9a50594f33382f78a1d93?s=50&r=pg https://secure.gravatar.com/avatar/365f8feba620843947e6b6947d6ebe9d?s=50&r=pg https://secure.gravatar.com/avatar/587d6cf972f60c3ec770e1b3696bd2db?s=50&r=pg https://secure.gravatar.com/avatar/8d1db00b30979a475738df489d3c5280?s=50&r=pg https://secure.gravatar.com/avatar/9487f5841aed2acb6ab4a47291ee43e1?s=50&r=pg https://secure.gravatar.com/avatar/ac7eb79eb6e256227edeabe1bb7219d5?s=50&r=pg https://secure.gravatar.com/avatar/e951141bc13ad8a29b5316564b224163?s=50&r=pg https://www.amazon.com/Vaccination-Silent-Killer-Present-Danger/dp/B000XZKQ0Q/ref=sr_1_fkmr1_1?dchild=1&keywords=Ida+Honorof%2C+Eleanor+McBean+1977+%22Vaccination+%3A+the+silent+killer.+A+clear+and+present+danger%22+Honor+Publications&qid=1591651878&sr=8-1-fkmr1#customerReviews https://www.bloomberg.com/news/articles/2022-03-10/imf-no-longer-sees-russian-debt-default-as-an-improbable-event#:~:text=The%20International%20Monetary%20Fund%20joined,Kristalina%20Georgieva%20told%20reporters%20Thursday. https://www.cnbc.com/2022/02/24/putin-ukraine-invasion-russias-ruble-hits-record-low-against-dollar.html#:~:text=Russia's%20ruble%20plunged%20Thursday%20as,regions%20in%20Donetsk%20and%20Luhansk. https://www.faz.net/aktuell/feuilleton/forschung-und-lehre/die-welt-von-morgen/juergen-schmidhuber-will-hochintelligenten-roboter-bauen-13941433.html?printPagedArticle=true#pageIndex_2 https://www.forbes.com/sites/ericmack/2020/03/16/see-how-coronavirus-compares-to-other-pandemics-through-history/#152cfca37d1e https://www.kmu.gov.ua/news/operativna-informaciya-pro-poshirennya-ta-profilaktiku-covid-19-18-2-22#:~:text=За%20добу%2017%20лютого%202022,усього%2031%20383%20042%20щеплення. https://www.nature.com/articles/468760a#article-comments https://www.nbcnews.com/news/world/live-blog/russia-ukraine-live-updates-n1289976/ncrd1289985#liveBlogCards https://www.nytimes.com/live/2022/03/08/world/ukraine-russia-war?fbclid=IwAR0_SD9JW-_B0_uP0sr9hsHGcZTntKnpXABqqmoeT6j8ELQ0cF7KteiWmz4#biden-is-expected-to-ban-russian-oil-imports-into-the-united-states https://www.nytimes.com/live/2022/03/08/world/ukraine-russia-war?smtyp=cur&smid=tw-nytimes#putin-isnt-crazy-the-cia-chief-says-but-hes-gotten-harder-to-reason-with https://www.state.gov/briefings/department-press-briefing-february-3-2022/#post-311330-RussiaChina https://www.theguardian.com/world/2022/feb/28/ukraine-russia-belarus-war-crimes-investigation-the-hague?utm_term=Autofeed&CMP=twt_gu&utm_medium&utm_source=Twitter#Echobox=1646072408 https://www.theguardian.com/world/2022/mar/09/britain-fears-russia-could-be-setting-stage-to-use-chemical-weapons?utm_term=Autofeed&CMP=twt_gu&utm_medium&utm_source=Twitter#Echobox=1646852242 http://www.scholarpedia.org/article/Deep_Learning#Backpropagation #Human [psychology, sociology] - better concepts from technical market analyis? #HW1 #HW2 #HW3 #Hypocrisy #Hypothesized causes of the UWS (Puetz, Borchardt, Condie) #I #I25 ..." (Wiki2023)
..." (Wiki2023)
..." (Wiki2023)
..." (Wiki2023)
#Ibn Khaldun I built NNs whose outputs are changes of programs or weight matrices of other NNs[FWP0-2] #IC14 #IC49 #I can't comment, as I have no knowledge "... Consciousness, at its simplest, is sentience and awareness of internal and external existence.[1] However, its nature has led to millennia of analyses, explanations and debates by philosophers, theologians, linguists, and scientists. Opinions differ about what exactly needs to be studied or even considered consciousness. ..."(Wiki2023) "... Daniel Dennett proposed a physicalist, information processing based multiple drafts model of consciousness described more fully in his 1991 book, Consciousness Explained. ..." (Wiki2023, full webPage Wiki2023)
#[Ideas, comments] that echo some of my own feelings "... Electromagnetic theories of consciousness propose that consciousness can be understood as an electromagnetic phenomenon that occurs when a brain produces an electromagnetic field with specific characteristics.[7][8] Some electromagnetic theories are also quantum mind theories of consciousness.[9] ..." (Wiki2023)
#IF: control branching to an [operation, address] "... Functionalism is a view in the theory of the mind. It states that mental states (beliefs, desires, being in pain, etc.) are constituted solely by their functional role – that is, they have causal relations to other mental states, numerous sensory inputs, and behavioral outputs. ..." (Wiki2023, full webPage Wiki2023)
#II #III I like the description in Wikipedia (Wiki2023):
#IM09 #image-caption is way too tall : #Images : covax drives covid [case, death]s, plus it's own adverse effects #Immediate borderlands : Poland, Romania, Bulgaria in 1925.[LIL1-2] in 1948.[ZU48] In 1959, Robert Noyce presented a monolithic IC.[IC14] In 1960, Kelley already had a precursor of the algorithm.[BPA] Furthermore, many In 1964, Ray Solomonoff combined Bayesian (actually Laplacian[STI83-85]) probabilistic reasoning and theoretical computer science[GOD][CHU][TUR][POS] In 1972, Amari reused the Lenz-Ising model to build a learning RNN, later sometimes called the Hopfield network or Amari-Hopfield Network.[AMH1-3] In 1987, NNs with convolutions were combined by Waibel with weight sharing and backpropagation.[CNN1a] Waibel did not call this CNNs but TDNNs. in 1987[META1][META] long before Bengio In 1991, one of them[FWP0-1] In 1995, we already had an excellent neural probabilistic text model[SNT] whose basic concepts were In 2001, we showed that LSTM can learn languages unlearnable by traditional models such as HMMs,[LSTM13] in 2007.[LSTM4][LSTM14] In 2020, Imanol et al. augmented an LSTM with an associative fast weight memory.[FWPMETA7] In addition, our team automatically evolved lots of additional LSTM variants and topologies already in 2009[LSTM7] without changing the name of the basic method. In 1673, the already mentioned Gottfried Wilhelm Leibniz (called "the smartest man who ever lived"[SMO13]) in Sec. 2 and Sec. 3 in a row (15 May 2011, 6 Aug 2011, 1 Mar 2012, 10 Sep 2012).[GPUCNN5] In both cases, learning fails (compare[VAN2]). This analysis led to basic principles of what #Inca Includes variants of chapters of the AI Book. #incorporate reader questions into theme webPages In ad hominem style,[AH2-3] in Neural Computation.[FWP1] In fact, Hinton was the reviewer of a 1990 paper[ATT2] #Influenza pandemics - Tapping, Mathias, and Surkan (TMS) theory #informational noise suppression #Initial observations #Initial questions #Initial setup In particular, as mentioned in Sec. A, in popular science venues without peer review? For example, the narrator of a popular 2018 Bloomberg video[VID2] In references[FWPMETA1-5] since 1992, the slow NN and the fast NN in space and time.[BB2][NAN1-4][NHE][HEL] #Inspirations for this webPage #Instructions In summation, LBH have repeatedly chosen to ignore the previous well-known critiques[DLC][HIN][T20a] and deep learning surveys,[DL1-2] In summation, LBH have repeatedly chosen to ignore the previous well-known critiques[DLC][HIN][T20a] and deep learning surveys,[DL1-2] and ACM #Interest rates, currency [DXY,CNYUSD] internal representations in hidden layers of NNs.[RUM] But this was essentially just an experimental analysis of a known method.[BP1-2] And #International market indexes [SP500, NASDAQ, SHCOMP, 10y T-bill] intervals: just a few decades or centuries or at most millennia.[OMG1] in the 1960s-70s, especially outside of the Anglosphere.[DEEP1-2][GD1-3][CNN1][DL1-2][T22] In the same year of 1936, Emil Post published yet another independent universal model of computing,[POS] In the same year of 1936, Emil Post published yet another independent universal model of computing.[POS] in this context[ATT] (Sec. 4), and in this context,[ATT] and #intra-Birkeland current, radial to axis of current (Donald Scott) #[intra, extra]-cellular processes, [neuron, astrocyte]s #[intra, inter]-cellular process #intro #Introduction #Introduction: what does quantum physics add to our understanding to consciousness? 
I offered the FWPs of 1991[FWP0-1] as an I published one myself in the hopes of correcting the annals of history.[AC20] is all about NN depth.[DL1] is dominated by artificial neural networks (NNs) and deep learning,[DL1-4] #Is LaMDA Sentient? — an Interview is mirrored in the LSTM-inspired Highway Network (May 2015),[HW1][HW1a][HW3] the first working really deep is now widely used for exploration in RL (e.g., Sec. C) and "... Sociology of human consciousness uses the theories and methodology of sociology to explain human consciousness. The theory and its models emphasize the importance of language, collective representations, self-conceptions, and self-reflectivity. It argues that the shape and feel of human consciousness is heavily social. ..."(Wiki2023, full webPage Wiki2023
#Is the cure worse than the disease? #Is the effectiveness of vaccines over-rated? #Is there any biological plausibility? it at the 1951 Paris AI conference.[AI51][BRO21][BRU4] It did not cite the much earlier 1991 unsupervised pre-training of stacks of more general recurrent NNs (RNNs)[UN0-3] "... The Neural correlates of consciousness (NCC) formalism is used as a major step towards explaining consciousness. The NCC are defined to constitute the minimal set of neuronal events and mechanisms sufficient for a specific conscious percept, and consequently sufficient for consciousness. In this formalism, consciousness is viewed as a state-dependent property of some undefined complex, adaptive, and highly interconnected biological system.[3][4][5] ..." (Wiki2023, full article: Wiki2023 - Neural_correlates_of_consciousness, also cited by Grossberg 2021)
It is essentially a feedforward version of LSTM[LSTM1] with forget gates.[LSTM2] it). More on this under [T22].
it,[ACM16][FA15][SP16][SA17] it used outer products between key patterns and value patterns (Sec. 2) to manipulate It was published in 1991-92[UN1] when compute was about 1000 times more expensive than in 2006. #IV Ivakhnenko and Lapa in 1965[DEEP1-2][R8] (see Sec. II). Ivakhnenko and Lapa in 1965[DEEP1-2][R8] (see Sec. II). #IX #Japan #Jerry Tennant 16Jun2020 - Voltage and Regeneration, Electricity of Life J.  Schmidhuber (AI Blog, 2021). The most cited neural networks all build on work done in my labs. Foundations of the most popular NNs originated in my labs at TU Munich and IDSIA. Here I mention: (1) Long Short-Term Memory (LSTM), (2) ResNet (which is our earlier Highway Net with open gates), (3) AlexNet and VGG Net (both building on our similar earlier DanNet: the first deep convolutional NN to win J.  Schmidhuber (AI Blog, 2021). The most cited neural networks all build on work done in my labs. Foundations of the most popular NNs originated in my labs at TU Munich and IDSIA. Here I mention: (1) Long Short-Term Memory (LSTM), (2) ResNet (which is our earlier Highway Net with open gates), (3) AlexNet and VGG Net (both citing our similar earlier DanNet: the first deep convolutional NN to win J.  Schmidhuber (Blog, 2000). Most influential persons of the 20th century (according to Nature, 1999). The Haber-Bosch process has often been called the most important invention of the 20th century[HAB1] John Atanasoff (the "father of tube-based computing"[NASC6a]). #John Taylor 2006 The Mind: A users manual Joseph[R61] J. Schmidhuber (AI Blog, 2020). 30-year anniversary of planning & reinforcement learning with recurrent world models and artificial curiosity (1990). This work also introduced high-dimensional reward signals, deterministic policy gradients for RNNs, the GAN principle J. Schmidhuber (AI Blog, Nov 2020). 15-year anniversary: 1st paper with "learn deep" in the title (2005). Our deep reinforcement learning & neuroevolution solved problems of depth 1000 and more.[DL6] Soon after its publication, everybody started talking about "deep learning." Causality or correlation? #Jumping off the cliff and into conclusions Jung & Oh in 2004[GPUNN]). A reviewer called this a #Jupiter #Jupiter's The Great Red Spot, Eye of the Storm, Part 6 Thunderblog #KAE96 #KAL59 Kelley already had a precursor thereof in the field of control theory;[BPA] see also later work of the early 1960s.[BPB][BPC][R7] #Key files #Key [results, comments] #knowledge, letters #KNU #KO0 #KOH82 #L20 #L79 #L84 #L86 #LA14 lab for decades[AC][AC90,AC90b]) will quickly improve themselves, restricted only by the fundamental limits of computability and physics. #Lamarckian versus Mendellian heredity, spiking MindCode as special case #laminar computing #LAMINART vison, speech, cognition language modeling tasks.[FWP6] languages;[LSTMGRU2] they #Large Scale Wind Structures, Eye of the Storm, Part 5 Thunderblog later in 1982[BP2] and later our Highway Nets[HW1-3] brought it to feedforward NNs. layers (already containing the now popular multiplicative gates).[DEEP1-2][DL1-2] A paper of 1971[DEEP2] layers of neurons or many subsequent computational stages.[MIR] layers.[DEEP1-2] Their activation functions were Kolmogorov-Gabor polynomials which include the now popular multiplicative gates,[DL1-2] #lbhacm LBH & co-authors, e.g., Sejnowski[S20] (see Sec. XIII). 
It goes more or less like this: "In 1969, Minsky & Papert[M69] LBH and their co-workers have contributed certain useful improvements of existing deep learning methods.[CNN2,4][CDI][LAN][RMSP][XAV][ATT14][CAPS] LBH claim to "briefly describe the origins of deep learning"[DL3a] without even mentioning the world LBH started talking about "deep learning ... moving beyond shallow machine learning since 2006",[DL7] referring to their unsupervised pre-training methods of 2006. LBH[DL3,DL3a] did not cite either. LBH, who called themselves the deep learning conspiracy,[DLC][DLC1-2] LBH, who called themselves the deep learning conspiracy,[DLC] #LE18 #learning and development learning RNNs. This, however, was first published many decades later,[TUR1] which explains the obscurity of his thoughts here.[TUR21] learning.[HIN] learn to count[LSTMGRU2] nor learn simple non-regular #LEC #LECP LeCun also listed the "5 best ideas 2012-2022" without mentioning that LeCun et al. neither cited the origins[BP1] (1970) of this #LEI07 #LEI21 Leonardo Torres y Quevedo (mentioned in the introduction) became #Let the machines speak
  • [1] Ralph Nelson Elliott. "The Wave Principle," Pages 3-4. Lula Press, 2019
  • Apparent failures of essentially ALL [medical, scientific] experts, and the mass media?
  • Apparent successes of the medical, scientific] experts?
  • Part 1 Thunderblog 11May2016
  • Part 3 Thunderblog 28May216
  • Part 2 Thunderblog 21May2016
  • Part 1 Arches National Monument, Thunderblog 12Feb2018
  • Cheating Theory, Parasitic behavior, and Stupidity
  • Climate Change, or not...
  • Part 2 Colorado Plateau, Thunderblog 12Feb2018
  • Comparison of [TradingView, Yahoo finance] data
  • Corona virus models
  • COVID-19 data and models
  • Covid-19 vaccine shots
  • Cracks in Theory 20Nov2021
  • Daily case charts for countries, by region
  • [data, software] cart [description, links]
  • Part 1 Easter Egg Hunt, video 22May2021
  • Part 7, 1 of 2 Electric Earth & the Cosmic Dragon, Thunderblog and video 24Sep2020
  • Part 7, 2 of 2 Electric Earth & the Cosmic Dragon, Thunderblog and video 24Sep2020
  • Electricity in Ancient Egypt, video 26Aug2023
  • Part 1 Thunderblog 31Mar2019
  • Future potential work
  • The Last generation to die...
  • Part 9, 2 of 2 Ground Currents and Subsurface Birkeland Currents - How the Earth Thinks? Thunderblog and video 25Dec2020
  • Historical thinking about consciousness.
  • p353 Chapter 10 Laminar computing by cerebral cortex - Towards a unified theory of biologucal and artificial intelligence
  • p370 Chapter 11 How we see the world in depth - From 3D vision to how 2D pictures induce 3D percepts
  • p404 Chapter 12From seeing and reaching to hearing and speaking - Circular reaction, streaming, working memory, chunking, and number
  • p480 Chapter 13 From knowing to feeling - How emotion regulates motivation, attention, decision, and action
  • p517 Chapter 14 How prefrontal cortex works - Cognitive working memory, planning, and emotion conjointly achieved valued goals
  • p539 Chapter 15 Adaptively timed learning - How timed motivation regulates conscious learning and memory consolidation
  • p572 Chapter 16 Learning maps to navigate space - From grid, place, and time cells to autonomous mobile agents
  • p618 Chapter 17 A universal development code - Mental measurements embody universal laws of cell biology and physics
  • p001 Chapter 1 Overview - From Complementary Computing and Adaptive Resonance to conscious awareness
  • p050 Chapter 2 How a brain makes a mind - Physics and psychology split as brain theories were born
  • p086 Chapter 3 How a brain sees: Constructing reality - Visual reality as illusions that explain how we see art
  • p122 Chapter 4 How a brain sees: Neural mechanisms - From boundary completion and surface flling-in to figure-ground perception
  • p184 Chapter 5 Learning to attend, recognize, and predict the world -
  • p250 Chapter 6 Conscious seeing and invariant recognition - Complementary cortical streams coordinate attention for seeing and recognition
  • p280 Chapter 7 How do we see a changing world? - How vision regulates object and scene persistence
  • p289 Chapter 8 How we see and recognize object motion - Visual form and motion perception obey complementary laws
  • p337 Chapter 9 Target tracking, navigation, and decision-making - Visual tracking and navigation obey complementary laws
  • p00I Preface - Biological intelligence in sickness, health, and technology
  • image p059fig02.05 Bowed serial position curve. This kind of data emphasizes the importance of modelling how our brains give rise to our minds using nonlinear systems of differential equations.
    || Effects of [inter, intra]trial intervals (Hovland 1938). # of errors vs list position. [w (sec), W (sec)] = (2 6) (4 6) (2 126) (4 126). Nonoccurrence of future items reduces the number of errors in response to past items. These data require a real-time theory for their explanation! That is, DIFFERENTIAL equations.
  • image p059fig02.06 The bowed serial position curve illustrates the sense in which "events can go backwards in time" during serial learning.
    || Bow due to backward effect in time. If the past influenced the future, but not conversely: # of errors vs list position; Data (Hovland, Hull, Underwood, etc).
  • image p060fig02.07 Position-specific forward and backward error gradients illustrate how associations can form in both the forward and backward directions in time before the list is completely learned.
    || Error gradients: depend on list position. # of responses vs list position:
    list beginning | anticipatory errors | forward in time
    list middle | anticipatory and perseverative errors | forward and backward in time
    list end | perseverative errors | backward in time
  • image p225fig05.33 Some early ARTMAP benchmark studies. These successes led to the use of ARTMAP, and many variants that we and other groups have developed, in many large-scale applications in engineering and technology; this activity has not abated even today.
    || see Early ARTMAP benchmark studies
  • image p634fig17.06 Summing over a population of cells with binary output signals whose firing thresholds are Gaussianly distributed (left image) generates a total output signal that grows in a sigmoidal fashion with increasing input size (dashed vertical line).
    || How binary cells with a Gaussian distribution of output thresholds generate a sigmoidal population signal. [# of binary cells with threshold T, Total output signal] vs Cell firing thresholds T. Cell population with firing thresholds Gaussianly distributed around a mean value. As input increases (dashed line), more cells in population fire with binary signals. Total population output obeys a sigmoid signal function f. (See the numerical sketch below.)
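    A minimal numerical sketch of the mechanism in this caption (my own illustration, not code from the book): summing the binary outputs of many cells whose firing thresholds are drawn from a Gaussian yields a population response that follows the Gaussian's cumulative distribution, i.e. a sigmoid-shaped signal function f. All names and parameter values below are assumptions chosen only for illustration.

```python
# Minimal sketch (not from the book): binary threshold cells with Gaussian
# thresholds produce a sigmoidal population output as input strength grows.
import numpy as np

rng = np.random.default_rng(0)
thresholds = rng.normal(loc=1.0, scale=0.25, size=10_000)  # Gaussian firing thresholds

def population_output(input_level: float) -> float:
    """Fraction of binary cells whose threshold is exceeded by the input."""
    return float(np.mean(input_level > thresholds))

for x in np.linspace(0.0, 2.0, 9):
    print(f"input {x:4.2f} -> total output {population_output(x):.3f}")
# The printed curve rises slowly, then steeply near the mean threshold, then
# saturates: the sigmoidal signal function f described in the figure caption.
```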
  • Howell's questions about 1993 conference proceedings
  • Historical thinking about quantum [neurophysiology, consciousness]
  • Corona virus models
  • COVID-19 data and models
  • Questions, Successes, Failures
  • incorporate reader questions into theme webPages
  • Navigation: [menu, link, directory]s
  • Notation for [chapter, section, figure, table, index, note]s
  • Theme webPage generation by bash script
  • How can the Great Pricing Waves be correlated with "dead" system?
  • Howell's blog posts to MarketWatch etc
  • USC - A model released in July that has made great improvements over the past few weeks. It is one of the few other models to make daily updates.
  • Introduction
  • Jumping off the cliff and into conclusions
  • Key files - to [view, hear] my commentary
  • Key [results, comments]
  • Part 5 Large Scale Wind Structures, Thunderblog 19Mar2020
  • Part 1 Thunderblog 10Dec2017
  • Part 2 Thunderblog 17Dec2017
  • New corona virus cases/day/population for selected countries
  • International Weather
  • Nuclear spent fuel, high-level radio-active waste
  • Other electric geology concepts
  • Play with the [time, mind]-bending perspective yourself
  • Part 8 Proving the Passage of the Dragon, Thunderblog and video 31Oct2020
  • Questions
  • Questions, Successes, Failures
  • Ratio of actual to semi-log detrended data : [advantages, disadvantages]
  • References - unfortunately, the list is very incomplete, but does provide some links
  • Regular [1,6] month market views
  • Part 10 Reverse Engineering the Earth, Thunderblog and video 28Jan2021
  • Part 9, 1 of 2 San Andreas Fault - A Dragon in Action? Thunderblog and video 18Dec2020
  • Part 3 Secondary effects from electrical deposition, Thunderblog 31Mar2018
  • Part 3 Some storms suck and others blow, Thunderblog 05May2019
  • Special comments
  • Spreadsheet for generating the charts
  • Summary comments
  • Summary - my commentary as part of Perry Kincaid
  • Surface Conductive Faults 11Mar2016
  • Symptoms within a month or so after getting the vaccine
  • Symptoms within one week or so after getting the vaccine
  • Symptoms within two weeks or so after getting the vaccine
  • Part 2 The Cross from the Laramie Mountains, video 29May2021
  • Part 2 The Electric Winds of Jupiter, Thunderblog 05May2019
  • Part 1 Thunderblog 20Jan2016
  • Part 2 Thunderblog 16Feb2017
  • The Monocline Thunderblog 06Oct2016
  • The population in general
  • The Shocking Truth, Thunderblog and video 20Aug2021
  • The Summer Thermopile Thunderblog 21May2017
  • Tornado - The Electric Model Thunderblog 13Jun2017
  • Was the virus made in a lab in China?
  • Why are European-descended countries particularly hard hit?
  • Part 4 Wind Map, Thunderblog 20Jun2019
  • A surprisingly small number of neural architectures can simulate [extensive, diverse] [neuro, psycho]logical data at BOTH the [sub, ]conscious levels, and for [perception, action] of [sight, auditory, touch, language, cognition, emotion, etc]. This is similar to what we see in physics.
  • data from [neuroscience, psychology] : quick list, more details #Life in the plane of a Z-pinch #Lightning-Scarred Earth, Part 1 Thunderblog #Lightning-Scarred Earth, Part 2 Thunderblog like most current Transformers.[TR1-6] #LIL1 linear[TR5-6] Transformers[TR1-2] or linear Transformers).[FWP6][TR5-6] #Links for doing the work #Links to my related work
  • p190 Howell: [neural microcircuits, modal architectures] used in ART -
    bottom-up filters | top-down expectations | purpose
    instar learning | outstar learning | p200fig05.13 Expectations focus attention: [in, out]star often used to learn the adaptive weights; top-down expectations to select, amplify, and synchronize expected patterns of critical features, while suppressing unexpected features
    LGN Lateral Geniculate Nucleus | V1 cortical area | p192fig05.06 focus attention on expected critical features
    EC-III entorhinal stripe cells | CA1 hippocampal place cells | p600fig16.36 entorhinal-hippocampal system has properties of an ART spatial category learning system, with hippocampal place cells as the spatial categories
    auditory orienting arousal | auditory category | p215fig05.28 How a mismatch between bottom-up and top-down patterns can trigger activation of the orienting system A and, with it, a burst of nonspecific arousal to the category level. ART Matching Rule: TD mismatch can suppress a part of the F1 STM pattern; F2 is reset if degree of match < vigilance
    auditory stream with/without [silence, noise] gaps | perceived sound continues? | p419fig12.17 The auditory continuity illusion: backwards in time - how does a future sound let a past sound continue through noise? Resonance!
    visual perception, learning and recognition of visual object categories | motion perception, spatial representation and target tracking | p520fig14.02 pART predictive Adaptive Resonance Theory. Many adaptive synapses are bidirectional, thereby supporting synchronous resonant dynamics among multiple cortical regions. The output signals from the basal ganglia that regulate reinforcement learning and gating of multiple cortical areas are not shown.
    red - cognitive-emotional dynamics
    green - working memory dynamics
    black - see [bottom-up, top-down] lists
    EEG with mismatch, arousal, and STM reset events | expected [P120, N200, P300] event-related potentials (ERPs) | p211fig05.21 Sequences of P120, N200, and P300 event-related potentials (ERPs) occur during oddball learning EEG experiments under conditions that ART predicted should occur during sequences of mismatch, arousal, and STM reset events, respectively.
    Cognitive | Emotional | p541fig15.02 nSTART neurotrophic Spectrally Timed Adaptive Resonance Theory: Hippocampus can sustain a Cognitive-Emotional resonance that can support "the feeling of what happens" and knowing what event caused that feeling. Hippocampus enables adaptively timed learning that can bridge a trace conditioning gap, or other temporal gap between Conditioned Stimulus (CS) and Unconditioned Stimulus (US).

    background colours in the table signify: see simple grepStr search results : #LIST PARSE [linguistic, spatial, motor] working memory #[lists, outline, concept]s to keep in mind from 2020
  • success in Russia below). My guess is that they still have this long-term historical capability to influence strategic decisions from within the democracies, and they will use it to succeed and get the best risk-balanced gains they can from the current situation.
  • using the highest
  • WRONG!! It may help the ready to re-visit comments about the historical thinking about consciousness, which is not limited to quantum consciousness. This complements items below. #Llinas 1998 Recurrent thalamo-cortical resonance #logic vs connectionist #love #lstm #LSTM0 #LSTM1 #LSTM10 #LSTM13 #LSTM17 LSTM (1990s-2005)[LSTM0-6] #LSTM2 #LSTM3 #LSTM4 #LSTM7 #LSTMGRU #LSTMGRU2 #LSTMGRU3 #LSTMPG #LSTM-RL LSTM was refined with my student Felix Gers[LSTM2] LSTM was soon used for everything that involves sequential data such as speech[LSTM10-11][LSTM4][DL1] and videos. LSTM with forget gates[LSTM2] for RNNs.) Resnets[HW2] are a version of this where the gates are always open: g(x)=t(x)=const=1. #M69 #machine consciousness, the need #MACY51 MACY conferences (1946-1953)[MACY51] and the 1951 Paris conference on calculating machines and human thought, #MAD86 Many additional references on this can be found in Sec. 6 of the 2015 survey.[DL1] Many major companies are using it now. See Sec. D & VII. many of them influenced by my overview.[MIR] Many other companies adopted this.[DL4] #Maps of Ukraine [war [forecast, battles], losses, oil, gas pipelines, minerals] #MAR15 (Margin note: Bengio states[YB20] that in 2018 he (Margin note: it has been pointed out that the famous "Turing Test" should actually be called the "Descartes Test."[TUR3,a,b][TUR21]) (Margin note: our 2005 paper on deep learning[DL6,6a] was (Margin note: our 2005 paper on deep RL[DL6,6a] was #market Markov models (HMMs).[BW][BRI][BOU] [HYB12] still used the old hybrid approach and did not compare it to CTC-LSTM. Later, however, Hinton switched to LSTM, too.[LSTM8] Markov models (HMMs)[BW][BRI][BOU] (Sec. XV). Hinton et al. (2012) still used the old hybrid approach[HYB12] and did not compare it to CTC-LSTM. #Mars #Master of Applied Science in Chemical #Maya #MC43 mechanisms[TR5-6] and Fast Weight Programmers[FWP0-2] mentioned by ACM (labeled as A, B, C, D) below: #Mercury #Mesopot #META1 metalearning machines that learn to learn[FWPMETA1-9] #METARL10 method[BP1] whose origins #MGC #microtubules: platforms for [transport, info processing]? #Min 2010 Thalamic reticular networking #MindCode applications to keep in mind #MindCode [identify, library, model, predict, change, heal] #MindCode [learn, evolve]: Grossberg's 'Conscious Mind, Resonant Brain' #MindCode programming code minimize pain, maximize pleasure, drive cars, etc.[MIR](Sec. 0)[DL1-4] Minsky was apparently unaware of this and failed to correct it later.[HIN](Sec. I)[T22](Sec. XIII) Minsky was apparently unaware of this and failed to correct it later.[HIN](Sec. I) #MIR #Missing concepts - among hundreds #missing 'primary links' #missing sub-sections for genetic machinery mitosis detection.[MGC][GPUCNN5,7,8] mitosis detection.[MGC][GPUCNN5,8] #mlp #MLP1 #MLP2 #MOC1 #Modern [philosophical, logical] models of consciousness Modern Transformers are also viewed as RNN alternatives, despite their limitations.[TR3-4] #modules & modal architectures ([micro, macro]-circuits) Monte Carlo (tree) search (MC, 1949),[MOC1-5] More than a decade after this work,[UN1] #MOST Most of the critiques are based on references to original papers and material from the AI Blog.[AIB][MIR][DEC][HIN] Most of them go back to work of 1990-91.[MIR] Most recently, in a paper co-authored with Kent Condie of the New Mexico Institute of Mining and Technology, Socorro, NM, USA[4], Puetz hypothesized causes for a number of geological series. While and these were assessed, definitive cusions cannot be drawn :
      #MOZ #mRNA program code causes neurons to fire? Much later this was called a probabilistic language model.[T22] Much of early AI in the 1940s-70s was actually about theorem proving[ZU48][NS56] multilayer perceptrons with arbitrarily many layers.[DEEP1-2][HIN] #'Multiple Conflicting Hypothesis' for consciousness: #Multiple drafts multiplicative gates).[DEEP1-2][DL1-2][FDL] A paper of 1971[DEEP2] #MV Bodnarescu - hard-nosed engineer (Romanian German) #myBlogs My FWP of 1991[FWP0-1] #NAK72 #NAN1 #NAN5 #NASC1 #NASC6 [NASC6a] J. Schmidhuber. Comment on "Biography: The ABC of computing" by J. Gilbey, Nature 468 p 760-761 (2010). Link. #NAT1 /national/nationalpost/news/editorialsletters/index.html#editorials /national/nationalpost/news/editorialsletters/index.html#letters #Nature's Electrode Thunderblog #Navigation: [menu, link, directory]s #NDR #Neil Howell (my father) and I, 'the two fools who rushed in' Neither in a popular book[AOI] neither involved unsupervised NNs nor were about modeling data nor used gradient descent.[AC20] neither involved unsupervised NNs nor were about modeling data nor used gradient descent.[AC20]) Bengio et al. neither cited the original work[AC90,90b][AC20] nor corrected Neocognitron.[CNN1] Net version called ResNet the most cited NN of the 21st.[MOST] (Citations, however, are a highly questionable measure of true impact.[NAT1]) #NEU45 #neural #Neural correlates of consciousness Neural History Compressor.[UN3] neural networks learning to control dynamic external memories.[PDA1-2][FWP0-1] #neurotransmitter #New corona virus cases/day/population for selected countries /news/weather/#world-city-1day #NHE NN distillation was also republished many years later,[DIST2][MIR][HIN][T22] and is widely used today. NNs and traditional approaches such as Hidden Markov Models (HMMs).[BW][BRI][BOU][HYB12][T22] NNs without hidden layers learned in 1958[R58] NNs with rapidly changing "fast weights" were introduced by v.d. Malsburg (1981) and others.[FAST,a,b] #Nomenclature, acronyms #non-conscious themes notes #non-[Grossberg, TrNN] topics #Non-health effects of the [sun, astronomy] #no permission clause nor in other recent work[DL3,DL3a] did he normalization).[FWP] #Notation for [chapter, section, figure, table, index, note]s #Notes concerning [graphs, analysis, background material] #NOT: simple function of a single RNA input strand, one output RNA #Nowhere near as radical as I (now often called keys and values for self-attention; Sec. 2). (now often called keys and values for self-attention).[TR1-6][FWP] (now often called keys and values for self-attention[TR1-6]). now often viewed as the first conference on AI.[AI51][BRO21][BRU4] #NPM #NPMa #NS56 #[, n]START learning & memory consolidation #nuclear #[Nuclear, wind, solar] - energy and war Numerous references can be found under the relevant section links I-XXI numerous weights of large NNs through very compact codes.[KO0-2][CO1-4] Here we exploited that the #NYT1 #OAI2 #OAI2a #OBJ1 of 2022[GGP] with of deep learning MLPs since 1965[DEEP1-2][GD1-2a] (see Sec. II, XX) of deep learning MLPs since 1965[DEEP1-2] (see Sec. II, XX) of arbitrary depth.[DL1] of deep CNNs through GPUs.[GPUCNN1,3,5][R6] of Ivakhnenko whom he has never cited;[DEEP1-2][R7-R8] see Sec. II, XIII. 
of learning to predict future data from past observations.[AIT1][AIT10] With of my work in 1990.[AC90,90b][AC20][R2] According to Bloomberg,[AV2] Bengio has simply "denied my claims" without Often LBH failed to cite essential prior work, even in their later surveys.[DL3,DL3a][DLC][HIN][MIR](Sec. 21)[R2-R5, R7-R8] Often LBH failed to cite essential prior work.[DL3,DL3a][DLC][HIN][MIR](Sec. 21)[R2-R5,R7,R8,R11] Often LBH failed to cite essential prior work.[DLC][HIN][MIR](Sec. 21) of text compression[SNT] (see Sec. XVI, XVII-1). of text[SNT] (see Sec. XVI). of the Adversarial Curiosity Principle of 1990[AC90-20][MIR](Sec. 5) (see Sec. XVII). of the same size: O(H2) instead of O(H), where H is the number of hidden units. This motivation and a variant of the method was republished over two decades later.[FWP4a][R4][MIR](Sec. 8)[T22](Sec. XVII, item H3) of this post.[T20a][R12] #Old dogs and new tricks #Old [doubts, questions] #OMG #OMG1 #on-center off-surround #Oncolytics - possible bull wedge formation? One motivation reflected by the title of the paper[FWP2] one must at least clarify it later,[DLC] one NN into another.[UN0-2][DIST1-2][MIR](Sec. 2) One of the FWPs of 1991[FWP0-1] is illustrated in the figure. There is one person who published first[BP1] and therefore should get the credit. ones[DL1-2] (but see a 1989 paper[MOZ]). on Google Translate[WU] mentions LSTM over 50 times (see Sec. B). Only a very small number of theories of consciousness are listed on this webPage, compared to the vast number of [paper, book]s on the subject coming out all of the time. "Popular theories" as listed on Wikipedia, are shown, assuming that this will be important for non-experts. But the only ones that really count for this webSite are the "Priority model of consciousness".
      #Only details matter, for the rest I yap on the history of aviation,[NASC1-2] the telephone,[NASC3] the computer,[NASC4-7] resilient robots,[NASC8] and scientists of the 19th century.[NASC9] #Ontogeny: the growth of bioFirmWare #OOPS2 open problem "P=NP?" in his famous letter to John von Neumann (1956).[GOD56][URQ10] #opinions- Blake Lemoine, others or LSTM (1990s-2005)[LSTM0-6] (or negative log probability) of the data representation in the level below.[HIN][T22][MIR] (or negative log probability) of the data representation in the level below.[HIN][T22][MIR] #Other electric geology concepts #Other health effects of the [sun, astronomy] #Other questions pertaining to the graph others built careers on this notion long before LBH recognized this.[DEEP1-2][CNN1][HIN][R8][DL1][DLC] Even deep learning through unsupervised pre-training was introduced by others.[UN1-3][R4][HIN](Sec. II) #Other themes Our 2005 paper on deep RL[DL6,6a] was actually Our adversarial Artificial Curiosity (1990) → GANs (2010s, see Sec. XVII). Our LSTM our LSTM our Highway Network[HW1] Our CNN image scanners were 1000 times faster than previous methods.[SCAN] our deep and fast DanNet (2011)[GPUCNN1-3] as Our fast GPU-based CNN of 2011[GPUCNN1] known as DanNet[DAN,DAN1][R6] Our fast GPU-based[GPUNN][GPUCNN5] our graph NN-like, Transformer-like Fast Weight Programmers of 1991[FWP0-1][FWP6][FWP] which learn to continually rewrite mappings from inputs to outputs (addressed below), and Our LSTM paper[LSTM1] has got more citations our own unsupervised pre-training of deep NNs Our recent MetaGenRL (2020)[METARL10] meta-learns outer-product-like fast weights encoded in the activations of LSTMs.[FWPMETA6] #Overall conjectures (guesses) own [VAN2] but not the original work. p52 of [4], Puetz noted that all his UWS cycles up to the publication of the book in 2009 "... relate closest to either gravitational or electromagnetic forces. Nothing can be found related to eithertrong nuclear force or the weak nuclear force. Neverthelless, neither force should be completely discounted as a potential factor in modulating these cycles. ...". As I note in the section "General Relativity (GR) Space-Time", and since ~2010 elsewhere on my website, I am NOT fond of [GR, QM, dark [matter, enegy, [strong, weak] nuclear force]. I do retain them in the context of "multiple conflicting hypothesis".

      LSTM #Pandemics are a very tiny theme that fits into a 'Universal Wave Series'?

      Ivakhnenko and Lapa (1965, see above)

      In 1958, Frank Rosenblatt not only combined linear NNs and threshold functions (see the section on shallow learning since 1800), he also had more interesting, deeper multilayer perceptrons (MLPs).[R58] Partially based on TR FKI-126-90 (1990).[AC90] partially observable environments.[FWPMETA7] #PDA1 perceptrons through stochastic gradient descent[GD1-3] (without reverse mode backpropagation[BP1]). #personal #personal contacts #[Perspectives, connections] #pharma [CRSP,NTLA,BEAM,BLUE,EDIT,ONC] #Philosophy and sentience

      #PineScript comments #PineScript, SP500USD chart - multi-fractal on-chart comments #PLAG1 #PLAN #PLAN3 #Players, puppets, rhetoric #Play with the [time, mind]-bending perspective yourself #PM0 #PM1 #PO87 #POS postdoc Dan Ciresan[MLP1-2] #PP

      10 years later, the Amari network was republished (and its storage capacity analyzed).[AMH2]

      4 years before a 2014 paper on GANs,[GAN1] my well-known 2010 survey[AC10] summarised the generative adversarial NNs of 1990 as follows: a

      ACM correctly mentions advancements through GPUs. The first to use GPUs for NNs were Jung & Oh (2004),[GPUNN][GPUCNN5]

      A history of AI written in the 1980s would have emphasized topics such as theorem proving,[GOD][GOD34][ZU48][NS56] logic programming, expert systems, and heuristic search.[FEI63,83][LEN83]

      A history of AI written in the 2020s must emphasize concepts such as the even older chain rule[LEI07] and deep nonlinear artificial neural networks (NNs) trained by gradient descent,[MIR](Sec. 21)

      As mentioned in Sec. II, Sejnowski

      Compare other NNs that have "worked on command" since April 1990, in particular, for learning selective attention,[ATT0-3]

      Dr. LeCun himself is well aware of the challenges to scientific integrity in our field:[LECP] "...

      Footnote 1. In 1684, Leibniz was also the first to publish "modern" calculus;[L84][SON18][MAD05][LEI21,a,b] later Isaac Newton was also credited for his unpublished work.[SON18] Their priority dispute,[SON18] however, did not encompass the chain rule.[LEI07-10] Of course, both were building on earlier work: in the 2nd century B.C., Archimedes (perhaps the greatest scientist ever[ARC06]) paved the way for infinitesimals

      Footnote 3. Some claim that the backpropagation algorithm (discussed further down; now widely used to train deep NNs) is just the chain rule of Leibniz (1676) & L

      (Furthermore, "complex networks of modules where backpropagation is performed" were the central theme of my much earlier habilitation thesis (1993).[UN2] For example, our

      In 1960, Henry J. Kelley already had a precursor of backpropagation in the field of control theory;[BPA] see also later work of the early 1960s by Stuart Dreyfus and Arthur E. Bryson.[BPB][BPC][R7] Unlike Linnainmaa

      In 1982, Paul Werbos proposed to use the method to train NNs,[BP2] extending ideas in his 1974 thesis.
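
      A minimal, purely illustrative sketch of the idea discussed in these fragments: reverse-mode application of the chain rule ("backpropagation") used to train a tiny two-layer network by gradient descent. This is not Linnainmaa's or Werbos's code; all sizes and names are assumptions chosen only for illustration.

```python
# Illustrative only: reverse-mode chain rule ("backpropagation") on a tiny
# two-layer network, the method the surrounding text attributes to
# Linnainmaa (1970) in general form and to Werbos (1982) for NN training.
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=(4, 3))          # 4 samples, 3 inputs (toy data)
y = rng.normal(size=(4, 1))          # targets
W1, W2 = rng.normal(size=(3, 5)), rng.normal(size=(5, 1))

for step in range(100):
    h = np.tanh(x @ W1)              # forward pass
    y_hat = h @ W2
    err = y_hat - y                  # gradient of squared error w.r.t. y_hat (up to a factor)
    # backward pass: chain rule applied from the output toward the inputs
    gW2 = h.T @ err
    gW1 = x.T @ ((err @ W2.T) * (1.0 - h**2))   # tanh'(z) = 1 - tanh(z)^2
    W1 -= 0.01 * gW1                 # plain gradient descent updates
    W2 -= 0.01 * gW2

print("final squared error:", float(np.mean(err**2)))
```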

      In 1987, NNs with convolutions were combined by Alex Waibel with weight sharing and backpropagation (see above),[BP1-2] and applied to speech.[CNN1a] Waibel did not call this CNNs but TDNNs.
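
      A minimal sketch of the weight-sharing idea behind TDNNs and CNNs mentioned above: a single small kernel is slid across the whole input sequence, so every time step reuses the same weights, optionally followed by downsampling. The toy data and names are assumptions for illustration only.

```python
# Minimal sketch of weight sharing over time (the TDNN/CNN idea): one small
# kernel is reused at every position of the input sequence.
import numpy as np

rng = np.random.default_rng(2)
signal = rng.normal(size=64)           # toy 1-D input (e.g. a speech feature track)
kernel = rng.normal(size=5)            # shared weights, reused at every position

def conv1d_valid(x: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Apply the shared kernel w at every valid offset of x."""
    k = len(w)
    return np.array([x[i:i + k] @ w for i in range(len(x) - k + 1)])

features = np.maximum(conv1d_valid(signal, kernel), 0.0)   # simple ReLU feature map
downsampled = features.reshape(-1, 2).max(axis=1)          # crude pooling / downsampling
print(features.shape, downsampled.shape)
```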

      In 2022, we also published at ICML a modern self-referential weight matrix (SWRM)[FWPMETA8] based on the 1992 SWRM.[FWPMETA1-5]

      In the early 2000s, Marcus Hutter (while working under my Swiss National Science Foundation grant[UNI]) augmented Solomonoff

      It took 4 decades until the backpropagation method of 1970[BP1-2] got widely accepted as a training method for deep NNs. Before 2010, many thought that the training of NNs with many layers requires unsupervised pre-training, a methodology introduced

      MLPs were also discussed in 1961 by Karl Steinbuch[ST61-95] and Roger David Joseph[R61] (1961). See also Oliver Selfridge

      Note that I am insisting on proper credit assignment not only in my own research field but also in quite disconnected areas,[HIN] as demonstrated by my numerous letters in this regard published in Science and Nature, e.g., on the history of aviation,[NASC1-2] the telephone,[NASC3] the computer,[NASC4-7] resilient robots,[NASC8] and scientists of the 19th century.[NASC9]

      Our system set a new performance record[MLP1] on

      Remarkably, as mentioned above, Amari also published learning RNNs in 1972.[AMH1]

      ResNet, the ImageNet 2015 winner[HW2] (Dec 2015) and currently the

      The 1993 FWP of Sec. 3[FWP2] also was an RNN

      The Fast Weight Programmer[FWP0-1] depicted in Sec. 1 has a slow net unit for each fast weight.
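
      A toy sketch of the additive outer-product fast-weight idea described around here (my own illustration, not the 1991 implementation): a slow net emits key/value-like vectors whose outer product additively reprograms a fast weight matrix, which is then applied to the current input. All dimensions and names are assumptions.

```python
# Toy sketch (not the 1991 code): a "slow" net emits key/value-like vectors whose
# outer product additively reprograms a "fast" weight matrix used on each input.
import numpy as np

rng = np.random.default_rng(3)
d_in, d_fast = 8, 6
# In a full system the slow weights would themselves be trained by gradient
# descent; here they stay random to keep the sketch short.
W_slow = rng.normal(scale=0.1, size=(d_in, 2 * d_fast))
W_fast = np.zeros((d_fast, d_fast))    # fast weights, rewritten at every step

def step(x: np.ndarray) -> np.ndarray:
    global W_fast
    slow_out = np.tanh(x @ W_slow)
    key, value = slow_out[:d_fast], slow_out[d_fast:]
    W_fast = W_fast + np.outer(value, key)   # additive outer-product "programming" instruction
    return W_fast @ key                       # fast net applies the just-updated mapping

for t in range(5):
    print(step(rng.normal(size=d_in)).round(3))
```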

      The present piece also debunks a frequently repeated, misleading "history of deep learning"[S20][DL3,3a] which ignores most of the pioneering work mentioned below.[T22] See Footnote 6.

      The ten priority disputes mentioned in the present Sec. XVII are not the only ones.[R4] Remarkably, three of them

      Today, the most popular FNN is a version of the LSTM-based Highway Net (mentioned below) called ResNet,[HW1-3] which has become the

      Today, the most popular RNN is the Long Short-Term Memory (LSTM) mentioned below, which has become the

      Werbos,[BP2][BPTT1]

      When there is a Markovian interface[PLAN3]

      Yes, this critique is also an implicit critique of certain other awards to LBH.[HIN] Predictability Minimization[PM0-2][AC20]). #Preface pre-training for important applications.[MLP2] previous related work.[BB2][NAN1-4][NHE][MIR](Sec. 15, Sec. 17)[FWPMETA6] #Pribram 1993 quantum fields and consciousness proceedings #prime dimensionless ratios (Poirier) #principles, architecture, function, process principles of binary computation (1679)[L79][LA14][HO66][L03] #Principles, Principia Probably the most important section of this webPage is "Computations with multiple RNA strands". Most other sections provide context.
      problem aka deep learning problem (analyzed a few months later in 1991[VAN1]) problem."[LEC] #Proceeding Table of Contents #[pro, eu]karyotes #professional #projmajr #projmini #[proteomic, neuroinfo, MindCode]*[protein, program]*bioFirmWare provided essential insights for overcoming the problem, through basic principles (such as constant error flow) of what we called LSTM in a tech report of 1995.[LSTM0] #Proving the Passage of the Dragon, Eye of the Storm, Part 8 Thunderblog and video

      See[MIR](Sec. 9)[R4] for my related priority dispute on attention with Hinton. publication with the word combination "learn deep" in the title.[T22]) #Puetz's UWS as a highly-[constrained, specific] Fourier wave series-like analysis? #Puetz - Universal Wave Series (UWS) #PuetzUWS comments purely in the end-to-end differentiable forward dynamics of RNNs.[FWPMETA6] #Quantum approaches to applications #Quantum [concept, approach, context]s #Quantum concepts #Quantum consciousness #Quantum contexts for consciousness #Quentin Fottrell, MarketWatch - deaths of despair #Question - influenza rates : extremely low ?1978?-2013, surging 2013-2020 #Questions #Questions: Grossberg's c-ART, Transformer NNs, and consciousness? #Questions, Successes, Failures #Quick introduction to Puetz 'Universal Wave Series' temporal nested cycles #Quite apart from the issue of the benefits of vaccines quite erroneous ideas about the origins of the universe (see the final section #Quotes #R1 #R2 #R3 #R5 #R58 #R6 #R61 #R62 #random fun themes #random thoughts rapidly learn to solve quickly[LSTM13,17] rapidly learn to solve quickly[LSTM13] #Ratio of actual to semi-log detrended data : [advantages, disadvantages] #RAU1 #Ray Dalio: Changing world order #RCNN1 #reader Howell notes #Rebuttals of the [astronomical, disease] correlation Recently, however, I learnt through a reader that even the BM paper[BM] did not cite prior relevant work Recently, Transformers[TR1] have been all the rage, e.g., generating human-sounding texts.[GPT3] recent work.[DL3,DL3a][DLC] #References #references #references- Grossberg #References: Is the effectiveness of vaccines over-rated, or sometimes problematic? #references- non-Grossberg #References: Tapping, Mathias, and Surkan (TMS) theory #References: USA influenza [cases, deaths] alongside [sunspots, Kp index, zero Kp bins] Regarding attention-based Transformers,[TR1-6] Bengio[DL3a] cites his own team (2014) for "soft attention" without citing my much earlier original work of 1991-1993 on soft attention and linear Transformers.[FWP,FWP0-2,6] regression and the method of least squares[DL1-2] regression and the method of least squares[DL1-2]). #Regular [1,6] month market views Reinforcement Learning (RL),[KAE96][BER96][TD3][UNI][GM3][LSTMPG] #RELU1 reprogram all of its own fast weights through additive outer product-based weight changes.[FWP2] researchers took a fresh look at the problem in the 1980s."[S20] However, the 1969 book[M69] addressed a "problem" of Gauss & Legendre researchers took a fresh look at the problem in the 1980s."[S20] However, as mentioned above, the 1969 book[M69] addressed a "problem" of Gauss & Legendre researchers took a fresh look at the problem in the 1980s."[S20] However, the 1969 book[M69] addressed a "problem" of Gauss & Legendre ResNet, the ImageNet 2015 winner[HW2] (Dec 2015) and currently the ResNet, the ImageNet 2015 winner[HW2] (Dec 2015) which currently gets #Rethinking neural networks #Reverse Engineering the Earth, Eye of the Storm, Part 10 Thunderblog and video reverse mode gradient descent method now known as backpropagation[BP1]); reverse mode of automatic differentiation or backpropagation[BP1]). #reviews #Rhetoric not war #ribosomes #rl RL with DP/TD/MC-based FNNs can be very successful, as shown in 1994[TD2] (master-level backgammon player) and the 2010s[DM1-2a] (superhuman players for Go, chess, and other games). #rnn RNN above,[FWPMETA1-5] #RO98 #Robert Prechter - Socionomics, the first quantitative sociology? 
robot cars were driving in highway traffic, up to 180 km/h).[AUT] Back then, I worked on my 1987 diploma thesis,[META1] which introduced algorithms not just for learning but also for meta-learning or learning to learn,[META] to learn better learning algorithms through experience (now a very popular topic[DEC]). And then came our Miraculous Year 1990-91[MIR] at TU Munich, #Roger Barlizan - Ukrainian and British royalist #Roman Rosenblatt basically had what much later was rebranded as Extreme Learning Machines (ELMs) without proper attribution.[ELM1-2][CONN21][T22] #RUM Rumelhart[RUM] with the "invention" of backpropagation. #Russia #S20 #S59 #S80 #SAFIRE, Aureon.ca #same mRNA code, different mechanism: ergo diff [program, protein]? #SAM: Structured Atom Model of Edo Kaal #San Andreas Fault - A Dragon in Action? Eye of the Storm, Part 9, 1 of 2 Thunderblog and video Sangamagrama and colleagues of the Indian Kerala school.[MAD86-05] #Saturn scales:[LEC] #SCAN #Scandanavia, Baltic Science has a well-established way of dealing with plagiarism (which may be unintentional[PLAG1][CONN21] or not[FAKE2]) science is self-correcting."[SV20] Scientific journals "need to make clearer and firmer commitments to self-correction,"[SV20] as is already the standard in other scientific fields. #SE59 #sec1 #sec2 #sec3 #sec4 #sec5 #sec6 #sec8 Sec. A, B, VI, XI. Sec. B: Natural Language Processing (see also Sec. VI & XI & XVI): Sec. C: Robotics. Sec. Conclusion: Sec. D: Computer Vision Sec. I, A, B, C, D, XVII, VI, and XVI). Sec. I contains 4 subsections Sec. II: Sec. II & Sec. IV is on Turing (1936) and his predecessors (Sec. 1, 2, 3), (Sec. 1, 2, 3, 4). (Sec. 1, 2, 3, 8), (Sec. 2, 3, 4, 5). Sec. XII & XIX & XXI: Modern Sec. XIV: Sec. XI: ACM mentions GPU-accelerated NNs Sec. XIX & XII. Sec. XVII: Sec. XXI: ACM credits LeCun for work on Sec. XX: ACM credits LeCun for work on #Secondary effects from electrical deposition, Sputtering Canyons series Part 3 Thunderblog #Section III. Nanoneurology #Section III. Nanotechnology #Section II. Quantum neurodynamics #Section II. Quantum neurodynamics #Section I. The dendritic microprocess #Section I. The dendritic microprocess #Section IV. Perceptual Processing #Section IV. Perceptual processing (see Executive Summary see above: "Nuclear [material, process, deactivate]s" see above: "Nuclear [material, process, deactivate]s" See Sec. D see Sec. D, XIV. see Navigation: [menu, link, directory]s
      see incorporate reader questions into theme webPage
      (see Sec. II above).[UN] See Sec. I for additional related issues of credit assignment. see Sec. XVII. See also Sec. A. See also [FWP3-3a][FWPMETA7][FWP6] and compare a recent study.[RA21] (See also recent work on unsupervised NN-based abstraction.[OBJ1-5]) See also Sec. II & III. See also Sec. VI & XI & XV. See also Sec. XVIII & XIV & XI & VI. See also the section below : [AH1] See later publications.[AC99][AC02] See overviews[MIR](Sec. 15, Sec. 17) #see-reach to hear-speak See Sec. 1 of the overview:[MIR] (see Sec. D, VI). See Sec. D & See Sec. III. (see Sec. VI & XI & XV). see Sec. XVI. See Sec. X. See the previous section. self-proclaimed "deep learning conspiracy"[DLC1-2] #Sentience (Sept 2012, on cancer detection).[GPUCNN5,8] (Sept 2012, on detection of mitosis/cancer)[GPUCNN5,7,8] sequence-processing generalization thereof.[AMH1] #SHA37 #SHA7a #Shapiro & Benenson: example stream from early MindCode 2005 on... Similarly, the attention weights or self-attention weights (see also[FWP4b-d]) #SIN5 since 1991.[FWP0-3a][TR5-6] Since November 2021: Comments on version 1 of the present report[T21v1] Since November 2021: Comments on version 1 of the report[T22] since the late 1980s.[BW][BRI][BOU] #SK75 #SKO23 smartphones.[DL4] #SMART synchronous matching ART, mismatch triggering #SNT #[software, engineering, other] applications Some called it the Hopfield Network (!) or Amari-Hopfield Network.[AMH3] #Some historical comments Some of the material above was taken from previous AI Blog posts.[MIR] [DEC] [GOD21] [ZUS21] [LEI21] [AUT] [HAB2] [ARC06] [AC] [ATT] [DAN] [DAN1] [DL4] [GPUCNN5,8] [DLC] [FDL] [FWP] [LEC] [META] [MLP2] [MOST] [PLAN] [UN] [LSTMPG] [BP4] [DL6a] [HIN] [T22] #Some of Wilson's key themes someone other than Zuse (1941)[RO98] was Howard Aiken #Some storms suck and others blow, Eye of the Storm, Part 3 Thunderblog Soon afterwards, multilayer perceptrons learned internal representations through stochastic gradient descent in Japan.[GD1-2a] A few years later, modern #So, what does the user have to do to [get, adjust] output? Spatial Averaging.[CNN1] #Special comments #sphere (?spherical functions?) #Spreadsheet for generating the charts #ST #ST61 #[stable, robust, adaptive] learning status & updates

  • #Terry Sejnowski: Is machine intelligence debatable? than any paper by Bengio or LeCun,[R5] that he did it before me.[R3] that his 2015 survey[DL3] does cite Werbos (1974) who however described the method correctly only That is, I separated storage and control like in traditional computers, that learn internal representations (1965),[DEEP1-2][R8] that they are instances of my earlier work.[R2][AC20] the 1991 fast weight update rule (Sec. 6). The additive outer products[FWP0-1] of the Fast Weight Programmers described the attention terminology[FWP2] now used the fast weights of The basic CNN architecture with convolutional and downsampling layers is due to Fukushima (1979).[CNN1] The popular downsampling variant the computationally most powerful NNs of them all.[UN][MIR](Sec. 0) #The core theme : Quantum Psychology The corresponding patent of 1936[ZU36-38][RO98][ZUS21] #The Cross from the Laramie Mountains, Part 2 The earlier Highway Nets perform roughly as well as their ResNet versions on ImageNet.[HW3] The early 1990s, however, saw first exceptions: NNs that learn to decompose complex spatio-temporal observation sequences into compact but meaningful chunks[UN0-3] (see further below), and NN-based planners of hierarchical action sequences for compositional learning,[HRL0] as discussed next. This work injected concepts of traditional "symbolic" hierarchical AI[NS59][FU77] into end-to-end differentiable "sub-symbolic" NNs. #The Electric Winds of Jupiter, Eye of the Storm, Part 2 Thunderblog the fast weights of another NN,[FWP0] essentially the Neural Sequence Chunker[UN0] or the fast weights[FAST,FASTa] of the fast weights[FAST,FASTa,b] of The first application of CNNs with backpropagation to biomedical/biometric images is due to Baldi and Chauvin.[BA93] The first non-learning recurrent NN (RNN) architecture (the Lenz-Ising model) was analyzed by physicists in the 1920s.[L20][I25][K41][W45] the first high-level programming language.[BAU][KNU] The first paper on policy gradients for LSTM. This approach has become very important in reinforcement learning.[LSTMPG] #The first quantitative sociology, and the second universal wave series? the first really deep feedforward NN.[HW1-3] the first sentence of the abstract of the earlier tech report version[DM1] the first working algorithms for deep learning of internal representations (Ivakhnenko & Lapa, 1965)[DEEP1-2][HIN] as well as the first working chess end game player[BRU1-4] The following additional definitions are also quoted from (Wiki2023) :
    "... The frequency splitting phenomenon (FSP) is a critical issue in wireless power transfer (WPT) systems. When the FSP exists, the load power will sharply increase and can be dozens of times of the power obtained at the resonant frequency if the driving frequency varies from the resonant frequency, which seriously affects the system safety. ..." (Liu etal [9]) The highly cited VGG network (2014)[GPUCNN9] The highly successful Transformers of 2017[TR1-2] can be viewed as a combination of my their erroneous claims[GAN1] about their NIPS 2014 paper[GAN1] their own weight change algorithms or learning algorithms[FWPMETA1-5] (Sec. 8). #The Maars of Pinacate, Part One Thunderblog #The Maars of Pinacate, Part Two Thunderblog #Theme webPage generation by bash script "... The model suggests consciousness as a "mental state embodied through TRN-modulated synchronization of thalamocortical networks". In this model the thalamic reticular nucleus (TRN) is suggested as ideally suited for controlling the entire cerebral network, and responsible (via GABAergic networking) for synchronization of neural activity. ..." (Wiki2023) #TUR #TUR1 #TUR3 #[Turing, von Neuman] machines of gene mechanisms? #Turkey, Germany, France, Italy, rest of EU #UDRL1 #Ukraine #UN #UN0 #UN1 #UN2 #UN3 #UN4 #Undefined neuron structures, Universal Function Approximators, Magic science #UNI unintentional[PLAG1][CONN21] or intentional.[FAKE2] universal self-referential formal systems,[GOD][GOD34] #unsupdl upside-down reinforcement learning[UDRL1-2] and its generalizations.[GGP] #USA #USA Center for Disease Control - Leading Causes of Death 2017 #US ages : heroic (revoln->Korea), [creative, effeminate, arrogant], machine #US Center for Disease Control (CDC) - annual flu seasons 2010-2017 used by Transformers[TR1-6] for used gradient descent in LSTM networks[LSTM1] instead of traditional used to break the Nazi code.[NASC6] #[use, modfication]s of c-ART using my NN distillation procedure of 1991.[UN0-1][MIR] (using stochastic units[AC90] like in the much later StyleGANs[GAN2]). #USOIL, Canadian [XEG ETF,CNQ,IMO,SU], 10y T-bill #V #Vaccines: so what should we conclude? #VAN1 [VAN1] S. Hochreiter. Untersuchungen zu dynamischen neuronalen Netzen. Diploma thesis, TUM, 1991 (advisor J. Schmidhuber). PDF.More on the Fundamental Deep Learning Problem. #VAN2 #vanish variables[FWP2] (Sec. 3). Variants of highway gates are also used for certain algorithmic tasks where the pure residual layers do not work as well.[NDR] variants.[TR5-6] #Venus version of this became popular under the moniker "dropout."[Drop1-4][GPUCNN4] "Very Deep Learning" tasks of depth > 1000.[UN2][DL1][UN] "very deep learning" tasks of depth > 1000[UN2] (requiring #VI #VID1 #VID2 #video link #videos #video transcript: Edo Kaal 03Jul2021 The Structured Atom Model | EU2017 #video transcript: Gareth Samuel 03Jul2021 The Structured Atom Model | Thunderbolts #VII #VIII vision (explicitly mentioned by ACM) for the first time[R6] (see Sec. D). #Vladimir Putin von der Malsburg was the first to explicitly emphasize the importance of NNs with rapidly changing weights.[FAST] The second paper on this was published by Feldman in 1982.[FASTa] von der Malsburg who introduced ReLUs in 1973[CMB] (see Sec. XIV). VS-ML can also learn to implement the backpropagation learning algorithm[BP1-4] #war #Was 2007-2009 the modern equivalent to the 1929 crash? Are we going there yet? 
was actually the LSTM of our team,[LSTM0-6] which Bloomberg called the "arguably the most commercial AI achievement."[AV1][MIR](Sec. 4) See Sec. B. was also based on our LSTM. was being used for LSTM (only 5% for the CNNs of Sec. D).[JOU17] was perhaps the first machine with a stored program.[BAN][KOE1] It used pins on was settled in favor of Sepp.[VAN1] #Was the virus made in a lab in China? #Weaker points weights and an adaptive output layer.[R62] So Rosenblatt basically had what much later was rebranded as Extreme Learning Machines (ELMs)[ELM1] #WER87 #We’re All Different and That’s Okay were also discussed in 1943 by McCulloch and Pitts[MC43] and formally analyzed in 1956 by Kleene.[K56] were also discussed in 1943 by neuroscientists Warren McCulloch und Walter Pitts[MC43] and formally analyzed in 1956 by Stephen Cole Kleene.[K56] were first published in March 1991[FWP0-1][FWP6][FWP] we replace the 1991 elementary programming instruction based on additive outer products[FWP0-2] by a delta rule-like[WID] were proposed already in the 1940s/50s[MC43][K56] #What CAN the user change? #What does my TradingView PineScript code do? #What do I think of the book? #What is consciousness? #What is consciousness: from historical to Grossberg #what is currently ignored by callerID-SNNs? #What is LaMDA and What Does it Want? #What is sentience and why does it matter? what Y. LeCun called an "open problem" in 2022.[LEC] When the authors[GAN1] which are formally equivalent to my 1991 FWPs (apart from normalization).[FWP6][FWP] which Hinton,[UN4] Bengio,[UN5] and which is now considered a remaining grand challenge.[LEC] which I started in 1987[META1][META] long before Bengio which learned to defeat human experts in the Dota 2 video game (2018).[OAI2] who wrote about "back-propagating errors" in an MLP with a hidden layer,[R62] but did not yet have #Why are European-decended countries particularly hard hit? #Why are there hexagonal grid cell receptive fields? #why is cART unknown #WI48 #WID62 widely used type of automatic differentiation for differentiable networks of modules[DL2][BP4-5][DLC] Wikipedia lists numerous criticisms of IIT, but I have not yet quoted from that, other than to mention the authors : Williams,[BPTT2][CUB0-2] and others[ROB87][BPTT3][DL1] #Wind Map, Eye of the Storm, Part 4 Thunderblog with Dr. Bengio & Dr. Hinton (see Sec. XVII, I). with my former postdoc Faustino Gomez[FWP5] with NNs.[MIR](Sec. 9) without citing them.[DL1][DLC][HIN][R2-R4][R7-R8] without relating them to the original work,[DLC][S20][T22] although the true history is well-known. without suggesting any fact-based corrections.[HIN]) with several transistors on a common substrate (granted in 1952).[IC49-14] with the foundational work by Gottlob Frege[FRE] (who introduced the first formal language in 1879), #Wojtaszek, Bromley Aug2023 Plutonium-Thorium Fuels with 7LiH Moderator #Wojtaszek, Bromley Jul2022 Uranium-Based Oxy-Carbide in Compact HTGC Reactors work on this (1991),[UN1][UN] not even in his later patent application work on this[UN0-2] work.[HIN][DLC][DL1-2][DEEP1-2][CMB][R7-R8] See Sec. work.[HIN][DLC][DL1-2][DEEP1-2][RELU1-2][R7-R8] See Sec. 
#World War II (WWII) versus 2022 context wrote about "back-propagating errors" in an MLP with a hidden layer[R62] although he did not yet have #WU #X #XI #XII #XIII #XIV #XIX #XV #XVI #XVII #XVIII #XX #XXI #YB20 yield useful internal representations in hidden layers of NNs.[RUM] At least for supervised learning, backpropagation is generally more efficient than Amari #ZU36 #ZU48 Zuse also created the first high-level programming language in the early 1940s.[BAU][KNU]
    status & updates #Stephen Grossberg 2021 Conscious Mind, Resonant Brain Stephen Grossberg may have the ONLY definition of consciousness that is directly tied to quantitative models for lower-level [neuron, general neurology, psychology] data. Foundational models, similar in nature to the small number of general theories in physics to describe a vast range of phenomena, were derived over a period of ?4-5? decades BEFORE they were found to apply to consciousness. That paralleled their use in the very widespread "Universal Wave Series" (UWS), which is by far the most [broad, deep, stunning, insightful] concept that I am aware of for relating periodicities of phenomena across [physics, astronomy, geology, climate, weather, evolutionary biology, history, crops, financial markets, etc]. A great many of the [known, reported] periodicities are described by the UWS as a "factor of 3" series, where each successive cycle is 3 times [more, less] than it #Stephen Puetz - Universal Wave Series #STO51 stochastic gradient descent for multilayer perceptrons (1967),[GD1-3] stochastic gradient descent (SGD, 1951),[STO51-52] #Strangely absent ideas #Strange themes #Strategies for hunting lions #Strategies for preying on democracies and weak [dictatorship, monarchy]s #Stray [thought, quote, history]s #string (Fourier series) #Stripping away all intelligence student Sepp Hochreiter in 1991.[VAN1] #subgoal #Successful [datamodelling, applications] showing effectiveness of cART etc such as HMMs.[BW][BOU][BRI][HYB12] (such as the Boltzmann machine[BM][HIN][SK75][G63][T22]) Such Fast Weight Programmers[FWP0-6,FWPMETA1-7] can learn to memorize past data, e.g., Such Fast Weight Programmers[FWP0-6,FWPMETA1-8] can learn to memorize past data, e.g., such that the NN behaves more and more like some teacher, which could be a human, or another NN,[UN-UN2] or something else. #Summary #Summary comments #Sun superior computer vision (2011, see Sec. D, XVIII), [DL1-2][R2-R8] [UN][UN0-3] and later #Surface Conductive Faults Thunderblog survey (2015),[DL3][DLC] #SV20 #SVM1 #SW1 swarm intelligence,[SW1] and evolutionary computation.[EVO1-7]([TUR1],unpublished) Why? Because back then such techniques drove many successful AI applications. #Symptoms within a month or so after getting the vaccine #Symptoms within one week or so after getting the vaccine #Symptoms within two weeks or so after getting the vaccine #System dynamics of the UWS system identification,[WER87-89][MUN87][NGU89] systems with intrinsic motivation,[AC90-AC95] the system also #T19 #T20a [T20a] J. Schmidhuber (AI Blog, 25 June 2020). Critique of 2018 Turing Award for Drs. Bengio & Hinton & LeCun. A precursor of [T22]. #T21v1 #T22 #Table of Contents #Table of Contents for the presentation #Table of cycles: Puetz's Universal Wave Series (UWS)
    #Tae Kim: AI Chatbots Keep Making Up Facts. Nvidia Has an Answer #Taylors consciousness #TD1 #TD2
    DanNet[GPUCNN1-3] AlexNet[GPUCNN4] Competition[GPUCNN5] DanNet[GPUCNN3a] DanNet[GPUCNN8] DanNet[DAN,DAN1][R6] ResNet,[HW2] a Highway Net[HW1] with open gates
    VGG Net[GPUCNN9] Subscribe to EIR  bad guy Chinese civiliser destroyer Egypt Greek Hebrew Hindu Inca Japan Jupiter knowledge, letters love Mars Maya Mercury Mesopot Roman Saturn Sun tragedy Venus war #The Monocline Thunderblog theorem proving[ZU48] #theory #The overall decline in influenza-attributed mortality cannot be due to vaccines #The population in general The RNN can see its own errors or reward signals called eval(t+1) in the image.[FWPMETA5] The "self-attention" in standard Transformers[TR1-4] combines this with a projection and softmax (using These so-called "Fast Weight Programmers" or "Fast Weight Controllers"[FWP0-1] separated storage and control like in traditional computers, #The Shocking Truth, Thunderblog and video The similar Transformers[TR1-2] combine this with projections The slow net and the fast net of the 1991 system[FWP0-1] in Sec. 2 were feedforward NNs (FNNs), #The Summer Thermopile Thunderblog The title image of the present article is a reaction to an erroneous piece of common knowledge which says[T19] that the use of NNs "as a tool to help computers recognize patterns and simulate human intelligence had been introduced in the 1980s," although such NNs appeared long before the 1980s.[T22] #The underlying basis in [bio, psycho]logical data The very similar Transformers[TR1-2] combine this with projections The VGG network (ImageNet 2014 winner)[GPUCNN9] The weights of a 1987 NN were sums of weights with a large learning rate and weights with a small rate[FASTb][T22] (but have nothing to do with the NN-programming NNs discussed below). the work of Baldi and colleagues.[BA96-03] Today, graph NNs are used in numerous applications. This answer is used by the technique of gradient descent (GD), apparently first proposed by Augustin-Louis Cauchy in 1847[UN0-3] this concept,[AIT7][AIT5][AIT12-13][AIT16-17] as well as applications to NNs.[KO2][CO1-3] This greatly simplified the hardware.[LEI21,a,b] This happened long before the similar work of Bengio (see Sec. XVII).[MIR] Thoralf Skolem[SKO23] (who introduced primitive recursive functions in 1923) and Jacques Herbrand[GOD86] (who identified three times worse performance).[DAN1] Again see Sec. D. through additive fast weight changes (Sec. 5). through "forget gates" based on end-to-end-differentiable fast weights.[MIR](Sec. 8)[FWP,FWP0-1] through tensor-like outer products (1991-2016) and their motivation[FWP2][FWP4a][MIR](Sec. 8) (see also Sec. XVI above). #Thunderbolts.info 04Oct2016 - The Sun's Influence on Consciousness #Thunderbolts.info 31Mar2020 - Geomagnetic Effects on Earth's Biology, Electricity of Life #Thunderbolts.info - the Electric Universe and Health #tie-in with Grossberg's 2021 'Conscious Mind, Resonant Brain' time scales?[LEC] We published answers to these questions in 1990-91: self-supervised Today, everybody is talking about attention when it comes to describing the principles of Transformers.[TR1-2] together with unsupervised/self-supervised pre-training for deep learning.[UN0-3] #top-down bottom-up #Tornado - The Electric Model Thunderblog to self-correction,"[SV20] as is already the standard in other scientific fields. to the 1991 Fast Weight Programmers[MOST] (see this tweet). to the fast weight (which then may be normalized by a squashing function[FWP0]). The to train deep NNs, contrary to claims by Hinton[VID1] who said that "nobody in their right mind would ever suggest" this. Then we to win computer vision contests in 2011[GPUCNN2-3,5] (AlexNet and VGG Net[GPUCNN9] followed in 2012-2014). 
[GPUCNN4] emphasizes benefits of Fukushima to win computer vision contests.[GPUCNN2-3,5] #TR1 #TR3 #TR4 #TR5 Traditionally this is done with recurrent NNs (RNNs) #tragedy #Training #transformer transformer-like[TR1-6][FWP] Transformers[TR1-2] Transformers[TR1-6] Transformers with linearized self-attention[TR5-6] Transformers with linearized self-attention (1991-93).[FWP] Today, both types are very popular. Transformers with linearized self-attention (see above),[FWP0-6][TR5-6] Transformers with linearized self-attention[TR5-6] "Transformer with linearized self-attention."[FWP] #TrNN controls need consciousness #TrNNS&ART theme #TrNNs augment by cART #TrNNs have incipient consciousness