#???
#[?]
#00:20 nuclear decay processes
#00:33 decay process of U235
#00:46 alpha particle emission, problems with the standard explanations
#02:12 SAM basics: nucleus structure, electrostatic, no [neutron, strong force]
#02Mar2022 Howell: more questions
#02Mar2022 msn.com: Will Russia invade Moldova?
#02Mar2022 Rodney Johnson's Take - Guerillas in the Machine
#02Mar2022 Roosevelt treason - atomic bomb [top-secret designs, material, etc] direct to Russia 1943
#03:42 SAM periodic table
#04:06 SAM transmutations confirmed in SAFIRE laboratory
#04:40 Inner nuclear structure dictates chemical properties
#04Mar2022 Howell: Russia preserves some infrastructure? No Ukraine scorched Earth?
#04Mar2022 MktWatch, Opinion: Russia’s invasion of Ukraine: 4 ways this war could end
#04Mar2022 MW/AP: Civilian drone hobbyists in Ukraine join the fight against Russia
#04Mar2022 Rodney Johnson: Saudi Arabia's dilemma
#05:11 Formation of elements doesn't require stars
#05Jan2022 Re: Covid-19 adverse
#05Mar2022 Scenarios by Others
#06:06 Other SAM [assumptions, advantages]
#06Oct2016 Eileen Mckusick - The Sun's Influence on Consciousness
#07Feb2022 Russia has 70% of military capacity in place for full-scale invasion of Ukraine
#07Jan2022 Robert Malone covax interview
#07Jan2022 Total deaths all causes less historical 5 year average, less deaths by covid
#07Mar2022 18:05 kyivindependent.com: Blinken- Ukraine’s using defense support 'effectively' against Russian aggression
#07Mar2022 MW/AP: War in Ukraine: Zelensky government accuses Putin of resorting to ‘medieval siege’ tactics in Russia’s ongoing invasion
#07Mar2022 MW: How China’s Currency Could Come Out a Big Winner in the Ukraine War
#07Mar2022 PSI, opindia.com: Ukraine - Russian govt had raised 'bioweapons' alarm
#07Mar2022 Rodney's Take: 1920 Jones Act, shipped US goods too expensive for Americans
#08Dec2015 Howell - The Greatest of cycles : Human implications?
#08Dec2021 Re: Covid vaccine Adverse Effects
#08Jan2022 Covax versus excess deaths
#08Jan2022 Re: 220107 Steve - UK excess deaths
#09Jan2022 Excess deaths from all causes compared to projection based on previous years
#09Jan2022 https://stevekirsch.substack.com/p/new-big-data-study-of-145-countries
#09Jan2022 Steve Kirsch - vax negative impacts on covid deaths, OurWorldInData excess deaths
#[1]
#1
#10Jan2022 Yet another end of the world - This time, we really are going to die!!
#11Jan2022 RE: Jessica Rose - mRNA covaxes; Lamarckian versus Mendelian heredity
12:15 17Feb2022 Dmytro Sennychenko submitted a letter of resignation in November, following the scandalous privatization of the Bilshovyk plant in Kyiv. Sennychenko said the two events weren’t connected.
14:59 15Feb2022 NATO Chief Jens Stoltenberg said Russia’s announcement it was pulling back some troops after they finished exercises was reason for cautious optimism, but that the evidence could not yet be seen on the ground. Moscow-based analysis center Conflict Intelligence Team said on Feb. 15 that it had only observed movements towards, not away from, Ukraine.
#14Jan2022 If I was Fauci - how might I cover my ass? Fun list to watch for
#14Jan2022 Re: If I was Fauci - how might I cover my ass? Fun list to watch for
#14May2020 Howell - COVID-19 : [incomplete, random, scattered] cart [information, analysis, questions]
#15Mar2022 Howell: Quick summary, still more questions
16:28 26Feb2022 The decision to cut Russia off from the international payment system has not been officially issued yet, but the technical preparations are ongoing, according to Ukraine’s Foreign Minister Dmytro Kuleba. “Ukrainian diplomats dedicate this victory to all defenders of Ukraine.”
#16Dec2021 Re: Pilots and atmospheric phenomena
#1917-18 Spanish flu deaths may have been largely due to secondary infections
1945[KNU]
#1958-59 ?among the world's first? systems of non-linear differential equations for NNs
(1965),[DEEP1-2][R8]
(1970),[BP1,2][R7]
(1970),[BP1-2][R7]
#1976 ART Adaptive Resonance Theory
(1982).[BP2]
(1990s, see Sec. A, B; also for RL, 2003-, see Sec. C)
(1990)[AC90,90b][AC20][R2] which he did not cite;
(1990).[PLAN][MIR](Sec. 11) Same for my linear transformer-like
(1991, see Sec. II & III)
(1991)[HRL0-2] were trained through end-to-end-differentiable chains of such modules.[MIR](Sec. 10)
(1991).[PM1-2][AC20][R2][MIR](Sec. 5)
(1992-2022, Sec. 8).
1992. Based on TR FKI-148-91, TUM, 1991.[UN0] PDF.
1992[FWPMETA1-9][HO1] extended my 1987 diploma thesis,[META1] which introduced algorithms not just for learning but also for meta-learning or learning to learn,[META] to learn better learning algorithms through experience. This became very popular in the 2010s[DEC] when computers were a million times faster.
1997[AC97][AC99][AC02] and 2015-18.[PLAN4-5]
#19Feb2022 Are many covid [case, hospitalization, death]s due to influenza and NOT covid?
1. Journal paper on Long Short-Term Memory, the
#1stdl
1st superhuman result in 2011.[DAN1]
1st superhuman result in 2011.[DAN1] Now everybody is using this approach.
20:02 23Feb2022 Citing anonymous U.S. officials, the newspaper claimed Russia is likely to begin a large-scale invasion of Ukraine within 48 hours. CNN correspondent Katie Bo Lillis confirmed the report, saying that the eastern city of Kharkiv is “at particular risk.”
(2004),[GPUNN][GPUCNN5]
2005 saw the first publication of LSTM with full backpropagation through time and of bi-directional LSTM[LSTM3] (now widely used).
(2010)[MLP1-2]
2018 A.M. Turing Award[R1]
2021-11-02 11:33 Media in Progress Ep. 2: What’s in a name?
2021-11-14 15:25 Golden Gate District to become Kyiv’s cultural hub with own brand
2021-11-16 19:16 Ukraine’s military intercepts Russian drone in Donbas
2021-12-01 19:33 Anti-corruption activists say head of Ukraine’s ‘FBI’ appointed after fake contest
2021-12-02 11:33 Media in Progress Ep. 2: What’s in a name?
2021-12-04 11:37 Q&A with Brian Bonner, ex-chief editor of Kyiv Post
2021-12-04 15:44 Biden: ‘I will not accept Russia’s red lines on Ukraine’
2021-12-04 16:46 Netflix responds to criticism over ‘offensive’ Ukrainian character in ‘Emily in Paris’
2021-12-05 15:13 Ukraine to intensify Covid-19 restrictions in ‘yellow’ zones on Dec. 6
2021-12-06 13:53 Police suspect arson after journalist’s cars found burned
2021-12-07 22:40 Biden, Putin hold talks about Russia’s potential invasion of Ukraine
2021-12-08 14:46 ‘Ukrainians Will Resist’ hashtag trends amid looming Russian invasion
2021-12-08 16:14 Kyiv, 8 oblasts leave ‘red’ quarantine zone
2021-12-08 18:14 Journalist: EU imposes sanctions on Kremlin’s mercenary Wagner Group that fought in Donbas
2021-12-12 14:28 Ukrainian State-Owned Enterprises Weekly – Issue 55
2021-12-12 17:15 NBU head complains of political pressure but isn’t worried about central bank’s independence
2021-12-12 19:02 G7 foreign ministers: Russia will face ‘massive consequences’ if it invades Ukraine
2021-12-14 00:01 Health minister falsely claims Ukraine reached WHO’s target of vaccinating 40% of population
2021-12-14 15:36 One of Lukashenko’s main rivals in 2020 election jailed for 18 years
2021-12-15 16:22 Court of appeal overturns decision favoring Kolomoisky’s company in PrivatBank case
2021-12-15 17:32 Explainer: Why Russia wants autonomy for occupied Donbas (and why Ukraine doesn’t)
2021-12-15 19:08 Infamous Ukrainian judge’s brother released on bail after bribery charge
2021-12-18 20:01 Ukrainian State-Owned Enterprises Weekly – Issue 56
2021-12-20 19:18 Accounting Chamber outlines reasons for Ukrzaliznytsia’s Hr 12 billion in losses in 2020
2021-12-21 00:47 Controversial court’s new ruling might cancel anti-corruption prosecutor contest
2021-12-21 21:59 Poroshenko family’s companies fined Hr 283 million by Anti-Monopoly Committee
2021-12-22 20:52 HBO acquires Ukrainian war drama ‘Bad Roads’
2021-12-22 21:40 Supreme Court rejects prosecutor general’s libel suit against newspaper, anti-graft watchdog
2021-12-22 21:47 Russia has 122,000 troops close to Ukraine’s border
2021-12-23 22:15 Top general: Ukraine’s military will respond to enemy fire
2021-12-27 21:22 Kyiv to create territorial defense headquarters ahead of Russia’s potential invasion
2021-12-28 17:02 Ukrainian documentary ‘Home Games’ available on Netflix in Europe
2021-12-28 19:28 Zelensky’s party lawmaker buys nationwide television channel
2021-12-29 17:17 Year of musical introspection: Ukraine’s best albums of 2021
2021-12-29 21:00 World Bank study reveals effects of global warming on Ukraine’s agriculture, forests
2021-12-30 20:20 Ukraine’s soldiers may soon get better, warmer boots
2022-01-01 19:33 Anti-corruption activists say head of Ukraine’s ‘FBI’ appointed after fake contest
2022-01-04 16:46 Netflix responds to criticism over ‘offensive’ Ukrainian character in ‘Emily in Paris’
2022-01-05 15:43 Statement: 28 Ukrainian NGOs call for action against Russia’s closure of Memorial human rights group
2022-01-07 13:32 Timothy Ash: What Kazakhstan’s protests mean for the global economy
2022-01-07 21:07 Kazakh government regains control with Kremlin’s help amid uprising
2022-01-07 22:20 Who can and can’t join Ukraine’s Territorial Defense Force
2022-01-09 17:01 Robert A. McConnell: Talk won’t deter Putin. Here’s what West can do
2022-01-10 19:01 Court extends Medvedchuk’s house arrest in treason case
2022-01-11 22:26 US Republicans draft bill to designate Ukraine a ‘NATO Plus’ state, sanction Russia
2022-01-12 10:41 How Zelensky’s administration moves to dismantle press freedom in Ukraine
2022-01-12 20:02 Court orders closure of bribery case against top member of Zelensky’s administration
2022-01-13 10:36 Media in Progress Ep. 6: Popular protest, inter-elite feuds or Russian intervention – What’s going on in Kazakhstan?
2022-01-19 21:36 Blinken visits Kyiv, warns Russia might attack ‘at very short notice,’ asks about reforms
2022-01-20 02:03 Biden predicts Russia will ‘move in’ on Ukraine, while Zelensky downplays invasion threat
2022-01-20 18:46 Zelensky responds to Biden: ‘There are no minor incursions’
2022-01-23 12:13 Who is Murayev, the man UK exposes as potential leader of Kremlin’s coup
2022-01-24 02:29 US orders diplomats’ families to leave Kyiv, citing ‘threat of Russian military action’
2022-01-24 18:25 UK begins to withdraw non-essential embassy staff, EU ‘won’t do the same,’ says Borrell
2022-01-25 03:39 James Batchik & Doug Klain: It’s time for Europe to defend Ukraine — and itself
2022-01-25 11:24 Early look at Ukraine’s exhibit at Venice Art Biennale – exploration of world’s exhaustion
2022-01-25 17:31 Transparency International: Ukraine’s fight against corruption stagnated in 2021
2022-01-26 23:58 US, NATO don’t cave in to Russian demands
2022-01-27 08:54 Media in Progress Ep. 7: Company culture – What can make or break a team
2022-01-27 18:41 US shared response to Russia’s security demands with Ukraine before sending
2022-01-27 22:27 Stanislav Aseyev: Russia’s bluff of the century. Will there be a war?
2022-01-28 18:38 Defense minister downplays Russian threat, says it’s similar to that of spring 2021
2022-01-28 21:34 Olena Goncharova: Ukraine is not ‘the Ukraine’ and why it matters now
2022-01-29 15:45 Deputy economy minister: Ukraine’s GDP hit $200 billion for first time in 30 years
2022-01-29 18:48 Ukrainian director detained in Italy at Russia’s request removed from Interpol wanted list
2022-01-30 20:20 Ukraine’s soldiers may soon get better, warmer boots
2022-01-31 10:29 Want to help Ukraine’s military as a foreigner? Here’s what you can do
2022-02-01 18:06 Zelensky issues decree to bolster Ukraine’s military
2022-02-04 11:38 US says closer relations with China will not alleviate economic sanctions imposed on Russia.
2022-02-08 11:51 President’s office denies Macron transition period bill claims.
2022-02-10 11:23 US Senators: Russia’s cyberattacks on Ukraine to prompt sanctions even before potential invasion.
2022-02-10 12:44 Kyiv’s Cold War-era bomb shelters in dire state (PHOTOS)
2022-02-10 12:59 Russia’s war cost Ukraine $280 billion
2022-02-14 16:38 Over 50 IT companies join Ukraine’s ‘special tax regime’ Diia City in first three days
2022-02-14 22:01 Zelensky proclaims Feb. 16, stipulated date of Russian invasion, ‘unity day.’
2022-02-14 23:33 Scholz warns Moscow of ‘wide-reaching’ consequences, stays silent on Nord Stream 2
2022-02-15 21:20 Defense ministry, state banks suffer ‘powerful’ cyberattack
2022-02-18 11:06 Covid-19 in Ukraine: 34,938 new cases, 282 new deaths, and 17,796 new vaccinations.
2022-02-19 23:22 Zelensky’s full speech at Munich Security Conference
2022-02-22 18:48 Ukrainian civilians fearlessly prepare for Russia’s offensive
2022-02-22 21:16 Putin says Russia-backed illegitimate ‘states’ in eastern Ukraine have claim to entire regions of Donetsk, Luhansk
2022-02-23 00:27 Breakdown of Putin’s false narratives to justify aggression against Ukraine
2022-02-24 09:00 Timothy Ash: What Russia’s attack means for the world
2022-02-25 08:59 Ukraine’s military successfully defending the area near Chernihiv.
2022-02-26 03:29 Kazakhstan denies Russia
2022-02-26 06:17 Russia’s war on Ukraine: Where fighting is on now (Feb. 26 live updates)
2022-02-26 08:00 A warehouse of Kyivenergo, capital’s energy generating company, was set on fire.
2022-02-26 22:44 Russia's attack on Kyiv kills 14 military and 6 civilians, including a child - Klitschko
2022-02-27 01:42 Russia’s war on Ukraine: Where fighting is on now (Feb. 27 live updates)
2022-02-27 04:42 European Commission President: Cutting Russian banks off from SWIFT will effectively block Russia's exports and imports
2022-02-27 05:57 Mykolaiv mayor confirms the city is under Ukraine’s control.
2022-02-27 08:22 Enemy's light armored vehicles break into Kharkiv
2022-02-27 11:19 Let's support the unbreakable: NBU opens special account to raise funds for Ukraine's Armed Forces
2022-02-27 12:18 Ukraine parliament proposes UNGA set up tribunal to investigate Putin's crimes
2022-02-27 15:10 Belarus may join Russia in war against Ukraine – Ukraine's ex-defense chief
2022-02-27 15:20 Japan to put sanctions on Putin, support Russia's disconnection from SWIFT
2022-02-28 02:26 McDonald’s and KFC offer food assistance amid Russian invasion
2022-02-28 04:36 Here’s how to support the Ukrainian military
2022-02-28 14:14 Romania supports Ukraine's membership in EU
2022-02-28 17:34 Czech PM supports Ukraine's accession to EU under special procedure
2022-02-28 18:31 President signs application for Ukraine's membership in EU
2022-03-01 00:54 ICC prosecutor to investigate war crimes in Ukraine.
2022-03-02 03:24 Russian paratroopers landed in Kharkiv and attacked one of the city’s military medical centers,
2022-03-02 15:01 EXCLUSIVE: Voice message reveals Russian military unit’s catastrophic losses in Ukraine
2022-03-02 19:53 ECHR suspends all procedures that require action from Ukraine.
2022-03-03 05:51 Canada sanctions 10 people in Russia’s energy sector, offers further support to Ukraine.
2022-03-03 18:47 Q&A with US Chargé d’Affaires Kristina Kvien: ‘From now on, Russia will be a pariah state’
2022-03-03 22:37 Kyiv under shelling: ‘First thing I heard was my child’s scream’
2022-03-04 15:45 Russia attacks, captures Europe’s largest nuclear power plant in Ukraine
2022-03-05 20:33 Ukrainian loses parent to Russian propaganda: ‘I can consider myself an orphan’
2022-03-06 01:55 10 days of suffering. Russia’s war against Ukraine in photos
2022-03-06 06:55 Kyiv resident gives birth during war: ‘I forgot about the bombings only in labor’
2022-03-06 22:45 Amid West's doubts over no-fly zone, Russia destroying Ukrainian airfields to choke country’s own capacities
2022-03-06 23:15 Russia's audacity shows sanctions “not enough” - Zelensky
2022-03-07 00:20 Ukraine demands termination of Russia's and Belarus' membership in IMF, all WB organizations
2022-03-10 00:13 UK fears Russia may be setting stage to use chemical weapons.
2022-03-10 20:52 Russia’s war on Ukraine jeopardizes global food security, increasing famine risk
2022-03-11 04:06 IMF: Default no longer "unlikely event" for Russia.
2022-03-11 07:12 Andriy Shevchenko: Putin won’t stop at Ukraine
2022-03-08 16:13 NYT: Biden expected to ban Russian oil imports.
2022-03-09 00:28 CIA Chief: Putin is not crazy.
#20Dec2020 update, Youyang Gu's comments on closing his forecasting activity
#20Jan2022 Dan's Ukraine question, and Korotayev prediction of possible state collapse in Saudi Arabia
#20Jan2022 emto Geoff Cowper - my thoughts on WWI & Ukraine now
21:55 27Feb2022 In an interview with Associated Press, Kyiv’s Mayor Vitali Klitschko said that ‘Kyiv was encircled’ but ready to fight. His spokesperson said that he misspoke, and such information was “a lie and a manipulation.”
#21Feb2022 Do covax deaths account for ~50% of the official reports of covid deaths?
#21Feb2022 Do [influenza, covax] deaths account for MOST of the official reports of covid deaths?
#21Jan2022 Fentanyl versus covid versus covax, and the infamous downtown eastside of Vancouver
#23Feb2022 Dobler: Russia’s Ballistic Missile deployments along Ukraine’s Eastern Border
#24Jan2022 MW: Fiona Hill - Putin wants to evict the United States from Europe
#[2, 4]-value logic for [protein, information]
#25Jan2022 TradingView: Bitcoin and the Ukraine, Russia
#25Nov2021 Re: Some of Sacha Dobler's recent stuff
#26Feb2022 Dobler: 29% of the West’s Wheat Supply is gone – Ice Age Farmer
#26Feb2022 Howell - AM I A MORON OR WHAT?
#26Feb2022 nationalreview.com: Why the Russians Are Struggling
#27Feb2022 Hugo talks - Skeptical view of war and media posing
#28Feb2022 Howell: new OPEC++, aligned with [China, Russia] economic priorities?
#28Feb2022 Russian [plan, action]s, my naive reflections
#28Jan2022 MW/AP: Russia says it won’t start a war as Ukraine tensions mount
#28Jan2022 MW/AP: U.S. has put some 8,500 troops on higher alert for potential deployment
#2nddl
#[3]
#[4]
555+ References (and many more in the survey[DL1])
5 months later, the similar GPU-accelerated AlexNet won the ImageNet[IM09] 2012 contest.[GPUCNN4-5][R6]
5 years earlier, in 1995, we already had a similar, excellent neural probabilistic text model.[SNT] Bengio[NPM] characterizes it only briefly as "related"
#[8]
#83year detrended SP500
→ our Highway Net (May 2015) → ResNet (Dec 2015, see Sec. D).
#[9]
#A
(A2) Connectionist Temporal Classification by my student Alex Graves et al. (2006).[CTC] Our team successfully applied CTC-trained LSTM to speech in 2007[LSTM4] (also with hierarchical LSTM stacks[LSTM14]).
Long Short-Term Memory (LSTM) recurrent neural network[LSTM1-6] overcomes
2015 survey of deep learning[DL1]
Sec. 1: Introduction
abandoned the softmax, essentially resurrecting the original 1991 system.[FWP0-1] Compare Sec. 6.
a beautiful pattern of exponential acceleration in it,[OMG] which I have presented in many talks since then, and which also made it into Sibylle Berg
about deeper adaptive NNs[R61,R62]
#abstract
#AC
#AC20
#AC90
#AC97
#AC99
academia and industry,[DL4]
According to Bremermann (1982),[BRE]
according to Google Brain.[LSTMGRU3])
achieved by our group 2010-2011[MLP1-2][DAN][DAN1][GPUCNN5][R6]
achieve only 10 billion clicks),[FB17][DL4] Apple
#ack
#ACM16
#ACM18
ACM also explicitly mentions speech recognition, speech synthesis,[AM16][DL1]
#Action Plan: What would I do if I was Putin?
#Action Plan: What would I do if I was the Ukraine?
#Adapt my PineScript program to other [symbol, application]s
Additive FWPs[FWP0-2] (Sec. 1 & 2), however, solve the problem through a dual approach,
A disadvantage addressed in Sec. 2 is that the slow net needs many output units if the fast net is large.
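A minimal NumPy sketch of the additive outer-product trick just described (names and sizes are illustrative assumptions, not the original code): instead of emitting all d×d fast weights directly, the slow net emits two d-dimensional patterns per step, and their outer product is added to the fast weight matrix.

```python
import numpy as np

# Sketch of an additive outer-product Fast Weight Programmer (FWP).
# The "slow net" is reduced here to fixed random projections, purely for
# illustration; in a real FWP its weights are trained by gradient descent.
rng = np.random.default_rng(0)
d_in, d = 8, 16
W_a = rng.normal(0, 0.1, size=(d, d_in))   # slow weights producing pattern a
W_b = rng.normal(0, 0.1, size=(d, d_in))   # slow weights producing pattern b
W_x = rng.normal(0, 0.1, size=(d, d_in))   # slow weights embedding the input
W_fast = np.zeros((d, d))                  # fast weights, reprogrammed each step

def fwp_step(x):
    global W_fast
    a, b = np.tanh(W_a @ x), np.tanh(W_b @ x)  # self-invented activation patterns
    W_fast = W_fast + np.outer(a, b)           # additive update: 2d outputs, not d*d
    return np.tanh(W_fast @ np.tanh(W_x @ x))  # fast net output under current program

ys = [fwp_step(rng.normal(size=d_in)) for _ in range(5)]
```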
#Adrian D'Amico 28Feb2018 - Space Weather and Human Health
A few models of consciousness are summarized on my webPage A quick comparison of Consciousness Theories. Only a few concepts are listed, almost randomly selected except for [Grossberg, Taylor].
After the Executive Summary in Sec. 3, Sec. 4 will split
After the main peer-reviewed publication in 1997[LSTM1][25y97] (now the most cited NN article of the 20th century[MOST]),
#Afterword, list of authors
#AH1
#AH2
John J. Mearsheimer, U of Chicago, GREAT presentation 04-07Jun2015: Why is Ukraine the West’s Fault?
Sec. 6: 1965: First Deep Learning
Sec. 7: 1967-68: Deep Learning by Stochastic Gradient Descent
A,
A, B, C, D
A &
(A) speech recognition,
A: Speech Recognition (see also Sec. VI & XI & XV): The first superior end-to-end neural speech recognition
Abstract & Outline (~300 words),
Sec. 21: Acknowledgments
speech recognition (with our CTC, 2007-15, see Sec. A),
Bachelor of Applied
Sec. 8: 1970: Backpropagation. 1982: For NNs. 1960: Precursor.
B).
B,
(B) natural language processing,
machine translation (2016, see Sec. B),
21 comments on 21 claims by ACM (~8,000 words),
into 21 parts
C,
(C) robotics,
Sec. 2: 1676: The Chain Rule For Backward Credit Assignment
Sec. 9: 1979: First Deep Convolutional NN (1969: Rectified Linear Units)
Conclusion (~2,000 words).
Conclusion and Acknowledgments (~2,000 words).
robotics & video game players (2018-19, see Sec. C),
D,
D &
(D) computer vision,
available for downloading from this site
Sec. D above. Back then, the only really
Executive summary of what
whom they did not cite
whom they did not cite, in contrast to
Sec. 3: Circa 1800: First Neural Net (NN) / Linear Regression / Shallow Learning
Finally,
Sec. 20: The Broader Historic Context from Big Bang to Far Future
Sec. 11: Feb 1990: Generative Adversarial Networks / Artificial Curiosity / NN Online Planners
Sec. 10: 1980s-90s: Graph NNs / Stochastic Delta Rule (Dropout) / More RNNs / Etc
Sec. 18: It
Dr. E.P. Scarlett high school, Calgary matriculation:
CVPR paper on DanNet[GPUCNN3]
image caption generation[DL4] &
brand new, improved version[FWP6] of
Recent work of February 2021[FWP6]
work of June 2021[FWP7] (also with Robert Csordas) points out that the original FWP formulation of 1991[FWP0-1] is more general than the one of linear Transformers: a slow NN continually reprograms the weights of a fast NN with
on-device speech recognition[GSR19]
Alphastar whose brain has a deep LSTM core trained by PG.[DM3]
revisionist narrative of ELMs[ELM2][CONN21]
(Wiki2023)
By the 2010s,[DEC] they were
the 2010s,[DEC]
the mid 2010s,[DEC]
Adversarial Artificial Curiosity), and (5) variants of Transformers (linear Transformers are formally equivalent to my earlier Fast Weight Programmers).
Adversarial Artificial Curiosity), and (5) variants of Transformers (Transformers with linearized self-attention are formally equivalent to my earlier Fast Weight Programmers).
Adversarial Artificial Curiosity), and (5) variants of Transformers (Transformers with linearized self-attention are formally equivalent to the much earlier Fast Weight Programmers).
Artificial Curiosity
artificial curiosity
a simple application[AC]
GANs are instances
GANs are variations
principles of generative adversarial NNs and artificial curiosity (1990),[AC][AC90,90b][AC10][AC20]
Compressed Network Search[CO2]
ad hominem attacks[AH2-3][HIN]
my reply to Hinton
recent debate:[HIN] It is true that in 2018,
June 2020 article[T20a][R12]
DanNet[DAN][DAN1][GPUCNN5]
DanNet,[DAN,DAN1][R6]
our CNNs were deep and fast enough[DAN][DAN1][GPUCNN5]
Critique of Paper by self-proclaimed[DLC1-2] "Deep Learning Conspiracy" (Nature 521 p 436).
Critique of Paper by self-proclaimed[DLC2] "Deep Learning Conspiracy" (Nature 521 p 436).
it is not always clear[DLC]
Annus Mirabilis of 1990-1991,[MIR][MOST]
Annus Mirabilis of 1990-1991.[MIR][MOST]
Annus Mirabilis of 1990-1991.[MIR]
Annus Mirabilis of deep learning.[MIR]
artificial neural network (NN)
recurrent NNs (RNNs)
sequence-processing recurrent NNs (RNNs)
adaptive subgoal generators
More.
deep NNs
drove the shift
is not necessary
(Sec. 19)
unsupervised pre-training for deep NNs
First Very Deep NNs, Based on Unsupervised Pre-Training (1991).
unsupervised pre-training
unsupervised pre-training for deep NNs (1991),[UN1-2]
unsupervised pre-training for deep NNs,[UN1-2]
unsupervised pre-training of NNs,
1991 NN distillation procedure,[UN0-2][MIR](Sec. 2)
compressing or distilling
compressing or distilling one NN into another (1991),
neural knowledge distillation procedure
(Sec. 3,Sec. 4)
the vanishing gradient problem (1991)[VAN1] &
vanishing gradient problem
vanishing gradient problem,[MIR](Sec. 3)[VAN1] Bengio published his own,[VAN2] without citing Sepp.
vanishing gradients (1991),
vanishing gradients (1991)[VAN1] &
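A worked form of the problem, in generic unrolled-RNN notation (this is the standard analysis; the symbols here are generic, not those of the cited thesis):

\[
\frac{\partial h_T}{\partial h_0}
= \prod_{t=1}^{T}\frac{\partial h_t}{\partial h_{t-1}}
= \prod_{t=1}^{T}\operatorname{diag}\!\big(f'(a_t)\big)\,W .
\]

If every factor has norm at most \(\lambda\), then \(\lVert \partial h_T/\partial h_0\rVert \le \lambda^{T}\): for \(\lambda<1\) the error signal vanishes exponentially with depth \(T\); for \(\lambda>1\) it can explode.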
CTC-LSTM
Long Short-Term Memory
Long Short-Term Memory or LSTM (Sec. A),
LSTM.
LSTMs
solutions to it (Sec. A),
solutions to it (Sec. A),[LSTM0-17][CTC]
supervised LSTM.
More.
(more).
principles of generative adversarial NNs and artificial curiosity (1990),[AC90,90b][AC20]
the GAN principle
Predictability Minimization for creating disentangled representations of partially redundant data, applied to images in 1996.[PM0-2][AC20][R2][MIR](Sec. 7)
fast weight programmers[FWP][FWP0-4a]
fast weights.
learning sequential attention
deep learning survey[DL1]
deep learning survey,[DL1] and can also be seen as a short history of the deep learning revolution, at least as far as ACM
fast weight programmers (1991),[FWP0-2,6]
fast weight programmers (1991).[FWP0-2,6]
fast weight programmers[FWP0-2][FWP][ATT][MIR](Sec. 8) since 1991 (see Sec. XVI)
FWPs of 1991[FWP0-1]
have their roots in my lab (1991);[FWP][FWP0-2,6]
Transformers with linearized self-attention were also first published[FWP0-6] in
"soft" attention in the latent space of Fast Weight Programmers (FWPs),[FWP2][FWP] and "hard" attention (in observation space) in the context of RL[ATT][ATT0-1] (1990).
Transformers with "linearized self-attention"[TR5-6]
Neural History Compressor.[UN1]
published.[FWP0-1]
recurrent NNs that learn to generate sequences of subgoals.[HRL1-2][PHD][MIR](Sec. 10)
Highway Net (May 2015).[HW1-3][R5] The Highway Net (see below) is actually the feedforward net version of our vanilla LSTM (see below).[LSTM2] It was the first working, really deep feedforward NN with hundreds of layers (previous NNs had at most a few tens of layers).
Highway Net (May 2015).[HW1-3][R5] The Highway Net is actually the feedforward net version of vanilla LSTM.[LSTM2] It was the first working, really deep feedforward NN with hundreds of layers (previous NNs had at most a few tens of layers).
NNs with over 100 layers (2015),[HW1-3][R5]
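The core Highway layer, in its standard published form (gate \(T\) and transform \(H\), each with its own weights):

\[
y = T(x)\odot H(x) + \big(1-T(x)\big)\odot x,
\qquad T(x)=\sigma(W_T x + b_T).
\]

When the gate closes (\(T(x)\to 0\)) the layer copies its input through unchanged, which is what lets gradients traverse hundreds of layers; a residual block is the special case with the carry fixed open, \(y = H(x) + x\).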
(take all of this with a grain of salt, though[OMG1]).
our CTC-LSTM-based speech recognition (not that of Hinton) had been on most smartphones for years[GSR][GSR15-19][DL4] (see Sec. A, VI, XI, XV). Similarly for machine translation (see Sec. B).
Gottfried Wilhelm Leibniz[L86][WI48] (see above),
LSTM trained by policy gradients (2007).[RPG07][RPG][LSTMPG]
learn to learn was published in 1987.[META][R3]
more citations per year[MOST]
most cited neural network,[MOST] is a version (with open gates) of our earlier
most cited NN of the 20th century.[MOST]
most cited NN of the 21st century.[MOST]
most cited NN,[MOST] is a version (with open gates) of our earlier
were all driven by my lab:[MOST] In 1991, I had the
attention[ATT]
attention[ATT] (Sec. 4)
attention[ATT] (compare Sec. 4).
attention terminology in 1993.[ATT][FWP2][R4]
attention terminology like the one I introduced in 1993[ATT][FWP2][R4]).
our much earlier work on this[ATT1][ATT] although
raw computational power of all human brains combined.[RAW]
LSTM[MIR](Sec. 4)
Turing[TUR] (1936), and Post[POS] (1936).
by myself in 1991[UN][UN0-3] (see below), and later championed by others (2006).[UN4] In fact, it was claimed[VID1]
by ours[UN0-2][UN]
dates back to 1991[UN]
first very deep NNs based on unsupervised pre-training;[UN-UN2]
neural history compressors[UN][UN0-3] learn to represent percepts at multiple levels of abstraction and multiple time scales (see above), while
this class of methods was pioneered in 1991[UN-UN2] (see Sec. II, III).
this type of deep learning dates back to 1991.[UN1-2][UN]
unsupervised pre-training for deep NNs (1991),[UN1-2][UN]
unsupervised pre-training of deep NNs.[UN0-UN2][MIR](Sec. 1)
we had this type of deep learning already in 1991;[UN][UN1-2] see Sec.
1993 paper[FWP2] which
Link.
history of backpropagation
More.[DL2]
automatic email answering[DL4] etc.
I,
I.
I, A, B, C, D, VII, XVIII.
I &
II,
II &
II & XVII (5).
III,
III.
III &
III. Note that
III).[DLC][DEEP1-2][BP1][DL1-2][R7-R8][R2-R4]
Sec. II
Introduction (~300 words),
IV,
IX,
IX &
IX, and
Critique of LBH
I respond to LBH
Sec. 2
Sec. 16: June 1991: Roots of Long Short-Term Memory / Highway Nets / ResNets
Master of Applied Science
Sec. 5: 1958: Multilayer Feedforward NN (without Deep Learning)
Editorials
Letters
Sec. 22: 555+ Partially Annotated References (many more in the award-winning survey[DL1])
Sec. 17: 1980s-: NNs for Learning to Act Without a Teacher
Sec. 4: 1920-1925: First Recurrent NN (RNN) Architecture. ~1972: First Learning RNNs
(Sec. 1),
(Sec. 1) are recurrent and identical.
(Sec. 2).[FWP4a][R4][MIR](Sec. 8)[T22](Sec. XVII, item H3)
(Sec. 3).
(Sec. 4)
Sec. 12: April 1990: NNs Learn to Generate Subgoals / Work on Command
[T22] debunks this justification.
Sec. 19: But Don
Sec. 13: March 1991: NNs Learn to Program NNs. Transformers with Linearized Self-Attention
Sec. 14: April 1991: Deep Learning by Self-Supervised Pre-Training. Distilling NNs
Sec. 15: June 1991: Fundamental Deep Learning Problem: Vanishing/Exploding Gradients
VIII,
VIII &
medical diagnosis (2012, see Sec. VII, XVIII), and many other applications.[DEC]
VII,
VII: ACM explicitly mentions medicine and
(VII) medicine, astronomy, materials science.
VI,
V
V,
V.
V &
XIII,
XIII &
XII,
XII &
XII & XIX &
XIV,
XIV &
XI).
XI,
XI, and
XIX,
XIX &
Fast Weight Programmers (1991, see Sec. XVI) are formally equivalent to linear Transformers (now popular in NLP).
XVIII)
XVIII).
XVIII,
XVIII:
XVIII &
XVIII & XIV & XI & VI)
XVII).
XVII,
XVII &
XVI).
XVI,
XVI: ACM
XV,
XV: ACM credits Bengio for hybrids of NNs and probabilistic models of sequences.
X,
X.
X &
X & XVII).
X.[MIR](Sec. 1)[R8]
XXI,
XXI.
XX):
XX,
XX.
XX &
XX, and 2).
#AI51
#AI, machine intelligence, etc
#AIT1
#AIT20
#AIT7
AlexNet won one;[R6]
All backed up by over 250 references (~9,000 words).
All backed up by over 300 references (over 10,000 words).
#Alleged sentience of artificial intelligence
All of these fields were heavily shaped in the 2010s by our non-CNN methods.[DL1][DL4][AM16][GSR][GSR15][GT16][WU][FB17] See
all possible questions through computation;[WI48]
alpha-beta-pruning (1959),[S59]
Already before ImageNet 2012,[R6]
already in 1965[DEEP1-2][R8] (see Sec. II).
already in 1995.[SNT]
also failed to cite Linnainmaa.[BP1]
also failed to cite Linnainmaa[BP1]
also follow the additive approach.[FWP0-2]
also in the 1970s, especially outside of the Anglosphere.[DEEP2][BP6][CNN1][DL1-2]
also in the 1970s, especially outside of the Anglosphere.[DEEP2][GD1-3][CNN1][DL1-2]
Also see Sec. XIX, II.
alternative[FWP0-1] to recurrent NNs.
although our work[LSTM2] was the one that introduced gated recurrent units.
Although these MLPs did not yet have deep learning, because only the last layer learned,[DL1]
although this work[LSTM2] was the one that introduced gated recurrent units.
#Always-shifting alliances
#AM16
#Amazon Customer - Concerned about vaccines?
#AMH1
#AMH2
#AMH3
a monopoly on winning computer vision competitions.[GPUCNN5] It more than "halved the error rate for object recognition" (ACM
analyzed by physicists Ernst Ising and Wilhelm Lenz in the 1920s.[L20][I24,I25][K41][W45][T22] It settles into an equilibrium state in response to input conditions, and is the foundation of the first learning RNNs (see below).
analyzed ways of implementing gradient descent
many other applications.[DEC]
and contest-winning deep CNNs (2011),[DAN][DAN1][GPUCNN5]
and backpropagation (1960-70)[BPA][BP1] (see Sec. XIX, XII)
and convolutional NNs since 1979[CNN1-4] (see Sec. XVIII, D).
(and also Fukushima[CNN1][DL2]) had long before LeCun.
and Amari[GD1-2]
(and apparently even other award committees[HIN](Sec. I)
(and apparently even other award committees[HIN](Sec. I))
and both of them dating back to 1991, our miraculous year of deep learning.[MIR]
and convolutional NNs (1979),[CNN1]
and CTC[CTC] (2006), which were applied to speech
and deep learning (e.g., Sec. I), ACM lauds
and Transformers[TR1-6]
and forcefully contradict public figures who promote it."[FAKE]
and more.[DL1-2][R2-R8]
and other exciting stuff. Much of this has become very popular, and improved the lives of billions of people.[DL4][DEC][MOST]
and other foundations.[DL1-2][R2-R8]
and other highly cited CNNs[RCNN1-3]
and other topics.[R2-R6]
and policy gradients.[GD1][PG1-3]
and published the chain rule[LEI07-10] (see above), essential ingredient of deep learning and modern AI.
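For reference, the rule itself and its role in backward credit assignment through a depth-\(L\) composition:

\[
(f\circ g)'(x)=f'\big(g(x)\big)\,g'(x),
\qquad
\frac{\partial y}{\partial x}=\prod_{l=1}^{L}\frac{\partial f_l}{\partial f_{l-1}}
\quad\text{for } y=f_L\big(\cdots f_1(x)\cdots\big).
\]

Backpropagation is essentially an efficient ordering of this product, accumulated from the output layer backwards.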
#Andreas Mayer (Swiss German) and Joe Stachulak (Polish) - post-war generation
and recent renewed interest in such methods.[NAN5][FWPMETA6][HIN22]
Andrej Kolmogorov, he founded the theory of Kolmogorov complexity or algorithmic information theory (AIT),[AIT1-22] going beyond traditional information theory[SHA48][KUL]
#Andrew Hall's work
#AND: simple function of two input RNA strands, one output RNA
and some of the erroneous claims it made about my prior work.[AC20]
and the 1948 upgrade of ENIAC, which was reprogrammed by entering numerical instruction codes into read-only memory.[HAI14b]
And the article[RUM] even failed to mention Linnainmaa, the inventor of this famous algorithm for credit assignment in networks (1970),[BP1]
and the first with an internal memory.[BL16] He
and their co-workers have contributed useful improvements of deep learning methods.[CNN2,4][CDI][LAN][RMSP][XAV][ATT14][CAPS]
and through Highway Net-like NNs (2015),[HW1-3][R5] although the principles of CNNs were invented and developed by others since the 1970s.[CNN1-4] See Sec. D & XVIII & XIV
and to fight plagiarism, collusion rings,[LIT21] and systemic academic corruption in all of their more and less subtle forms.[FAKE]
and to fight plagiarism,[FAKE2]
and universal search techniques (1973).[AIT7]
and were able to greatly improve steel defect detection.[ST]
an important benchmark record,[MLP1-2]
#Anne Rooney - Dangerous and inaccurate nonsense
an old area of research seeing renewed interest. Practical AI dates back at least to 1914, when Leonardo Torres y Quevedo (see below) built
Another idea that has drawn attention for several decades is that consciousness is associated with high-frequency (gamma band) oscillations in brain activity. This idea arose from proposals in the 1980s, by Christof von der Malsburg and Wolf Singer, that gamma oscillations could solve the so-called binding problem, by linking information represented in different parts of the brain into a unified experience.[80] Rodolfo Llinás, for example, proposed that consciousness results from recurrent thalamo-cortical resonance where the specific thalamocortical systems (content) and the non-specific (centromedial thalamus) thalamocortical systems (context) interact in the gamma band frequency via synchronous oscillations.[81] ..." (Wiki2023 - Consciousness#Neural_correlates)
Another milestone of 2006 was the training method "Connectionist Temporal Classification" or CTC[CTC] for simultaneous alignment and recognition of sequences. Our team successfully applied CTC-trained LSTM to speech in 2007[LSTM4] (also with hierarchical LSTM stacks[LSTM14]).
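The heart of CTC is a many-to-one collapse map B from per-frame alignments to label sequences: merge repeated symbols, then delete blanks; training sums the probabilities of all alignments that collapse to the target. A tiny illustrative sketch of B (not the library implementation):

```python
BLANK = "-"

def ctc_collapse(path):
    """CTC's map B: merge repeats, then drop blanks ('hh-e-ll-lo-' -> 'hello')."""
    out, prev = [], None
    for s in path:
        if s != prev and s != BLANK:
            out.append(s)
        prev = s
    return "".join(out)

assert ctc_collapse("hh-e-ll-lo-") == "hello"  # the blank separates the two l's
```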
another NN (see Sec. 1).
any type of computation-based AI.[GOD][BIB3][GOD21,a,b]
any type of computation-based AI.[GOD][BIB3][MIR](Sec. 18)[GOD21,21a]
#AOI
In 1972, Shun-Ichi Amari made the Lenz-Ising recurrent architecture adaptive such that it could learn to associate input patterns with output patterns by changing its connection weights.[AMH1] See also Stephen Grossberg
Apart from A, B, C above,
Apart from possible normalization/squashing,[FWP0]
a particular feedforward neural net (NN) called the convolutional NN (CNN).[CNN1-4] The basic CNN architecture with convolutional and downsampling layers is due to Fukushima (1979),[CNN1] who also introduced the now widely used
a particular feedforward NN called the convolutional NN (CNN).[CNN1-4]
#Apparent failures of essentially ALL [medical, scientific] experts, and the mass media?
Apparently the first LSTM journal paper[LSTM1][R5] is now the 20th century
Apparently the first LSTM journal paper[LSTM1][R5] is now the most frequently cited
#Apparent successes of the [medical, scientific] experts?
application of LSTM to speech (2004).[LSTM10]
#Arc Blast - Part One Thunderblog
#Arc Blast - Part Three Thunderblog
#Arc Blast - Part Two Thunderblog
#Arches National Monument, Sputtering Canyons Part 1 Thunderblog
architecture [NEU45].
architectures of recurrent NNs (1925-56)[I25][MC43][K56]
architectures of recurrent NNs (1943-56)[MC43][K56]
are actually a variant of the vanilla LSTM architecture[LSTM2] (2000) which the authors did not cite
are actually light beams).[DL2]
are additive (Sec. 1 & 2).
are expected to become even much more important than they are today.[DL2]
are poor indicators of truly pioneering work.[NAT1]
are related to the 1991 paper[UN1][UN] which in many ways started what people now call deep learning, going beyond
#Arnold Toynbee - Challege and response
#ART - Adaptive Resonance Theory
#ART assess theories of consciousness
#ART augmentation of other research
artificial curiosity and generative adversarial NNs for agents that invent their own problems (see above),[AC90-AC20][PP-PP2][SA17]
artificial curiosity and self-invented problems,[PP][PPa,1,2][AC]
artificial evolution (1954),[EVO1-7]([TUR1],unpublished)
artificial evolution,[EVONN1-3]
artificial evolution[TUR1] and
#ARTMAP associate learned categories across ART networks
#art (painting etc)
#ARTPHONE [gain control, working] memory
#ARTSCENE classification of scenic properties
#ARTSTREAM auditory streaming, SPINET sound spectra
As emphasized earlier:[DLC][HIN]
As I have frequently emphasized since 1990,[AC90][PLAN][META]
As mentioned in Sec. B and XVI, the first superior end-to-end neural machine translation was also based on LSTM.
As mentioned in Sec. XII, backpropagation was actually proposed earlier as a learning method for NNs by Werbos (1982)[BP2-4] (see also Amari
As per the first question from the section [OOPS2][ZUS21]
As recently as of 2021, ACM published yet another misleading deep learning "survey" by LBH,[DL3a] again heavily citing LBH without
Assorted training and non-credit courses
#Astrocyctes
#Astrology? You must be joking!!
as well as Sec. 19 of the overview.[MIR]
as well as Sec. 4 & Sec. 19 of the overview.[MIR]
at ICPR 2012, our DanNet[GPUCNN1-3] won the
at IJCNN 2011 in Silicon Valley, DanNet blew away the competition and achieved the first superhuman visual pattern recognition[DAN1] in an international contest.
at IJCNN 2011 in Silicon Valley, DanNet blew away the competition and achieved the first superhuman visual pattern recognition[DAN1] in an international contest (where LeCun
at IJCNN 2011 in Silicon Valley, our DanNet[DAN][GPUCNN1-3] won the
at multiple levels of abstraction and multiple time scales (see above),[HRL0-2][LEC]
a traditional LSTM domain (see Sec. B).
#ATT
#ATT14
#ATT2
#ATT3
attentional component (the fixation controller)." See [MIR](Sec. 9)[R4].
attention[FWP][ATT] through
at the problem in the 1980s."[S20] However, the 1969 book[M69] addressed a "deep learning problem"
[ATT] J. Schmidhuber (AI Blog, 2020). 30-year anniversary of end-to-end differentiable sequential neural attention. Plus goal-conditional reinforcement learning. We had both hard attention (1990) and soft attention (1991-93).[FWP] Today, both types are very popular.
#auditory continuity illusion
#AUT
#Autonomous systems, networks, robots
#AV1
award can ever change that.[HIN]
#A workable [definition, context, model] for consciousness
#B
2. Maps of Ukraine [war [forecast, battles], losses, oil, gas pipelines, minerals] -
#BA93
#BA96
#Bachelor of Applied Science in Chemical
backing up his denial by any facts; see Sec. XVII.
#backprop
backpropagation by Rumelhart et al. (1985-86)[RUM]
#bad guy
Baldi and Chauvin (1993) had the first application of CNNs with backpropagation to biomedical/biometric images.[BA93]
#BAN
based on LSTM[LSTM0-6] (1990s-2005) and CTC (2006).[CTC]
based on "deep learning" with NNs.[DL1-2][DEC]
Based on TR FKI-126-90 (1990).[AC90]
Basic Long Short-Term Memory[LSTM1] solves the problem by adding at every time step gated new input to a memory cell whose self-connection has a fixed weight of 1.0 (the constant error carousel).
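In symbols (a standard formulation of the original architecture without forget gate; \(i_t, o_t\) are sigmoid gates, \(g_t\) the transformed input):

\[
c_t = c_{t-1} + i_t \odot g_t,
\qquad h_t = o_t \odot \tanh(c_t),
\]

so, holding the gates fixed, \(\partial c_t/\partial c_{t-1} = 1\): the cell carries error back through time without shrinking it.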
#Basis of concept
#BAU
#BB2
C. Robotics & RL etc. Since 2003, our team has used LSTM for Reinforcement Learning (RL) and robotics.[LSTM-RL][RPG][LSTMPG]
beat a pro player in the game of Starcraft, which is theoretically harder than Chess or Go[DM2] in many ways, using
because only the last layer learned,[DL1] Rosenblatt basically had what much later was rebranded as Extreme Learning Machines (ELMs) without proper attribution.[ELM1-2][CONN21][T22]
Before the 1990s, however, RNNs failed to learn deep problems in practice.[MIR](Sec. 0)
(before the similar AlexNet won ImageNet 2012[GPUCNN5][R6] and the similar VGG network[GPUCNN9] won ImageNet 2014).
#BEL53
#Ben Davidson 07Apr2020 Space Weather & Pandemics
#Ben Davidson 23Apr2020 Millions Are Being Murdered, The Killer Cure
#Ben Davidson 23Feb2018 - What To Do With Space Weather Health Information
#Ben Davidson & Suspicious Observers - Space Weather and health
Bengio also claims[YB20] that in 1995
Bengio also writes[YB20] that in
Bengio has also heavily used our LSTM (see Sec. A-C),
#Benoit Mandelbrot fractals: The misbehavior of markets
Benoit Mandelbrot stressed that "... the true power of fractals arises when these are used with time as a fractal dimension ..." ([1]). The book provides a wonderful analysis of the fractal nature of [commodity, financial] markets. Note that Puetz also worked with collaborators on a fractal analysis of geological data from a [fractal, UWS] perspective.
#Bernard Baars 1988 global workspace model
besides LeCun have worked "to speed up backpropagation algorithms"[DL1] (ACM
Bill Gates called this a "huge milestone in advancing artificial intelligence".[OAI2a][MIR](Sec. 4)[LSTMPG]
#Biological context
#Biological similies
#biology, evolution, paleontology
#[bio, neuro, psycho]logy data
#BL16
#Blake Lemoine: Is LaMDA Sentient?
Blatant misattribution and unintentional[PLAG1][CONN21] or intentional[FAKE2] plagiarism are still tainting the entire field of deep learning.[T22]
Bloomberg,[AV1]
#BM
#body21
Boltzmann Machine (BM)[BM] a
#bone [shell, fibre]s
#BOO
#BP1
#BP2
#BP5
#BP6
#BPA
#BPTT2
#brain disorders and disease
#Brain is NOT Bayesian?
#Brain regions, neural networks, resonance
#brain rhythms & Schumann resonances
#BRE
#Bromley, Alexander Dec2018 thorium and depleted uranium in sub-critical GC-PTR
brought essentially unlimited depth to gradient-based supervised recurrent NNs; Highway Nets[HW1-3] brought it to feedforward NNs.[MOST]
brought essentially unlimited depth to gradient-based supervised recurrent NNs in the 1990s; our Highway Nets[HW1-3] brought it to feedforward NNs in May 2015.[MOST]
brought essentially unlimited depth to gradient-based supervised recurrent NNs;[LSTM0-17]
brought essentially unlimited depth to supervised recurrent NNs; Highway Nets[HW1-3] brought it to feedforward NNs.[MOST]
brought essentially unlimited depth to supervised recurrent NNs in the 1990s; our Highway Nets[HW1-3] brought it to feedforward NNs in May 2015.[MOST]
#BRU1
#BRU4
#Building MindCode from [bio, psycho]logical data
Building on previous work[FWPMETA7] on FWPs
Business Week called LSTM "arguably the most commercial AI achievement."[AV1]
But in 2010, our team showed[MLP1-2]
but in a fully neural way (rather than in a hybrid fashion[PDA1][PDA2][DNC]).
but in an end-to-end-differentiable, adaptive, fully neural way (rather than in a hybrid fashion[PDA1-2][DNC]).
(but see a 1989 paper[MOZ]).
#BW
By 1993, the approach solved problems of depth 1000.[UN2]
(By 2003, LSTM variants successfully dealt with language problems of depth up to 30,000[LSTM17]
by computing fast weight changes through additive outer products of self-invented activation patterns[FWP0-1]
By favoring additive operations yielding non-vanishing first derivatives and error flow,[VAN1]
by my brilliant student Sepp Hochreiter a few months later in his 1991 diploma thesis.[VAN1]
by Sherrington & Kirkpatrick[SK75] & Glauber[G63] nor the first working algorithms for deep learning of internal representations (Ivakhnenko & Lapa, 1965)[DEEP1-2][HIN] nor
by Sherrington & Kirkpatrick[SK75] and Glauber.[G63]
#C
#Calculus of War
called AlexNet,[GPUCNN4] without mentioning that our earlier groundbreaking deep GPU-based DanNet[GPUCNN1-3,5-8][DAN] did not need ReLUs at all to win 4 earlier object recognition competitions and to achieve superhuman results already in 2011[GPUCNN1-8][R5-6] (see Sec. XIV).
called Deep Belief Networks (DBNs).[UN4]
called max-pooling was introduced by Weng et al. (1993).[CNN3]
called max-pooling was introduced by Yamaguchi et al. for TDNNs in 1990[CNN3a] and by Juan Weng et al. for higher-dimensional CNNs in 1993.[CNN3]
called max-pooling was introduced by Yamaguchi et al. for TDNNs in 1990[CNN3a] and by Weng et al. for higher-dimensional CNNs in 1993.[CNN3] Since 1989,
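What max-pooling does, as a short runnable sketch (illustrative NumPy, not any paper's original code):

```python
import numpy as np

def max_pool2d(x, k=2):
    """Non-overlapping k x k max-pooling: keep the largest activation per window."""
    h, w = (x.shape[0] // k) * k, (x.shape[1] // k) * k
    return x[:h, :w].reshape(h // k, k, w // k, k).max(axis=(1, 3))

img = np.arange(16.0).reshape(4, 4)
print(max_pool2d(img))   # [[ 5.  7.]
                         #  [13. 15.]]
```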
#CAN
can be found at Scholarpedia[DL2] and in my award-winning survey.[DL1]
can compute a direction in program space where one may find a better program,[AC90]
#caption cut off :
#Captioned images [image, link] problems :
#Captioned images - remaining problems
cards (1679),[L79][L03][LA14][HO66]
#[, c]ARTWORD word perception cycle
#Cellular mechanisms for [protein, information]
century[SHA7a][RAU1] by Heron of Alexandria
#chainrule
championed by Hinton;[UN4][VID1] see Sec. D).
#Chapter 10 - Laminar computing by cerebral cortex
#Chapter 11 - How we see the world in depth
#Chapter 12 - From seeing and reaching to hearing and speaking
#Chapter 13 - From knowing to feeling
#Chapter 14 - How prefrontal cortex works
#Chapter 15 - Adaptively timed learning
#Chapter 16 - Learning maps to navigate space
#Chapter 17 - A universal development code
#Chapter 1 - Overview
#Chapter 2 - How a brain makes a mind
#Chapter 3 - How a brain sees: Constructing reality
#Chapter 4 - How a brain sees: Neural mechanisms
#Chapter 5 - Learning to attend, recognize, and predict the world
#Chapter 6 - Conscious seeing and invariant recognition
#Chapter 7 - How do we see a changing world?
#Chapter 8 - How we see and recognize object motion
#Chapter 9 - Target tracking, navigation, and decision-making
#cheat
chemistry, molecular design, lip reading, speech synthesis,[AM16]
#Chinese
#CHU
Church[CHU] (1935),
cite Linnainmaa (1970),[BP1] the true creator.[BP4-5]
#civiliser
#climate
#CMB
#cnn
#CNN1
#CNN1+
[CNN1a] A. Waibel. Phoneme Recognition Using Time-Delay Neural Networks. Meeting of IEICE, Tokyo, Japan, 1987. First application of backpropagation[BP1-5] and weight-sharing
[CNN1a] A. Waibel. Phoneme Recognition Using Time-Delay Neural Networks. Meeting of IEICE, Tokyo, Japan, 1987. First application of backpropagation[BP1][BP2] and weight-sharing
#CNN2
#CNN3
#CNN3a
CNN of 2011[GPUCNN1] known as DanNet[DAN,DAN1][R6]
CNNs (Dan Ciresan et al., 2011).[GPUCNN1,3,5]
CNNs of 2006.[GPUCNN]
CNNs of 2006.[GPUCNN] In 2011, DanNet became the first pure deep CNN
#CO2
#CogEM Cognitive-Emotional-Motor model
collusion rings,[LIT21] and systemic academic corruption in all of their more and less subtle forms.[FAKE]
#Colorado Plateau, Sputtering Canyons part 2 Thunderblog
#Colton, Bromley Jul2018 mixed oxide thorium based fuels
#Colton, Bromley Mar2021 PT Heavy Water Reactor to Destroy Americium and Curium
combined a linear NN as above with an output threshold function to obtain a pattern classifier (compare his more advanced work on multi-layer networks discussed below).
combines two methods from my lab: LSTM (1990s-2005) and CTC (2006), which were
commonsense reasoning[MAR15] and learning to think.[PLAN4-5]
(compare Sec. 2 and Sec. 4 on attention terminology since 1993).
(Compare related work.[H86][H88][S93])
Compare the 1967-68 work of Amari:[GD1-3] to my knowledge the first to propose and implement stochastic gradient descent[STO51-52]
#Comparison of [TradingView, Yahoo finance] data
competitor.[DAN1] This led to massive interest from industry.
#complementary computing
#Computations with multiple RNA strands
#computer
Computer Vision was revolutionized in the 2010s by a particular feedforward NN called the convolutional NN (CNN).[CNN1-4]
#computing with cellular patterns
#conclusion
conditional jump instruction.[RO98]
#Conference webPage & description
#Conrad Black: American destruction of the British Empire
#Conscious mind, resonant brain: sub-section list
#Conscious mind, resonant brain: Table of Contents
#consciousness
#Consciousness: Grossberg's tie-in of [, non] conscious processes
#conscious vs non-conscious
Consult the Executive Summary and Sec. I-XXI of this critique for more.
containing the now popular multiplicative gates).[DEEP1-2][DL1-2] A paper of 1971[DEEP2] already described a deep learning net with 8 layers, trained by their highly cited method which was still popular in the new millennium,[DL2] especially in Eastern Europe, where much of Machine Learning was born.[MIR](Sec. 1)[R8] LBH failed to cite this, just like they failed to cite Amari,[GD1] who in 1967 proposed stochastic gradient descent[STO51-52] (SGD) for MLPs and whose implementation[GD2,GD2a] (with Saito) learned internal representations at a time when compute was billions of times more expensive than today (see also Tsypkin
control theory and system identification (1950s),[KAL59][GLA85]
#cooperative-competitive
#Corona virus models
#Cosmic/Galactic rays at historical high in summer 2019
#COVID-19 data and models
#Covid-19 vaccine shots
#Cracks in Theory
#Credibility from non-[bio, psycho]logical applications of Grossberg's ART
#Crick-Koch 1990 Towards a neurobiological theory of consciousness
#crypto [BTC,ETH,COIN], 10y T-bill
#CTC
CTC-LSTM is end-to-end-neural and thus very different from (and superior to) the hybrid methods since the late 1980s.[BW][BRI][BOU][HYB12]
#D
#Daily cases charts for countries, by region
#DAN
#DAN1
#[data, software] cart [description, links]
date back to 1991-93.[UN0-2][UN]
#?date? cART consciousness ART
#?date? CLEARS [Cognition, Learning, Expectation, Attention, Resonance, Synchrony]
#?date? CogEm Cognitive-Emotional model
#?date? LAMINART Laminar computing ART
#David Spielhalter Risk of dying if you get coronavirus vs normal annual risk
#Deactivate: comparison QM vs SAM
#Deactivate [U,Pu, etc] -> [Pb, Au], is alchemy back?
debunked unsupervised pre-training (introduced by myself in 1991 and later championed by Hinton),
#DEC
(Dec 2015). Residual nets are a version of Highway Nets[HW1]
dedicate an extra section to attention-based Transformers,[TR1-6] citing Bengio
deductively equivalent[LE18] to the later
deductively equivalent[LE18] to the much later
deductively equivalent[LE18] to the much later
#DEEP1
#DEEP2
deep convolutional NN architecture was first introduced in the 1970s;[CNN1] his very popular ReLU already in 1969.[RELU1-2]
deep convolutional NN architecture was proposed in the 1970s.[CNN1]
Deep learning architectures that can manipulate structured data such as graphs[T22] were
deep learning as "moving beyond shallow machine learning since 2006",[DL7]
deep learning multilayer perceptrons (1965),[DEEP1-2][R8]
deep NNs (2010)[MLP1]
#Definitions and data [problem, limitation]s
#[definitions, models] of consciousness
#Definitions: nuclear [material, process, deactivate]s
#Definitions of consciousness
#Definitions of sentience
#Dehaene–Changeux 1986 global neuronal workspace model
#Delightful finds : ideas that were [new, different] to me
depth that really learned.[DEEP1-2][R8]
depth that really learned.[DEEP1-2][R8] Five years later, modern
described in the 1991-93 papers on Fast Weight Programmers and linear Transformers[FWP0-1,6] (see Sec. XVI, XVII-2).
described the principles of binary computers (1679)[L79][L03][LA14][HO66][LEI21,a,b]
#Description of the Universal Wave Series (UWS)
designed the first machine (the step reckoner) that could perform all four arithmetic operations, and the first with a memory.[BL16]
#destroyer
#Detailed [description, specification]
#DIF1
diploma thesis,[VAN1] which I consider one of the most important documents in the history of machine learning. It also
[DIST1] J. Schmidhuber, 1991.[UN-UN2]
#DIST2
#distill
distilling teacher NNs into student NNs (see above),[UN][UN0-3]
#DL1
#DL2
#DL3
#DL3a
#DL4
#DL6
#DL7
#DLC
#DLC1
#DM1
#DM2
#[DNA, rhibosome, etc] addresses
#DNA transcription to mRNA
doing this).[T22] It was not published until 1970, as discussed below.[BP1,4,5]
do not suffer during sequence learning from the famous vanishing gradient
Dota 2 video game (2018).[OAI2]
#Do these comments have anything to do with consciousness?
#Download
DP and its online variant called Temporal Differences (TD),[TD1-3]
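The tabular TD(0) update, for context (standard notation: step size \(\alpha\), discount \(\gamma\)):

\[
V(s_t) \leftarrow V(s_t) + \alpha\big(r_{t+1} + \gamma V(s_{t+1}) - V(s_t)\big).
\]

Unlike full DP backups, the bootstrapped target \(r_{t+1}+\gamma V(s_{t+1})\) needs no model of the environment, so values can be updated online from sampled transitions.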
#[, d, p]ARTSCAN attentional shroud, binocular rivalry
#Drop1
#drum (Bessel functions)
dynamic programming (DP, 1953),[BEL53]
earlier fast weights of von der Malsburg (1981) and Feldman (1982).[FAST,FASTa-b][FWP]
early adversarial machine learning settings[S59][H90]
#Easter Egg Hunt video
#Eccles keynote, Pribram post-conference viewpoint
#Egypt
#Eileen Mckusick - Mastering the Human Biofield with Tuning Forks
#Electric Earth & the Cosmic Dragon, Eye of the Storm part 7, 1 of 2 Thunderblog and video
#Electric Earth & the Cosmic Dragon, Eye of the Storm part 7, 2 of 2 Thunderblog and video
#Electricity in Ancient Egypt, video 26Aug2023
#Electromagnetic theories of consciousness
#ELM1
else cites."[LECP]
A blog post describing basic ideas[AC][AC90,AC90b][AC20] of GANs.
A blog post describing the basic ideas[AC][AC90, AC90b][AC20] of GANs.
additive neural activations of LSTMs / Highway Nets / ResNets[HW1-3] (Sec. 5)
additive outer product fast weight principle[FWP0-2]
More on the Fundamental Deep Learning Problem.
A misleading "history of deep learning" goes more or less like this: "In 1969, Minsky & Papert[M69]
A misleading "history of deep learning" which goes more or less like this: "In 1969, Minsky & Papert[M69]
Another "survey" of deep learning that does not mention the pioneering works of deep learning [T22].
A "survey" of deep learning that does not mention the pioneering works of deep learning [T22].
attention-based Transformers[TR1-6] are
Bengio claimed[YB20]
better program-modifying program.[FWP0-2][FWPMETA1-5]
Boolean Algebra (1847).[BOO]
Boolean Algebra of 1847.[BOO]
By 2010, when compute was 100 times more expensive than today, both our feedforward NNs[MLP1]
By 2010, when compute was 100 times more expensive than today, both the feedforward NNs[MLP1]
Compare the earlier Neural Architecture Search of Bayer et al. (2009) for LSTM-like topologies.[LSTM7]
Debunking [T19] and [DL3a].
Description of GANs that does not cite the original work of 1990[AC][AC90,AC90b][AC20][R2] (also containing wrong claims about
Fast Weight Programmers (FWPs) were published in 1991-93[FWP0-2]
First application of backpropagation[BP1] to NNs (concretizing thoughts in his 1974 thesis).
First application of backpropagation[BP1] to NNs (concretizing thoughts in Werbos
First publication of what was later sometimes called the Hopfield network[AMH2] or Amari-Hopfield Network.
First publication of what was later sometimes called the Hopfield network[AMH2] or Amari-Hopfield Network,[AMH3] based on the (uncited) Lenz-Ising recurrent architecture.[L20][I25][T22]
First publication of what was later sometimes called the Hopfield network[AMH2] or Amari-Hopfield Network.[AMH3]
H. Bruderer[BRU4] calls that the first conference on AI.
"If you cannot dispute a fact-based message, attack the messenger himself."[HIN]
linear Transformers or Performers (2020-21)[TR5-6]
Mentions the recurrent Ising model[L20][I25] on which the (uncited) Amari network[AMH1,2] is based.
multilayer perceptrons (MLPs) were discussed by Steinbuch[ST61-95] (1961), Joseph[R61] (1961), and Rosenblatt[R62] (1962),
NN-programmed fast weights (Sec. 5).[FWP0-1], Sec. 9 & Sec. 8 of [MIR], Sec. XVII of [T22]
NN-programmed fast weights (Sec. 5 & 1).
emphasis on topics such as support vector machines and kernel methods,[SVM1-4] Bayesian (actually Laplacian or possibly Saundersonian[STI83-85]) reasoning[BAY1-8][FI22] and other concepts of probability theory and statistics,[MM1-5][NIL98][RUS95] decision trees, e.g.,[MIT97]
Precursor of modern backpropagation.[BP1-4]
Probably the first paper on using stochastic gradient descent[STO51-52]
rectified linear units (ReLUs) for NNs (1969).[RELU1] They are now widely used in CNNs and other NNs.
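For concreteness, the rectified linear unit is just the elementwise function f(x) = max(0, x); a minimal illustrative sketch:

    # ReLU: pass positive inputs through, clamp negatives to zero.
    def relu(xs):
        return [max(0.0, x) for x in xs]

    print(relu([-2.0, 0.0, 3.5]))   # [0.0, 0.0, 3.5]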
reddit.com/r/MachineLearning[R1-R12] (the largest machine learning forum with back then over 800k subscribers),
reinforcement learning through neuroevolution[FWP5] (2005-, Sec. 7),
Residual Net or ResNet[HW2] (Dec 2015).
second order tensor products.[FWP0-3a]
single adaptive layer learned in 1958[R58] (Joseph[R61]
slow NN that learns by backpropagation[BP1-4] to rapidly modify
Synthetic Gradients.[NAN1-5]
The comment under reference[UN4] applies here as well.
The first paper on planning with reinforcement learning recurrent neural networks (NNs) (more) and on generative adversarial networks
The Hopfield network or Amari-Hopfield Network was first published in 1972 by Amari.[AMH1] [AMH2] did not cite [AMH1].
This experimental analysis of backpropagation did not cite the origin of the method,[BP1-4] also known as the reverse mode of automatic differentiation.
This work did not cite the earlier LSTM[LSTM0-6] trained by Connectionist Temporal Classification (CTC, 2006).[CTC] CTC-LSTM was successfully applied to speech in 2007[LSTM4] (also with hierarchical LSTM stacks[LSTM14]) and became the first superior end-to-end neural speech recogniser that outperformed the
Turing Machine.[TUR] He rederived the above-mentioned result.[CHU][TUR][HIN][GOD21,21a][TUR21][LEI21,21a]
unsupervised pre-training for deep NNs[UN4] (2006) although
With a brief summary of the generative adversarial neural networks of 1990[AC90,90b][AC20]
#Endocrine system
end-to-end differentiable NN-based subgoal generators for Hierarchical Reinforcement Learning (HRL).[HRL0] Soon afterwards, this was also done with
end-to-end differentiable NN-based subgoal generators[HRL3][MIR](Sec. 10) learn hierarchical action plans through gradient descent (see above). More sophisticated ways of learning to think in abstract ways were published in
end-to-end fashion from scratch by stochastic gradient descent (SGD),[GD1] a method proposed in 1951 by Robbins & Monro.[STO51-52]
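A minimal sketch of the SGD idea in the spirit of Robbins & Monro[STO51-52]: step against a noisy gradient estimated from one randomly drawn example. The toy objective and names (w, lr, grad_sample) are illustrative, not from the cited papers:

    import random

    # Toy problem: fit y = w*x to data generated by y = 2x,
    # minimizing the squared error (w*x - y)^2 one sample at a time.
    def grad_sample(w, data):
        x, y = random.choice(data)     # one random example -> stochastic gradient
        return 2.0 * (w * x - y) * x   # d/dw of (w*x - y)^2

    data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
    w, lr = 0.0, 0.05
    for _ in range(200):
        w -= lr * grad_sample(w, data) # w approaches 2.0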
English version: [CNN1+]. More in Scholarpedia.
#ENS1
ensemble methods,[ENS1-4]
environments.[AIT20,22] He also derived the asymptotically fastest algorithm for all well-defined computational problems,[AIT21]
#equations of the [brain, mind]
equipped with artificial curiosity[SA17][AC90-AC20][PP-PP2][R1]
Ernst Ising and Wilhelm Lenz in the 1920s.[L20][I25][K41][W45][T22] It settles into an equilibrium state in response to input conditions, and is the foundation of the first well-known learning RNNs.[AMH1-2]
Even later surveys by the authors[DL3,3a] failed to cite the prior art.[T22]
Even later surveys by the authors[S20][DLC] failed to cite the prior art.[T22]
#EVO1
#EVONN1
excellent 1995 neural probabilistic text model.[SNT] See also Nakamura and Shikano
#exec
expands material in my Critique of the 2019 Honda Prize[HIN] (~3,000 words).
expected cumulative reward signals.[DL1]
#Explainable AI
(explicitly mentioned by ACM) were actually dominated by LSTM and CTC of our team.[LSTM1-4][CTC]
#Eye of the Storm, Part 1 Thunderblog
#FAKE
#FAKE2
#family of ART.....base, linguistic
#family of ART ....visual, auditory
famous vanishing gradient
#Far beyond the bounds of Wilson's book
#FAST
#FASTb
Fast Weight Programmers.[FWP2][ATT]
#FB17
#[Fibonacci, Fourier, Elliot, Puetz] series comparisons
Finally, my own team showed in 2010[MLP1]
First he implemented the Neural History Compressor above but then did much more:
first introduced to Machine Learning much later by Dechter (1986), and to NNs by Aizenberg et al (2000).[DL2] To my knowledge, LBH have never cited them.
#firstnn
for a book by Rumelhart & McClelland[R5]).
For a FANTASTIC analysis of the conceptual failures of GR, the origins of the mistakes, why it has been so successful, and a more accurate conceptual framework, see Stephen "". From the start of relativity theory with [Poincaré, Lorentz] (at least that
for a variant of our vanilla LSTM architecture[LSTM2] (2000) which he did not cite
for deep NNs.[UN0-4][HIN](Sec. II)[MIR](Sec. 1)
for feedforward NNs in 2010 → our DanNet (2011) → AlexNet (2012); VGG Net (2014) (see Sec. D).
for image synthesis[GAN1] (also mentioned by ACM in Sec. XVIII).
formal Algebra of Thought (1686)[L86][WI48] was
#Formulae for Puetz UWS
formulated in the general RL framework.[UNI]
for recurrent NNs in the 1990s → our LSTM (see Sec. A-C) and
for such systems.[S80] See also
for synthesis of realistic images,[GAN1,2]
#For whom the bell tolls
#fractal [dendrite, axon]s
#Fractional Order Calculus (FOC)
#FRE
#Frequency beats
from 1990[AC90,90b][AC20] (see also surveys[AC09-10]). This principle
from 2007[LSTM4,14]
from the section [HIN](Sec. II)[MIR]
Fukushima and Waibel (see Sec. D).
Fukushima who introduced ReLUs in 1969[RELU1-2] (see Sec. XIV).
#full video transcript
#Functionalism
functions of two variables[HO1] (more on LSTM and fast weights in Sec. 5).
further extended the DanNet of 2011.[MIR](Sec. 19)[MOST]
#future
#Future objectives
#Future related work
#FWP
#FWP0
#FWP1
#FWP2
#FWP3
#FWP4a
#FWP4b
#FWP5
#FWP6
#FWPMETA1
#FWPMETA5
#FWPMETA6
#FWPMETA7
#FWPMETA8
#gan
#GAN0
#GAN1
GANs[GAN0-1] (2010-2014) are actually
#Gary Marcus: Current LLMs do NOT possess 'Artificial General Intelligence' (AGI)
"gated recurrent units (GRU)"[LSTMGRU]
#GD'
#GD1
#GD2
#GDa
#Gems from my recent reading, ~2015-2020
#General [limitation, constraint]s
#generation
Generative Adversarial Networks (GANs) have become very popular.[MOST] They were first published in 1990 in Munich under the moniker Artificial Curiosity.[AC90-20][GAN1]
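For reference, the adversarial objective as formalized in the 2014 GAN paper[GAN1], a two-player minimax game between a generator G and a discriminator D (the 1990 curiosity work is claimed to contain the same adversarial principle):

    \min_G \max_D \; \mathbb{E}_{x \sim p_{\mathrm{data}}}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big]

D is trained to tell real data from generated samples, while G is trained to make D fail; the 1990 curiosity setup likewise pits a generator of outputs against a separate prediction network in zero-sum fashion.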
Germany and Switzerland (LSTM & CTC; see Sec. A) long before Hinton
#Germany's hesitance : Petroleum then, natural gas now
#GGP
#Giulio Tononi 2004 Integrated information theory
given set.[AC20][AC][T22](Sec. XVII)
#Glenn Borchardt
Glenn Borchardt collaborated with Puetz in [3]. He emphasizes vortex motion and his "concept of infinity" from 2004 to partially explain Puetz
goal-conditioned policy generators (2022),[GGP]
#GOD
#GOD56
#Gods and plants - summary
#Going further - themes, videos, presentations, courses
Goodfellow eventually admitted that PM is adversarial (his paper[GAN1] still claims the opposite), but emphasized that it
#Go through image number sequence for missing images
#GoTo
GPU-accelerated NNs (2004),[GPUNN][DAN][DAN1][GPUCNN5]
#GPUCNN
#GPUCNN1
#GPUCNN2
#GPUCNN3a
#GPUCNN4
#GPUCNN5
#GPUCNN8
#GPUCNN9
#GPUNN
gradient descent procedure[BP1-4][BPA][R7])
#graph
#Greek
#Grossberg 2021: cellular evolution and top-down-bottom-up mechanisms
#Grossberg OR[anticipated, predicted, unified] the [experimental result, model]s
#Grossberg: other consciousness theories
#Grossberg part of webSite
#Grossbergs ART- Adaptive Resonance Theory
#Grossbergs cellular patterns computing
#Grossberg's comments for some well-known consciousness theories
#Grossbergs complementary computing
#Grossbergs Consciousness: neural [architecture, function, process, percept, learn, etc]
#Grossbergs cooperative-competitive
#Grossbergs [core, fun, strange] concepts
#Grossbergs equations of the mind
#Grossbergs laminar
#Grossbergs list of [chapter, section]s
#Grossbergs list of [figure, table]s
#Grossbergs list of index
#Grossbergs modal architectures
#Grossbergs modules (microcircuits)
#Grossberg's [non-linear DEs, CogEm, CLEARS, ART, LAMINART, cART] models
#Grossberg's other comments
#Grossbergs overview
#Grossbergs paleontology
#Grossbergs quoted text
#Grossbergs what is consciousness
#Grossberg: why ART is relevant to consciousness in Transformer NNs
#Ground Currents and Subsurface Birkeland Currents - How the Earth Thinks? Eye of the Storm part 9, 2 of 2 Thunderblog and video
#GSR
#GSR15
YYG infections,
Amazon Customer - Concerned about vaccines? Make an informed decision with this book. Classic on the debate which has raged for a century.
#H86
#HAB1
Haber-Bosch process for creating artificial fertilizer, without which the world could feed at most 4 billion people.[HAB1-2]
had just become accessible in wealthier academic labs. An experimental analysis of the known method[BP1-2]
#HAI14b
#hardware
#Harmonics of [,D]UWS
has been widely used for exploration in Reinforcement Learning[SIN5][OUD13][PAT17][BUR18]
#Haunting implications of a possible relation between flu and the Kp index
have "LSTM" in their title.[DEC]
have their conceptual and technical roots in my labs in Munich and Lugano
,[MOST]
#HE49
#Hebrew
He later reused our end-to-end neural speech recognizer[LSTM4][LSTM14] as a postdoc in Hinton
Heron of Alexandria[RAU1] in the 1st century). The telephone (e.g., Meucci 1857, Reis 1860, Bell 1876)[NASC3]
he was both reviewer and editor of my summary[ATT2] (1990; see Sec. XVI above).
He was the reviewer of my 1990 paper[ATT2]
highly cited method which was still popular in the new millennium,[DL2] especially in Eastern Europe, where much of Machine Learning was born.[MIR](Sec. 1)[R8] Ivakhnenko did not call it an NN, but that
#high_school
#highway
Highway Nets perform roughly as well as ResNets[HW2] on ImageNet.[HW3] Highway layers are also often used for natural language processing, where the simpler residual layers do not work as well.[HW3] Variants of highway gates are also used for certain algorithmic tasks, where the simpler residual layers again do not work as well.[NDR] More.
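A minimal sketch of the two layer types being compared, in scalar form with illustrative toy transforms (H and T stand in for learned sublayers):

    import math

    def H(x, w):    # toy nonlinear transform (stands in for a learned sublayer)
        return math.tanh(w * x)

    def T(x, wg):   # toy sigmoid gate in (0, 1)
        return 1.0 / (1.0 + math.exp(-wg * x))

    def residual_layer(x, w):
        # ResNet: transform plus unconditional identity shortcut
        return H(x, w) + x

    def highway_layer(x, w, wg):
        # Highway: a learned gate t mixes the transform and the identity
        t = T(x, wg)
        return t * H(x, w) + (1.0 - t) * x

A residual layer behaves like a highway layer whose gates are fixed open, which is the sense in which ResNet is described elsewhere in this document as a Highway Net with open gates.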
#HIN
#Hindu
[HIN] J. Schmidhuber (AI Blog, 2020). Critique of Honda Prize for Dr. Hinton. Science must not allow corporate PR to distort the academic record. See also [T22].
Hinton (2012) and Bengio (XV)
Hinton (2012)[GPUCNN4] characterizes
Hinton[AOI]
Hinton[ATT3] (2010)
Hinton[DIST2] (2006) did not cite my much earlier original
#hippocampus IS a cognitive map!
his diploma thesis which I had the pleasure to supervise.[VAN1]
His formal Algebra of Thought (1686)[L86][WI48] was
his own work:[ATT3]
His patent application of 1936[ZU36-38][Z36][RO98][ZUS21]
#Historical pandemics
#Historical thinking about quantum [neurophysiology, consciousness]
history of previous inputs, our combinations of RL algorithms and LSTM[LSTM-RL][RPG] have become standard, in particular, our
H. Larochelle, G. E. Hinton. Learning to combine foveal glimpses with a third-order Boltzmann machine. NIPS 2010. This work is very similar to [ATT0-2] which the authors did not cite.
#HO07
#HO1
#home
/home/bill/web/Neural nets/TrNNs_ART/[definitions, models] of consciousness.html#Antonio Damasio 1999 Body and Emotion in the making of consciousness
/home/bill/web/Neural nets/TrNNs_ART/[definitions, models] of consciousness.html#Bernard Baars 1988 global workspace model
/home/bill/web/Neural nets/TrNNs_ART/[definitions, models] of consciousness.html#Crick-Koch model
/home/bill/web/Neural nets/TrNNs_ART/[definitions, models] of consciousness.html#Dehaene–Changeux model
/home/bill/web/Neural nets/TrNNs_ART/[definitions, models] of consciousness.html#Electromagnetic theories of consciousness
/home/bill/web/Neural nets/TrNNs_ART/[definitions, models] of consciousness.html#Functionalism
/home/bill/web/Neural nets/TrNNs_ART/[definitions, models] of consciousness.html#Giulio Tononi 2004 Integrated information theory
/home/bill/web/Neural nets/TrNNs_ART/[definitions, models] of consciousness.html#Historical thinking about consciousness
/home/bill/web/Neural nets/TrNNs_ART/[definitions, models] of consciousness.html#Llinas 1998 Recurrent thalamo-cortical resonance
/home/bill/web/Neural nets/TrNNs_ART/[definitions, models] of consciousness.html#Multiple drafts
/home/bill/web/Neural nets/TrNNs_ART/[definitions, models] of consciousness.html#quantum processes in neuron microtubules
/home/bill/web/Neural nets/TrNNs_ART/[definitions, models] of consciousness.html#Selected_models_of_consciousness:
/home/bill/web/Neural nets/TrNNs_ART/[definitions, models] of consciousness.html#Stanislas Dehaene 2014 neural global workspace model
/home/bill/web/Neural nets/TrNNs_ART/[definitions, models] of consciousness.html#Thalamic reticular networking
/home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#AI, machine intelligence, etc
/home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#ART - Adaptive Resonance Theory
/home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#ARTMAP associate learned categories across ART networks
/home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#art (painting etc)
/home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#ARTPHONE [gain control, working] memory
/home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#ARTSCENE classification of scenic properties
/home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#ARTSTREAM auditory streaming, SPINET sound spectra
/home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#auditory continuity illusion
/home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#behavior-mind-brain link
/home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#biology, evolution, paleontology
/home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#[bio, neuro, psycho]logy data
/home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#brain disorders and disease
/home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#Brain is NOT Bayesian?
/home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#brain rythms & Schuman resonances
/home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#[, c]ARTWORD word perception cycle
/home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#classical mind-body problem
/home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#CogEM Cognitive-Emotional-Motor model
/home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#complementary computing
/home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#computing with cellular patterns
/home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#Consciousness
/home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#conscious vs non-conscious
/home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#cooperative-competitive
/home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#Credibility from non-[bio, psycho]logical applications of Grossberg's ART
/home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#[, d, p]ARTSCAN attentional shroud, binocular rivalry
/home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#equations of the [brain, mind]
/home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#Explainable AI
/home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#Grossberg: other consciousness theories
/home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#hippocampus IS a cognitive map!
/home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#informational noise suppression
/home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#[intra, inter]-cellular process
/home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#laminar computing
/home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#LAMINART vison, speech, cognition
/home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#learning and development
/home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#LIST PARSE [linguistic, spatial, motor] working memory
/home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#logic vs connectionist
/home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#modules & modal architectures ([micro, macro]-circuits)
/home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#Navigation: [menu, link, directory]s
/home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#neurotransmitter
/home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#[, n]START learning & memory consolidation
/home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#on-center off-surround
/home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#Principles, Principia
/home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#see-reach to hear-speak
/home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#SMART synchronous matching ART, mismatch triggering
/home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#[software, engineering, other] applications
/home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#top-down bottom-up
/home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#What is consciousness?
/home/bill/web/Neural nets/TrNNs_ART/Grossbergs [core, fun, strange] concepts.html#Why are there hexagonal grid cell receptive fields?
/home/bill/web/Neural nets/TrNNs_ART/Grossbergs list of [chapter, section]s.html#Chapter 10 - Laminar computing by cerebral cortex
/home/bill/web/Neural nets/TrNNs_ART/Grossbergs list of [chapter, section]s.html#Chapter 11 - How we see the world in depth
/home/bill/web/Neural nets/TrNNs_ART/Grossbergs list of [chapter, section]s.html#Chapter 12 - From seeing and reaching to hearing and speaking
/home/bill/web/Neural nets/TrNNs_ART/Grossbergs list of [chapter, section]s.html#Chapter 13 - From knowing to feeling
/home/bill/web/Neural nets/TrNNs_ART/Grossbergs list of [chapter, section]s.html#Chapter 14 - How prefrontal cortex works
/home/bill/web/Neural nets/TrNNs_ART/Grossbergs list of [chapter, section]s.html#Chapter 15 - Adaptively timed learning
/home/bill/web/Neural nets/TrNNs_ART/Grossbergs list of [chapter, section]s.html#Chapter 16 - Learning maps to navigate space
/home/bill/web/Neural nets/TrNNs_ART/Grossbergs list of [chapter, section]s.html#Chapter 17 - A universal development code
/home/bill/web/Neural nets/TrNNs_ART/Grossbergs list of [chapter, section]s.html#Chapter 1 - Overview
/home/bill/web/Neural nets/TrNNs_ART/Grossbergs list of [chapter, section]s.html#Chapter 2 - How a brain makes a mind
/home/bill/web/Neural nets/TrNNs_ART/Grossbergs list of [chapter, section]s.html#Chapter 3 - How a brain sees: Constructing reality
/home/bill/web/Neural nets/TrNNs_ART/Grossbergs list of [chapter, section]s.html#Chapter 4 - How a brain sees: Neural mechanisms
/home/bill/web/Neural nets/TrNNs_ART/Grossbergs list of [chapter, section]s.html#Chapter 5 - Learning to attend, recognize, and predict the world
/home/bill/web/Neural nets/TrNNs_ART/Grossbergs list of [chapter, section]s.html#Chapter 6 - Conscious seeing and invariant recognition
/home/bill/web/Neural nets/TrNNs_ART/Grossbergs list of [chapter, section]s.html#Chapter 7 - How do we see a changing world?
/home/bill/web/Neural nets/TrNNs_ART/Grossbergs list of [chapter, section]s.html#Chapter 8 - How we see and recognize object motion
/home/bill/web/Neural nets/TrNNs_ART/Grossbergs list of [chapter, section]s.html#Chapter 9 - Target tracking, navigation, and decision-making
/home/bill/web/Neural nets/TrNNs_ART/Grossbergs list of [chapter, section]s.html#Preface
/home/bill/web/Neural nets/TrNNs_ART/Grossbergs overview.html#The underlying basis in [bio, psycho]logical data
/home/bill/web/Neural nets/TrNNs_ART/Introduction.html#Credibility from non-[bio, psycho]logical applications of Grossberg's ART
/home/bill/web/Neural nets/TrNNs_ART/Introduction.html#Grossberg's c-ART, Transformer NNs, and consciousness?
/home/bill/web/Neural nets/TrNNs_ART/Introduction.html#Questions: Grossberg's c-ART, Transformer NNs, and consciousness?
/home/bill/web/Neural nets/TrNNs_ART/opinions- Blake Lemoine, others.html#Blake Lemoine: Is LaMDA Sentient?
/home/bill/web/Neural nets/TrNNs_ART/Pribram 1993 quantum fields and consciousness proceedings.html#Historical thinking about quantum [neurophysiology, consciousness]
/home/bill/web/Neural nets/TrNNs_ART/Pribram 1993 quantum fields and consciousness proceedings.html#Howells questions about 1993 conference proceedings
/home/bill/web/Neural nets/TrNNs_ART/Quantum consciousness.html#Historical thinking about quantum [neurophysiology, consciousness]
/home/bill/web/Neural nets/TrNNs_ART/webWork/pMenuTopCopyright TrNNs_ART.html#Grossberg
/home/bill/web/Neural nets/TrNNs_ART/webWork/pMenuTopCopyright TrNNs_ART.html#GrossVideo
/home/bill/web/Neural nets/TrNNs_ART/webWork/pMenuTopCopyright TrNNs_ART.html#home
/home/bill/web/Neural nets/TrNNs_ART/webWork/pMenuTopCopyright TrNNs_ART.html#TrNN_ART
/home/bill/web/Neural nets/TrNNs_ART/webWork/pMenuTopCopyright TrNNs_ART.html#TrNNs_ART
/home/bill/web/Neural nets/TrNNs_ART/webWork/pMenuTopHelp TrNNs_ART.html#Grossberg
/home/bill/web/Neural nets/TrNNs_ART/webWork/pMenuTopHelp TrNNs_ART.html#GrossVideo
/home/bill/web/Neural nets/TrNNs_ART/webWork/pMenuTopHelp TrNNs_ART.html#home
/home/bill/web/Neural nets/TrNNs_ART/webWork/pMenuTopHelp TrNNs_ART.html#TrNN_ART
/home/bill/web/Neural nets/TrNNs_ART/webWork/pMenuTopHelp TrNNs_ART.html#TrNNs_ART
/home/bill/web/Neural nets/TrNNs_ART/webWork/pMenuTopMenu TrNNs_ART.html#Grossberg
/home/bill/web/Neural nets/TrNNs_ART/webWork/pMenuTopMenu TrNNs_ART.html#GrossVideo
/home/bill/web/Neural nets/TrNNs_ART/webWork/pMenuTopMenu TrNNs_ART.html#home
/home/bill/web/Neural nets/TrNNs_ART/webWork/pMenuTopMenu TrNNs_ART.html#TrNN_ART
/home/bill/web/Neural nets/TrNNs_ART/webWork/pMenuTopMenu TrNNs_ART.html#TrNNs_ART
/home/bill/web/Neural nets/TrNNs_ART/webWork/pMenuTopStatus TrNNs_ART.html#Grossberg
/home/bill/web/Neural nets/TrNNs_ART/webWork/pMenuTopStatus TrNNs_ART.html#GrossVideo
/home/bill/web/Neural nets/TrNNs_ART/webWork/pMenuTopStatus TrNNs_ART.html#home
/home/bill/web/Neural nets/TrNNs_ART/webWork/pMenuTopStatus TrNNs_ART.html#TrNN_ART
/home/bill/web/Neural nets/TrNNs_ART/webWork/pMenuTopStatus TrNNs_ART.html#TrNNs_ART
/home/bill/web/Neural nets/TrNNs_ART/What is consciousness: from historical to Grossberg.html#Consciousness: table of comparisons
/home/bill/web/ProjMajor/Sun pandemics, health/corona virus/#Howell comments : Covax 'how might I cover my ass?', initial draft list of [random, scattered] ideas
/home/bill/web/ProjMajor/Sun pandemics, health/corona virus/#Howell comments : Jessica Rose's analysis of VAERS Data, increase in Deaths Following covax Shots
/home/bill/web/ProjMajor/Sun pandemics, health/corona virus/#Howell comments : Kyle Beattie's Bayesian analysis of covax - ~30% increases in [case, death]s
/home/bill/web/ProjMajor/Sun pandemics, health/corona virus/#Howell comments : Pardekooper's videos are handy to get started with database usage
/home/bill/web/ProjMajor/Sun pandemics, health/corona virus/Howell - corona virus.html#Corona virus models
/home/bill/web/ProjMajor/Sun pandemics, health/corona virus/Howell - corona virus.html#Cosmic/Galactic rays at historical high in summer 2019
/home/bill/web/ProjMajor/Sun pandemics, health/corona virus/Howell - corona virus.html#COVID-19 data and models
/home/bill/web/ProjMajor/Sun pandemics, health/corona virus/Howell - corona virus.html#Daily cases charts for countries, by region
/home/bill/web/ProjMajor/Sun pandemics, health/corona virus/Howell - corona virus.html#Howells blog posts to MarketWatch etc
/home/bill/web/ProjMajor/Sun pandemics, health/corona virus/Howell - corona virus.html#Is the cure worse than the disease?
/home/bill/web/ProjMajor/Sun pandemics, health/corona virus/Howell - corona virus.html#Jumping off the cliff and into conclusions
/home/bill/web/ProjMajor/Sun pandemics, health/corona virus/Howell - corona virus.html#New corona virus cases/day/population for selected countries
/home/bill/web/ProjMajor/Sun pandemics, health/corona virus/Howell - corona virus.html#Questions, Successes, Failures
/home/bill/web/ProjMajor/Sun pandemics, health/corona virus/Howell - corona virus.html#Spreadsheet for generating the charts
/home/bill/web/ProjMajor/Sun pandemics, health/influenza/Howell - influenza virus.html#Astronomical correlates of pandemics
/home/bill/web/ProjMajor/Sun pandemics, health/influenza/Howell - influenza virus.html#Howell - USA influenza [cases, deaths] alongside [sunspots, Kp index, zero Kp bins]
/home/bill/web/ProjMajor/Sun pandemics, health/influenza/Howell - influenza virus.html#Influenza pandemics - Tapping, Mathias, and Surkan (TMS) theory
/home/bill/web/ProjMajor/Sun pandemics, health/influenza/Howell - influenza virus.html#Is the effectiveness of vaccines over-rated?
/home/bill/web/ProjMajor/Sun pandemics, health/influenza/Howell - influenza virus.html#Quite apart from the issue of the benefits of vaccines
/home/bill/web/ProjMajor/Sun pandemics, health/influenza/Howell - influenza virus.html#Rebuttals of the [solar, disease] correlation
/home/bill/web/ProjMajor/Sun pandemics, health/_Pandemics, health, and the sun.html#Robert Prechter - Socionomics, the first quantitative sociology?
/home/bill/web/ProjMini/Kaal- Structured Atom Model/Kaal SAM vs QM: deactivation.html#Definitions: nuclear [material, process, deactivate]s
/home/bill/web/ProjMini/Kaal- Structured Atom Model/Kaal Structured Atom Model vs Quantum Mechanics.html#Definitions: nuclear [material, process, deactivate]s
/home/bill/web/pubinfo.html#EIR
/home/bill/web/webOther/Wickson website/webWork/pMenuTopCopyright TrNNs_ART.html#Grossberg
/home/bill/web/webOther/Wickson website/webWork/pMenuTopHelp TrNNs_ART.html#Grossberg
/home/bill/web/webOther/Wickson website/webWork/pMenuTopMenu TrNNs_ART.html#Grossberg
/home/bill/web/webOther/Wickson website/webWork/pMenuTopStatus TrNNs_ART.html#Grossberg
/home/bill/web/webWork/pMenuTopCopyright.html#career
/home/bill/web/webWork/pMenuTopCopyright.html#<:class:>
/home/bill/web/webWork/pMenuTopCopyright.html#computer
/home/bill/web/webWork/pMenuTopCopyright.html#home
/home/bill/web/webWork/pMenuTopCopyright.html#hosted
/home/bill/web/webWork/pMenuTopCopyright.html#market
/home/bill/web/webWork/pMenuTopCopyright.html#myBlogs
/home/bill/web/webWork/pMenuTopCopyright.html#neural
/home/bill/web/webWork/pMenuTopCopyright.html#personal
/home/bill/web/webWork/pMenuTopCopyright.html#project
/home/bill/web/webWork/pMenuTopCopyright.html#projects
/home/bill/web/webWork/pMenuTopCopyright.html#projMajr
/home/bill/web/webWork/pMenuTopCopyright.html#projmajr
/home/bill/web/webWork/pMenuTopCopyright.html#projMini
/home/bill/web/webWork/pMenuTopCopyright.html#projmini
/home/bill/web/webWork/pMenuTopCopyright.html#reviews
/home/bill/web/webWork/pMenuTopCopyright.html#videos
/home/bill/web/webWork/pMenuTopHelp.html#career
/home/bill/web/webWork/pMenuTopHelp.html#<:class:>
/home/bill/web/webWork/pMenuTopHelp.html#computer
/home/bill/web/webWork/pMenuTopHelp.html#home
/home/bill/web/webWork/pMenuTopHelp.html#hosted
/home/bill/web/webWork/pMenuTopHelp.html#incorporate reader questions into theme webPage
/home/bill/web/webWork/pMenuTopHelp.html#incorporate reader questions into theme webPages
/home/bill/web/webWork/pMenuTopHelp.html#market
/home/bill/web/webWork/pMenuTopHelp.html#myBlogs
/home/bill/web/webWork/pMenuTopHelp.html#Navigation: [menu, link, directory]s
/home/bill/web/webWork/pMenuTopHelp.html#neural
/home/bill/web/webWork/pMenuTopHelp.html#Notation for [chapter, section, figure, table, index, note]s
/home/bill/web/webWork/pMenuTopHelp.html#personal
/home/bill/web/webWork/pMenuTopHelp.html#project
/home/bill/web/webWork/pMenuTopHelp.html#projects
/home/bill/web/webWork/pMenuTopHelp.html#projMajr
/home/bill/web/webWork/pMenuTopHelp.html#projmajr
/home/bill/web/webWork/pMenuTopHelp.html#projMini
/home/bill/web/webWork/pMenuTopHelp.html#projmini
/home/bill/web/webWork/pMenuTopHelp.html#reviews
/home/bill/web/webWork/pMenuTopHelp.html#Theme webPage generation by bash script
/home/bill/web/webWork/pMenuTopHelp.html#videos
/home/bill/web/webWork/pMenuTopMenu.html#career
/home/bill/web/webWork/pMenuTopMenu.html#<:class:>
/home/bill/web/webWork/pMenuTopMenu.html#computer
/home/bill/web/webWork/pMenuTopMenu.html#home
/home/bill/web/webWork/pMenuTopMenu.html#hosted
/home/bill/web/webWork/pMenuTopMenu.html#market
/home/bill/web/webWork/pMenuTopMenu.html#myBlogs
/home/bill/web/webWork/pMenuTopMenu.html#neural
/home/bill/web/webWork/pMenuTopMenu.html#personal
/home/bill/web/webWork/pMenuTopMenu.html#project
/home/bill/web/webWork/pMenuTopMenu.html#projects
/home/bill/web/webWork/pMenuTopMenu.html#projMajr
/home/bill/web/webWork/pMenuTopMenu.html#projmajr
/home/bill/web/webWork/pMenuTopMenu.html#projMini
/home/bill/web/webWork/pMenuTopMenu.html#projmini
/home/bill/web/webWork/pMenuTopMenu.html#reviews
/home/bill/web/webWork/pMenuTopMenu.html#videos
/home/bill/web/webWork/pMenuTopStatus.html#career
/home/bill/web/webWork/pMenuTopStatus.html#<:class:>
/home/bill/web/webWork/pMenuTopStatus.html#computer
/home/bill/web/webWork/pMenuTopStatus.html#home
/home/bill/web/webWork/pMenuTopStatus.html#hosted
/home/bill/web/webWork/pMenuTopStatus.html#market
/home/bill/web/webWork/pMenuTopStatus.html#myBlogs
/home/bill/web/webWork/pMenuTopStatus.html#neural
/home/bill/web/webWork/pMenuTopStatus.html#personal
/home/bill/web/webWork/pMenuTopStatus.html#project
/home/bill/web/webWork/pMenuTopStatus.html#projects
/home/bill/web/webWork/pMenuTopStatus.html#projMajr
/home/bill/web/webWork/pMenuTopStatus.html#projmajr
/home/bill/web/webWork/pMenuTopStatus.html#projMini
/home/bill/web/webWork/pMenuTopStatus.html#projmini
/home/bill/web/webWork/pMenuTopStatus.html#reviews
/home/bill/web/webWork/pMenuTopStatus.html#videos
#Home TrNN&ART Status:
#hosted
#How can the Great Pricing Waves be correlated with
#Howell 2011: the need for machine consciousness
#Howell comments : Covax 'how might I cover my ass?'
#Howell comments : Jessica Rose's analysis of VAERS Data, increase in Deaths Following covax Shots
#Howell comments : Kyle Beattie's Bayesian analysis of covax - ~30% increases in [case, death]s
#Howell : comments on selected [paper, presentation]s
#Howell comments : Pardekooper's videos are handy to get started with database usage.
#Howell - FAR MORE Americans will die from the recession than corona virus
#Howell: questions about SAM
#Howells blog posts to MarketWatch etc
#Howells questions about 1993 conference proceedings
#Howell's TradingView chart - USOIL snakes, ladders, Tchaichovsky
#Howell - USA influenza [cases, deaths] alongside [sunspots, Kp index, zero Kp bins]
However, it became really deep in 1991 in my lab,[UN-UN3] which has
However, even after a common publication,[VAN3] Bengio published papers[VAN4][XAV]
However, "hierarchical feature representation" in deep learning networks is what Ivakhnenko & Lapa (1965)[DEEP1-2]
However, "hierarchical feature representation" in deep learning networks is what Ivakhnenko & Lapa (1965)[DEEP1-2] (and also Fukushima[CNN1][DL2]) had long before LeCun.
However, Section 2 of the same 1991 paper[FWP0]
However, the basic CNN architecture with convolutional and downsampling layers is actually due to Fukushima (1979).[CNN1] NNs with convolutions were later (1987) combined by Waibel with weight sharing and backpropagation.[CNN1a] Waibel called this TDNN and
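A minimal sketch of the two operations named above, in 1-D with illustrative names: a convolution that slides one shared kernel over the input (weight sharing), followed by a downsampling step:

    # Convolution: the same kernel is applied at every position,
    # so all positions share the same weights.
    def conv1d(xs, kernel):
        k = len(kernel)
        return [sum(kernel[i] * xs[p + i] for i in range(k))
                for p in range(len(xs) - k + 1)]

    # Downsampling: keep one summary value (here the max) per window.
    def downsample(xs, stride=2):
        return [max(xs[p:p + stride])
                for p in range(0, len(xs) - stride + 1, stride)]

    feat = downsample(conv1d([1, 2, 3, 4, 5, 6], kernel=[0.5, -0.5]))

Stacking such convolution/downsampling pairs is the basic pattern shared by Fukushima's architecture, Waibel's TDNN, and later CNNs, whatever the naming disputes.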
#HRL0
https://abruptearthchanges.com/2022/02/24/another-tv-anchor-collapses-while-pushing-vaccine-propaganda-coincidence/comment-page-1/#comment-52689
https://books.google.com/books?id=8vPGDwAAQBAJ&printsec=frontcover&vq=corrective#v=onepage&q&f=false
https://covid19-projections.com/#view-projections
https://en.wikipedia.org/wiki/Consciousness#Neural_correlates
https://en.wikipedia.org/wiki/Consciousness#The_problem_of_definition
https://en.wikipedia.org/wiki/Electromagnetic_theories_of_consciousness#Objections
https://en.wikipedia.org/wiki/Integrated_information_theory#Criticism
https://en.wikipedia.org/wiki/Models_of_consciousness#Dehaene–Changeux model
https://en.wikipedia.org/wiki/Models_of_consciousness#Electromagnetic_theories_of_consciousness
https://en.wikipedia.org/wiki/Models_of_consciousness#Functionalism
https://en.wikipedia.org/wiki/Models_of_consciousness#Multiple_drafts_model
https://en.wikipedia.org/wiki/Models_of_consciousness#Neural_correlates_of_consciousness
https://en.wikipedia.org/wiki/Models_of_consciousness#Orchestrated_objective_reduction
https://en.wikipedia.org/wiki/Models_of_consciousness#Sociology
https://en.wikipedia.org/wiki/Models_of_consciousness#Thalamic_reticular_networking_model_of_consciousness
https://en.wikipedia.org/wiki/Sentience#Digital_sentience
https://en.wikipedia.org/wiki/Sentience#Philosophy_and_sentience
https://en.wikipedia.org/wiki/Sentience#sentience
https://hudoc.echr.coe.int/eng-press#{
https://i0.wp.com/principia-scientific.com/wp-content/uploads/2021/08/Scientists-in-a-lab-UN.png?resize=520%2C223&ssl=1
https://people.idsia.ch/~juergen/2010s-our-decade-of-deep-learning.html#Sec.%201
https://people.idsia.ch/~juergen/artificial-curiosity-since-1990.html#sec1
https://people.idsia.ch/~juergen/critique-honda-prize-hinton.html#reply
https://people.idsia.ch/~juergen/deep-learning-miraculous-year-1990-1991.html#Sec.%200
https://people.idsia.ch/~juergen/deep-learning-miraculous-year-1990-1991.html#Sec.%201
https://people.idsia.ch/~juergen/deep-learning-miraculous-year-1990-1991.html#Sec.%2010
https://people.idsia.ch/~juergen/deep-learning-miraculous-year-1990-1991.html#Sec.%2011
https://people.idsia.ch/~juergen/deep-learning-miraculous-year-1990-1991.html#Sec.%2019
https://people.idsia.ch/~juergen/deep-learning-miraculous-year-1990-1991.html#Sec.%202
https://people.idsia.ch/~juergen/deep-learning-miraculous-year-1990-1991.html#Sec.%203
https://people.idsia.ch/~juergen/deep-learning-miraculous-year-1990-1991.html#Sec.%204
https://people.idsia.ch/~juergen/deep-learning-miraculous-year-1990-1991.html#Sec.%205
https://people.idsia.ch/~juergen/deep-learning-miraculous-year-1990-1991.html#Sec.%207
https://people.idsia.ch/~juergen/deep-learning-miraculous-year-1990-1991.html#Sec.%208
https://people.idsia.ch/~juergen/deep-learning-miraculous-year-1990-1991.html#Sec.%209
https://people.idsia.ch/~juergen/fast-weight-programmer-1991-transformer.html#sec2
https://people.idsia.ch/~juergen/lecun-rehash-1990-2022.html#addendum2
https://people.idsia.ch/~juergen/onlinepub.html#secBooks
https://principia-scientific.com/fda-just-called-out-faucis-cdc-for-massive-vax-coverup/#comment-67192
https://principia-scientific.com/sociology-of-scientific-knowledge-normal-science/#comment-57422
https://principia-scientific.com/sociology-of-scientific-knowledge-normal-science/#comment-57426
https://principia-scientific.com/sociology-of-scientific-knowledge-normal-science/#comment-57453
https://principia-scientific.com/sociology-of-scientific-knowledge-normal-science/#comment-57454
https://principia-scientific.com/sociology-of-scientific-knowledge-normal-science/#comment-57462
https://principia-scientific.com/sociology-of-scientific-knowledge-normal-science/#comment-57468
https://principia-scientific.com/sociology-of-scientific-knowledge-normal-science/#comment-57469
https://principia-scientific.com/sociology-of-scientific-knowledge-normal-science/#comment-57474
https://principia-scientific.com/sociology-of-scientific-knowledge-normal-science/#comment-57475
https://principia-scientific.com/sociology-of-scientific-knowledge-normal-science/#comment-57477
https://principia-scientific.com/sociology-of-scientific-knowledge-normal-science/#comment-57488
https://principia-scientific.com/sociology-of-scientific-knowledge-normal-science/#comment-57546
https://principia-scientific.com/sociology-of-scientific-knowledge-normal-science/#comment-57547
https://principia-scientific.com/sociology-of-scientific-knowledge-normal-science/#comment-57548
https://principia-scientific.com/sociology-of-scientific-knowledge-normal-science/#comment-57551
https://principia-scientific.com/sociology-of-scientific-knowledge-normal-science/#comment-57555
https://principia-scientific.com/sociology-of-scientific-knowledge-normal-science/#comment-57556
https://principia-scientific.com/sociology-of-scientific-knowledge-normal-science/#comment-57640
https://principia-scientific.com/sociology-of-scientific-knowledge-normal-science/#comment-57642
https://principia-scientific.com/sociology-of-scientific-knowledge-normal-science/#comment-57650
https://principia-scientific.com/sociology-of-scientific-knowledge-normal-science/#comment-57652
https://principia-scientific.com/sociology-of-scientific-knowledge-normal-science/#comment-57653
https://principia-scientific.com/sociology-of-scientific-knowledge-normal-science/#comment-57654
https://principia-scientific.com/sociology-of-scientific-knowledge-normal-science/#comment-57660
https://principia-scientific.com/sociology-of-scientific-knowledge-normal-science/#comment-57712
https://principia-scientific.com/sociology-of-scientific-knowledge-normal-science/#comment-57737
https://principia-scientific.com/sociology-of-scientific-knowledge-normal-science/#comment-57804
https://scc-usc.github.io/ReCOVER-COVID-19/#/
https://www.amazon.com/Vaccination-Silent-Killer-Present-Danger/dp/B000XZKQ0Q/ref=sr_1_fkmr1_1?dchild=1&keywords=Ida+Honorof%2C+Eleanor+McBean+1977+%22Vaccination+%3A+the+silent+killer.+A+clear+and+present+danger%22+Honor+Publications&qid=1591651878&sr=8-1-fkmr1#customerReviews
https://www.bloomberg.com/news/articles/2022-03-10/imf-no-longer-sees-russian-debt-default-as-an-improbable-event#:~:text=The%20International%20Monetary%20Fund%20joined,Kristalina%20Georgieva%20told%20reporters%20Thursday.
https://www.cnbc.com/2022/02/24/putin-ukraine-invasion-russias-ruble-hits-record-low-against-dollar.html#:~:text=Russia's%20ruble%20plunged%20Thursday%20as,regions%20in%20Donetsk%20and%20Luhansk.
https://www.faz.net/aktuell/feuilleton/forschung-und-lehre/die-welt-von-morgen/juergen-schmidhuber-will-hochintelligenten-roboter-bauen-13941433.html?printPagedArticle=true#pageIndex_2
https://www.forbes.com/sites/ericmack/2020/03/16/see-how-coronavirus-compares-to-other-pandemics-through-history/#152cfca37d1e
https://www.kmu.gov.ua/news/operativna-informaciya-pro-poshirennya-ta-profilaktiku-covid-19-18-2-22#:~:text=За%20добу%2017%20лютого%202022,усього%2031%20383%20042%20щеплення.
https://www.nature.com/articles/468760a#article-comments
https://www.nbcnews.com/news/world/live-blog/russia-ukraine-live-updates-n1289976/ncrd1289985#liveBlogCards
https://www.nytimes.com/live/2022/03/08/world/ukraine-russia-war?fbclid=IwAR0_SD9JW-_B0_uP0sr9hsHGcZTntKnpXABqqmoeT6j8ELQ0cF7KteiWmz4#biden-is-expected-to-ban-russian-oil-imports-into-the-united-states
https://www.nytimes.com/live/2022/03/08/world/ukraine-russia-war?smtyp=cur&smid=tw-nytimes#putin-isnt-crazy-the-cia-chief-says-but-hes-gotten-harder-to-reason-with
https://www.state.gov/briefings/department-press-briefing-february-3-2022/#post-311330-RussiaChina
https://www.theguardian.com/world/2022/feb/28/ukraine-russia-belarus-war-crimes-investigation-the-hague?utm_term=Autofeed&CMP=twt_gu&utm_medium&utm_source=Twitter#Echobox=1646072408
https://www.theguardian.com/world/2022/mar/09/britain-fears-russia-could-be-setting-stage-to-use-chemical-weapons?utm_term=Autofeed&CMP=twt_gu&utm_medium&utm_source=Twitter#Echobox=1646852242
http://www.scholarpedia.org/article/Deep_Learning#Backpropagation
#Human [psychology, sociology] - better concepts from technical market analysis?
#HW1
#HW2
#HW3
#Hypocrisy
#Hypothesized causes of the UWS (Puetz, Borchardt, Condie)
#I
#I25
..." (Wiki2023)
..." (Wiki2023)
..." (Wiki2023)
..." (Wiki2023)
#Ibn Khaldun
I built NNs whose outputs are changes of programs or weight matrices of other NNs[FWP0-2]
#IC14
#IC49
#I can't comment, as I have no knowledge
"... Consciousness, at its simplest, is sentience and awareness of internal and external existence.[1] However, its nature has led to millennia of analyses, explanations and debates by philosophers, theologians, linguists, and scientists. Opinions differ about what exactly needs to be studied or even considered consciousness. ..."(Wiki2023)
"... Daniel Dennett proposed a physicalist, information processing based multiple drafts model of consciousness described more fully in his 1991 book, Consciousness Explained. ..." (Wiki2023, full webPage Wiki2023)
#[Ideas, comments] that echo some of my own feelings
"... Electromagnetic theories of consciousness propose that consciousness can be understood as an electromagnetic phenomenon that occurs when a brain produces an electromagnetic field with specific characteristics.[7][8] Some electromagnetic theories are also quantum mind theories of consciousness.[9] ..." (Wiki2023)
#IF: control branching to an [operation, address]
"... Functionalism is a view in the theory of the mind. It states that mental states (beliefs, desires, being in pain, etc.) are constituted solely by their functional role – that is, they have causal relations to other mental states, numerous sensory inputs, and behavioral outputs. ..." (Wiki2023, full webPage Wiki2023)
#II
#III
I like the description in Wikipedia (Wiki2023):
#IM09
#image-caption is way too tall :
#Images : covax drives covid [case, death]s, plus it's own adverse effects
#Immediate borderlands : Poland, Romania, Bulgaria
in 1925.[LIL1-2]
in 1948.[ZU48]
In 1959, Robert Noyce presented a monolithic IC.[IC14]
In 1960, Kelley already had a precursor of the algorithm.[BPA] Furthermore, many
In 1964, Ray Solomonoff combined Bayesian (actually Laplacian[STI83-85]) probabilistic reasoning and theoretical computer science[GOD][CHU][TUR][POS]
In 1972, Amari reused the Lenz-Ising model to build a learning RNN, later sometimes called the Hopfield network or Amari-Hopfield Network.[AMH1-3]
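A minimal sketch of such a binary recurrent associative memory, with illustrative names (Hebbian outer-product storage, asynchronous sign updates); this is the generic textbook formulation, not the notation of [AMH1-3]:

    import random

    def store(patterns, n):
        # Hebbian rule: W += p p^T with zero diagonal
        W = [[0.0] * n for _ in range(n)]
        for p in patterns:
            for i in range(n):
                for j in range(n):
                    if i != j:
                        W[i][j] += p[i] * p[j]
        return W

    def settle(W, s, steps=100):
        # asynchronous updates; the state relaxes toward an equilibrium
        n = len(s)
        for _ in range(steps):
            i = random.randrange(n)
            s[i] = 1 if sum(W[i][j] * s[j] for j in range(n)) >= 0 else -1
        return s

    W = store([[1, -1, 1, -1]], n=4)
    print(settle(W, [1, 1, 1, -1]))   # relaxes to the stored pattern [1, -1, 1, -1]

The "settling into an equilibrium state" mentioned above for the Lenz-Ising model is exactly this relaxation dynamics.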
In 1987, NNs with convolutions were combined by Waibel with weight sharing and backpropagation.[CNN1a] Waibel did not call this CNNs but TDNNs.
in 1987[META1][META] long before Bengio
In 1991, one of them[FWP0-1]
In 1995, we already had an excellent neural probabilistic text model[SNT] whose basic concepts were
In 2001, we showed that LSTM can learn languages unlearnable by traditional models such as HMMs,[LSTM13]
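A standard example of such a language is a^n b^n (n a's followed by exactly n b's), which is non-regular: accepting it requires unbounded counting, beyond any finite-state model. A minimal checker, for illustration only (whether the cited experiments[LSTM13] used exactly this language is not claimed here):

    def is_anbn(s):
        count, seen_b = 0, False
        for c in s:
            if c == 'a':
                if seen_b:
                    return False      # an 'a' after a 'b' is illegal
                count += 1
            elif c == 'b':
                seen_b = True
                count -= 1
                if count < 0:
                    return False      # more b's than a's so far
            else:
                return False
        return count == 0             # equal numbers of a's and b's

    print(is_anbn('aaabbb'), is_anbn('aabbb'))   # True False

An LSTM that solves this task must in effect learn such a counter in its cell state.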
in 2007.[LSTM4][LSTM14]
In 2020, Imanol et al. augmented an LSTM with an associative fast weight memory.[FWPMETA7]
In addition, our team automatically evolved lots of additional LSTM variants and topologies already in 2009[LSTM7] without changing the name of the basic method.
In 1673, the already mentioned Gottfried Wilhelm Leibniz (called "the smartest man who ever lived"[SMO13])
in Sec. 2 and Sec. 3
in a row (15 May 2011, 6 Aug 2011, 1 Mar 2012, 10 Sep 2012).[GPUCNN5]
In both cases, learning fails (compare[VAN2]). This analysis led to basic principles of what
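The two failure cases can be shown numerically. In a 1-D linear recurrence, the backpropagated error picks up one factor of the recurrent weight per time step, so over T steps it scales like w^T; a toy sketch:

    # Error signals through 30 backprop-through-time steps:
    # |w| < 1 vanishes exponentially, |w| > 1 explodes exponentially.
    for w in (0.5, 1.5):
        g = 1.0
        for _ in range(30):
            g *= w
        print(w, g)   # 0.5 -> ~9.3e-10 (vanishes), 1.5 -> ~1.9e+05 (explodes)

In the multi-dimensional case the scalar weight is replaced by a product of Jacobians, but the exponential behavior is the same.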
#Inca
Includes variants of chapters of the AI Book.
#incorporate reader questions into theme webPages
In ad hominem style,[AH2-3]
in Neural Computation.[FWP1]
In fact, Hinton was the reviewer of a 1990 paper[ATT2]
#Influenza pandemics - Tapping, Mathias, and Surkan (TMS) theory
#informational noise suppression
#Initial observations
#Initial questions
#Initial setup
In particular, as mentioned in Sec. A,
in popular science venues without peer review? For example, the narrator of a popular 2018 Bloomberg video[VID2]
In references[FWPMETA1-5] since 1992, the slow NN and the fast NN
in space and time.[BB2][NAN1-4][NHE][HEL]
#Inspirations for this webPage
#Instructions
In summation, LBH have repeatedly chosen to ignore the previous well-known critiques[DLC][HIN][T20a] and deep learning surveys,[DL1-2] and ACM
#Interest rates, currency [DXY,CNYUSD]
internal representations in hidden layers of NNs.[RUM] But this was essentially just an experimental analysis of a known method.[BP1-2] And
#International market indexes [SP500, NASDAQ, SHCOMP, 10y T-bill]
intervals: just a few decades or centuries or at most millennia.[OMG1]
in the 1960s-70s, especially outside of the Anglosphere.[DEEP1-2][GD1-3][CNN1][DL1-2][T22]
In the same year of 1936, Emil Post published yet another independent universal model of computing.[POS]
in this context[ATT] (Sec. 4), and
#intra-Birkeland current, radial to axis of current (Donald Scott)
#[intra, extra]-cellular processes, [neuron, astrocyte]s
#[intra, inter]-cellular process
#intro
#Introduction
#Introduction: what does quantum physics add to our understanding to consciousness?
I offered the FWPs of 1991[FWP0-1] as an
I published one myself in the hopes of correcting the annals of history.[AC20]
is all about NN depth.[DL1]
is dominated by artificial neural networks (NNs) and deep learning,[DL1-4]
#Is LaMDA Sentient? — an Interview
is mirrored in the LSTM-inspired Highway Network (May 2015),[HW1][HW1a][HW3] the first working really deep
is now widely used for exploration in RL (e.g., Sec. C) and
"... Sociology of human consciousness uses the theories and methodology of sociology to explain human consciousness. The theory and its models emphasize the importance of language, collective representations, self-conceptions, and self-reflectivity. It argues that the shape and feel of human consciousness is heavily social. ..."(Wiki2023, full webPage Wiki2023
#Is the cure worse than the disease?
#Is the effectiveness of vaccines over-rated?
#Is there any biological plausibility?
it at the 1951 Paris AI conference.[AI51][BRO21][BRU4]
It did not cite the much earlier 1991 unsupervised pre-training of stacks of more general recurrent NNs (RNNs)[UN0-3]
"... The Neural correlates of consciousness (NCC) formalism is used as a major step towards explaining consciousness. The NCC are defined to constitute the minimal set of neuronal events and mechanisms sufficient for a specific conscious percept, and consequently sufficient for consciousness. In this formalism, consciousness is viewed as a state-dependent property of some undefined complex, adaptive, and highly interconnected biological system.[3][4][5] ..." (Wiki2023, full article: Wiki2023 - Neural_correlates_of_consciousness, also cited by Grossberg 2021)
It is essentially a feedforward version of LSTM[LSTM1] with forget gates.[LSTM2]
it). More on this under [T22].
it,[ACM16][FA15][SP16][SA17]
it used outer products between key patterns and value patterns (Sec. 2) to manipulate
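A minimal sketch of that outer-product mechanism, with illustrative names (F for the fast weight matrix, k/v/q for key, value, and query patterns); a toy formulation, not the exact update rule of [FWP0]:

    def outer(u, v):
        return [[ui * vj for vj in v] for ui in u]

    def write(F, k, v):
        # additive update: F <- F + v k^T stores the association k -> v
        U = outer(v, k)
        return [[F[i][j] + U[i][j] for j in range(len(F[0]))]
                for i in range(len(F))]

    def read(F, q):
        # retrieval: F q returns the value whose key matches q
        return [sum(F[i][j] * q[j] for j in range(len(q)))
                for i in range(len(F))]

    F = [[0.0, 0.0], [0.0, 0.0]]
    F = write(F, k=[1.0, 0.0], v=[2.0, 3.0])
    print(read(F, [1.0, 0.0]))   # [2.0, 3.0]

In the FWP setting a slow network outputs k and v at each step, so F is reprogrammed on the fly; this is the sense in which the 1991 work is claimed to anticipate the key/value attention of linear Transformers.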
It was published in 1991-92[UN1] when compute was about 1000 times more expensive than in 2006.
#IV
Ivakhnenko and Lapa in 1965[DEEP1-2][R8] (see Sec. II).
#IX
#Japan
#Jerry Tennant 16Jun2020 - Voltage and Regeneration, Electricity of Life
J. Schmidhuber (AI Blog, 2021). The most cited neural networks all build on work done in my labs. Foundations of the most popular NNs originated in my labs at TU Munich and IDSIA. Here I mention: (1) Long Short-Term Memory (LSTM), (2) ResNet (which is our earlier Highway Net with open gates), (3) AlexNet and VGG Net (both building on our similar earlier DanNet: the first deep convolutional NN to win
J. Schmidhuber (Blog, 2000). Most influential persons of the 20th century (according to Nature, 1999). The Haber-Bosch process has often been called the most important invention of the 20th century[HAB1]
John Atanasoff (the "father of tube-based computing"[NASC6a]).
#John Taylor 2006 The Mind: A users manual
Joseph[R61]
J. Schmidhuber (AI Blog, 2020). 30-year anniversary of planning & reinforcement learning with recurrent world models and artificial curiosity (1990). This work also introduced high-dimensional reward signals, deterministic policy gradients for RNNs, the GAN principle
J. Schmidhuber (AI Blog, Nov 2020). 15-year anniversary: 1st paper with "learn deep" in the title (2005). Our deep reinforcement learning & neuroevolution solved problems of depth 1000 and more.[DL6] Soon after its publication, everybody started talking about "deep learning." Causality or correlation?
#Jumping off the cliff and into conclusions
Jung & Oh in 2004[GPUNN]). A reviewer called this a
#Jupiter
#Jupiter's The Great Red Spot, Eye of the Storm, Part 6 Thunderblog
#KAE96
#KAL59
Kelley already had a precursor thereof in the field of control theory;[BPA] see also later work of the early 1960s.[BPB][BPC][R7]
#Key files
#Key [results, comments]
#knowledge, letters
#KNU
#KO0
#KOH82
#L20
#L79
#L84
#L86
#LA14
lab for decades[AC][AC90,AC90b]) will quickly improve themselves, restricted only by the fundamental limits of computability and physics.
#Lamarckian versus Mendellian heredity, spiking MindCode as special case
#laminar computing
#LAMINART vison, speech, cognition
language modeling tasks.[FWP6]
languages;[LSTMGRU2] they
#Large Scale Wind Structures, Eye of the Storm, Part 5 Thunderblog
later in 1982[BP2] and
later our Highway Nets[HW1-3] brought it to feedforward NNs.
layers (already containing the now popular multiplicative gates).[DEEP1-2][DL1-2] A paper of 1971[DEEP2]
layers of neurons or many subsequent computational stages.[MIR]
layers.[DEEP1-2] Their activation functions were Kolmogorov-Gabor polynomials which include the now popular multiplicative gates,[DL1-2]
#lbhacm
LBH & co-authors, e.g., Sejnowski[S20] (see Sec. XIII). It goes more or less like this: "In 1969, Minsky & Papert[M69]
LBH and their co-workers have contributed certain useful improvements of existing deep learning methods.[CNN2,4][CDI][LAN][RMSP][XAV][ATT14][CAPS]
LBH claim to "briefly describe the origins of deep learning"[DL3a] without even mentioning the world
LBH started talking about "deep learning ... moving beyond shallow machine learning since 2006",[DL7] referring to their unsupervised pre-training methods of 2006.
LBH[DL3,DL3a] did not cite either.
LBH, who called themselves the deep learning conspiracy,[DLC][DLC1-2]
#LE18
#learning and development
learning RNNs. This, however, was first published many decades later,[TUR1] which explains the obscurity of his thoughts here.[TUR21]
learning.[HIN]
learn to count[LSTMGRU2] nor learn simple non-regular
#LEC
#LECP
LeCun also listed the "5 best ideas 2012-2022" without mentioning that
LeCun et al. neither cited the origins[BP1] (1970) of this
#LEI07
#LEI21
Leonardo Torres y Quevedo (mentioned in the introduction) became
#Let the machines speak
[1] Ralph Nelson Elliott. "The Wave Principle," Pages 3-4. Lula Press, 2019
Apparent failures of essentially ALL [medical, scientific] experts, and the mass media?
Apparent successes of the [medical, scientific] experts?
Part 1 Thunderblog 11May2016
Part 2 Thunderblog 21May2016
Part 3 Thunderblog 28May2016
Part 1 Arches National Monument, Thunderblog 12Feb2018