#]
#] *********************
#] "$d_web"'My sports & clubs/neural- VVTNS neurosci/0_VVTNS notes.txt'
# www.BillHowell.ca 05Jun2024 initial
# view in a text editor, using a constant-width font (eg courier), tabWidth = 3 :
#] Author Title...
#48************************************************48
#24************************24
# Table of Contents, generate with :
# $ grep "^#]" "$d_web"'My sports & clubs/neural- VVTNS neurosci/0_VVTNS notes.txt' | sed "s|^#\]|\t|" >"$d_web"'My sports & clubs/neural- VVTNS neurosci/0_VVTNS notes TblOfContents.txt'
#24************************24
#] [toRead, revisit]*[paper, email, vid, etc]s : use |*| anywhere in the doc - pulls out the line
# $ grep "\(|\*|\)" "$d_web"'My sports & clubs/neural- VVTNS neurosci/0_VVTNS notes.txt' | sed 's|^#\] ||'
**************************
#24************************24
# Setup, ToDos, David Hansel, organiser
#24************************24
# Howell questions for future presentations :

Stephen Grossberg (work from 1956, at high school, to the present at Boston U, etc) :
1) striving for successful behaviour in [noisy, surprising] environments : stability vs plasticity
	[memory, parameter]s are robust over a lifetime where they continue to be valid,
	but can [learn, intentionally] forget if no longer relevant?
	cooperative-competitive fast learning (sometimes one-shot)
2) resonance - important to many of Grossberg's concepts (Adaptive Resonance Theory (ART), consciousness, etc, etc)
3) limitations of statistical approaches (eg [Machine Learning (ML), Information Theoretics]) for understanding :
	system [mechanisms, cause-effect, identification, prediction, control theory, optimisation]
	[architecture, function, process]s
	but overall very helpful, and perhaps essential for "lighting up" research path options
4) what is a spike? this is one of the key starting problems for several of my projects
5) how can one use large systems (eg Large Language Models) built from [Neural Net, statistical (eg [ML, information theoretic])] approaches
	to [build, back-construct] large systems of mechanistic models of neuron[, small group, module, system]s?

#08********08
#] ??Nov2024

#08********08
#] ??Nov2024

#08********08
#] ??Nov2024

#08********08
#] ??Nov2024

#08********08
#] ??Nov2024

#08********08
#] ??Nov2024

#08********08
#] ??Nov2024

#08********08
#] 24Nov2024 VVTNS Memming Park, Champalimaud Foundation "Back to the Continuous Attractor"
	09:00 27Nov2024

#08********08
#] 30Oct2024 Karel Svoboda, Allen Institute, Illuminating synaptic learning
	Allen Institute for Neural Dynamics, Seattle
	Abstract: How do synapses in the middle of the brain know how to adjust their weight to advance a behavioral goal (i.e. learning)? This is referred to as the synaptic 'credit assignment problem'. A large variety of synaptic learning rules have been proposed, mainly in the context of artificial neural networks. The most powerful learning rules (e.g. back-propagation of error) are thought to be biologically implausible, whereas the widely studied biological learning rules (Hebbian) are insufficient for goal-directed learning. I will describe ongoing work focused on understanding synaptic learning rules in the cortex in a brain-computer interface task.

	3-factor learning - I didn't understand this
	conditioned neuron (CN) for a task - most active; other neurons very sparsely active!!!!
	CN selected because inactive - can adapt easily?
	use calcium imaging rather than spikes to follow learning
	also [photostim, cholostim] indicators for
		delta(weights) = f(fi)*f(rj)*P*Si
	changes in synaptic learning are consistent with 3-factor learning
	arrive at [, anti-]Hebbian learning
	a system to explore synaptic learning ...
	Mechanisms underlying 3-factor learning
		actively looking at this, exciting; also neuromodulators [dopamine, NE (norepinephrine)]
		Locus Coeruleus axon activity correlates to ???
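	+-----+
	a minimal toy sketch of the 3-factor form noted above, delta(weights) = f(pre)*f(post)*modulator - NOT Svoboda's model, and every [function, number] below is invented for illustration :

# toy three-factor (neo-Hebbian) update: a Hebbian pre*post term gated by a
# global modulatory third factor M (eg reward or a neuromodulator level)
import numpy as np

rng = np.random.default_rng(0)
n_pre, n_post = 20, 5
W = 0.1 * rng.standard_normal((n_post, n_pre))       # synaptic weights (toy scale)

def three_factor_update(W, pre, post, M, eta=0.01):
    """dW_ij = eta * M * post_i * pre_j  (identity f's, for simplicity)."""
    return np.clip(W + eta * M * np.outer(post, pre), -1.0, 1.0)

for trial in range(200):
    pre = rng.poisson(2.0, n_pre)                     # presynaptic activity (factor 1)
    post = W @ pre                                    # postsynaptic activity (factor 2)
    M = 1.0 if post.sum() > 5.0 else -0.1             # modulatory signal (factor 3)
    W = three_factor_update(W, pre, post, M)

print("mean weight after learning:", round(float(W.mean()), 3))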
	+-----+
	Questions (Ran Darshan moderator) :
	Raoul-Martin Mersheim... : I didn't catch the question...
	Artem : optical stimulation of cells - what criterion to [+, -] modulate adjacent cells? how much are the results applicable to different parts of the brain?
	Karel : less sensitive to ???, will work on it?
		Finklestein is participating, could say more
		beautiful work in hippocampus, non-Hebbian, much faster action
		eg place fields at the cellular level; hippocampus similar
	Hsu Ching-Ling, Taiwan : have you looked at non-linear responses?
		the feedback signal is a third factor, but is the mapping more than that, an artifact of the experiment design?
	Karel : non-linearities are always there, layer [2,3] ?in hippocampus?
		revolution in [V, GABA, ???] measurements; V may lead to a whole new range of experiments
	Huriye Atilgan : how specific to the motor cycle? inter-hemispherical communication?
	Karel : don't know of inter-hemispherical - but there is an effect?
		cellular substrates are there for other areas, will look at prefrontal cortex
	Rainer Engelken : huge number of other learning rules
	Karel : also many versions of each learning rule
		must figure out antecedents - better methods required, signal/noise
	Fereshteh Lagzi : equations (r - r_bar)
	Karel : activity change versus performance bar
	Bharath Talluri : changes as the task changed? LC (Locus Coeruleus) chronic fluctuations?
	Karel : interesting history - in the old days it was difficult to see changes with learning
		now changes in movement of a particular body part are accentuated
		haven't seen large tonic changes, difficult to see
	David Hansel : changes in connections at which level? possible to do at the level of the synapse?
		the biochemistry is 3 parts - very complicated
	Karel : synapses, but related to the connection, not the synapse: "effective connectivity"
		going well with the activity of individual synapses, maybe 1000 synapses [pre, post]-synaptic, exciting
		may roll out over the next couple of years
		biochemistry - only see an averaging correlation, changes tuning of the post-synaptic [neuron]
#08********08
#] 03Sep2024 Paul Cisek 13Mar2024 Rethinking behavior in the light of evolution
	Paul Cisek, University of Montreal
	https://www.wwtns.online/past-seminars-2023-2024
	https://www.youtube.com/watch?v=TBr-eSpUdIU
	Abstract: In theoretical neuroscience, the brain is usually described as an information processing system that encodes and manipulates representations of knowledge to produce plans of action. This view leads to a decomposition of brain functions into putative processes such as object recognition, working memory, decision-making, action planning, etc., inspiring the search for the neural correlates of these processes. However, neurophysiological data do not support many of the predictions of these classic subdivisions. Instead, there is divergence and broad distribution of functions that should be unified, mixed representations combining functions that should be distinct, and a general incompatibility with the conceptual subdivisions posited by theories of information processing.
	In this talk, I will explore the possibility of resynthesizing a different set of functional subdivisions, guided by the growing body of data on the evolutionary process that produced the human brain. I will summarize, in chronological order, a proposed sequence of innovations that appeared in nervous systems along the lineage that leads from the earliest multicellular animals to humans. Along the way, functional subdivisions and elaborations will be introduced in parallel with the neural specializations that made them possible, gradually building up an alternative conceptual taxonomy of brain functions. These functions emphasize mechanisms for real-time interaction with the world, rather than for building explicit knowledge of the world, and the relevant representations emphasize pragmatic outcomes rather than decoding accuracy, mixing variables in the way seen in real neural data. I suggest that this alternative taxonomy may better delineate the real functional pieces into which the brain is organized, and can offer a more natural mapping between behavior and neural mechanisms.

	08Sep2024 Howell: This seems so much like a watered-down version of Stephen Grossberg's work over the last 60+ years. ?Control theory as per Paul Werbos, and John Taylor's application of that to consciousness?

	images saved to "$d_web"'My sports & clubs/neural- VVTNS neurosci/240313 Cisek: behavior & evolution/'
	the presentation has a transcript
	my notes are full of [error, omission, incomplete]s

	Functional decomposition of behavior: [perception, cognition, action]
	Classical [functional decomposition, modularity, serial process, etc] fail when looking at the brain
	06:31 unified vs distributed, distinct vs mixed, serial vs parallel
	07:07 Conceptual challenges
		binding problem
		grounding problem, eg :
			framing problem of classical AI (McCarthy & Hayes 1969)
			Chinese room argument (Searle 1980)
			symbol grounding problem (Harnad 1990)
			lack of understanding in LLMs (Pezzulo, Parr, Cisek, Clark, Friston 2024)
	07:55 Possible conclusions
		not looking at the data correctly
		need more elaborate theories
		the traditional model is just wrong - functional decomposition is not the right way
			the vocabulary remains (Howell: a trap)
		what is the alternative to functional decomposition?
	10:40 Cisek makes 2 controversial claims :
		claim 1. The brain is NOT best described as an "information processing system"; it is a "feedback control system"
			the task of the brain is not "transform input into output", but "control the input via output through the environment"
			it's a special case, a more precise description
			actually, claim #1 is not controversial at all
			(see the toy control-loop sketch after the 16:48 block below)
		claim 2. Accepting claim 1 changes everything :
			the functional decomposition, the representations and mechanisms, the questions we ask
	12:19 How to subdivide the problem?
		may give rise to solutions to problems
		instead of defining the subdivisions, let's first consider a strategy for doing so (12:28)
		Howell: many [classic, stat, infTheor, CompIntel] clustering approaches, including [SOM, SVM, ART, k-means, Deep Learning, etc, etc]
		we don't know the real functional distinctions, but we know they came from evolution
		"Nothing in biology makes sense except in the light of evolution" Theodosius Dobzhansky 1973
		proposal: follow in the footsteps of evolution
	13:14 Dispelling some myths
		myth: evolution is nature's way of finding optimal solutions to problems posed by the world
		reality: evolution doesn't identify the problems at all.
			Instead, it produces variations of a functioning ancestor, and favors those which happen to accomplish something that used to be a problem.
		for a variation to be selected, it must first be possible
		the genome is not a blueprint, it is a recipe
			that is limited, can [elaborate, elongate, differentiate]
		mechanisms are highly constrained by their history <- Good news!!
			gives us a view of antecedents, taxonomy, ...
	15:51 Following in the footsteps of evolution
		Instead of defining putative systems like "cognition", "attention", "decision-making" etc, based on ideas about the human mind, we can consider how mechanisms were differentiated and elaborated over evolutionary time
		Strategy: "Phylogenetic refinement" (Cisek 2019)
			applying the comparative method of biology to theoretical neuroscience
			infer the sequence of innovations using the comparative method
			to understand some [structure, function] "X", always ask 2 questions :
				1. what was X before it became X?
				2. how did X expand behavioral capacity?
			always follow a chronological sequence
	16:48 A brief walk through evolutionary history
		240313 Cisek 16:48 phylogenetic tree path to human.png
		general consensus that life started with closed autocatalytic systems (enzymes, ?)
		Origins of life: autocatalytic systems [Eigen, Kauffman, Steele, Copely, Joyce].png
		controls [internal, external] to the membrane
		"Behavioral control system"
			the task of behaviour: to complement the dynamics of the environment such that the organism-environment system flows toward desirable states
		[inhibition, excitation, feedback] systems
		"What we have is a circuit ... the motor response determines the stimulus, just as truly as sensory stimulus determines movement." (Dewey 1896 p363)
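	+-----+
	a toy sketch of claim 1 and the Dewey "circuit" above - mine, not Cisek's; the [gain, drift, target] values are arbitrary. The agent does not map input to output; it emits actions chosen to drive the sensed input toward a desirable state through the environment :

# closed-loop "behavioral control system": the action is computed from the error
# between desired and sensed states, and the environment closes the loop back
# onto the senses (with its own drift)
import math

desired = 1.0                        # desirable state of the sensed variable
sensed = 5.0                         # what the organism currently senses
drift = 0.3                          # the environment's own dynamics

def act(sensed, desired, gain=0.5):
    # "control the input via output through the environment"
    return gain * (desired - sensed)

for t in range(50):
    action = act(sensed, desired)
    sensed = sensed + action + drift * math.sin(0.3 * t)    # environment responds

print("sensed state after 50 steps:", round(sensed, 2), "; desired state:", desired)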
	20:32 neurons first developed from the exterior layer of cells
		240313 Cisek 20:32 Mobile filter feeding [Jekely, Tosches, Arendt, Humphries, Sims, Hills].png
		early organisms had [apical, blastoporal] nervous systems, moved via "Levy walks"
		dopamine regulated "exploit versus explore"
	22:05
		240313 Cisek 22:05 slightly more elaborate behavioral control system.png
		without the external environment, you lose the adaptive structure of behavior
		240313 Cisek 22:05 Dewey 1896 circuit - motor response detrmines stimula, sensory stimulus determines movement.png
	22:37 nervous system development from cnidarians [jellyfish, anemones, etc] to humans
	25:09 Chordates
		next step: elongated, dorsal structure - hypothalamus connected to the spinal cord
		240313 Cisek 25:09 hypothalamus attached to spinal chord [Lacalli, Holland, Vopalensky].png
	25:22 also have an "escape circuit" - visual shadow, ?tectum?
		then bilateral eyes
		then developed two systems : can see this in lamprey, zebrafish, even mammals
			ipsi-lateral avoidance : if threat @left turn right, if @right turn left, if both - average
			contra-lateral approach : winner-take-all, because averaging doesn't work (toy sketch below)
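	+-----+
	toy numbers (not from the talk) for the averaging-vs-winner-take-all point above : averaging two "turn away" commands still steers safely between two threats, but averaging two approach targets points between them and reaches neither, so approach needs a winner-take-all choice :

# two stimuli at -30 and +30 degrees, with slightly different saliences
import numpy as np

targets = np.array([-30.0, 30.0])                 # stimulus directions (degrees)
salience = np.array([0.9, 1.1])

# avoidance: threat at -30 -> turn right (+30), threat at +30 -> turn left (-30);
# averaging the two turn commands ~ go straight between the threats, still a usable escape
turn_cmds = np.array([+30.0, -30.0]) * salience
avoid_heading = turn_cmds.mean()

# approach by averaging: the heading lands between the two targets, reaching neither
approach_avg = np.average(targets, weights=salience)

# approach by winner-take-all: commit to the most salient target
approach_wta = targets[np.argmax(salience)]

print("avoidance, averaged turn commands :", round(float(avoid_heading), 1), "deg")
print("approach by averaging             :", round(float(approach_avg), 1), "deg (between targets)")
print("approach by winner-take-all       :", approach_wta, "deg (one target)")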
	28:58 Conclusions, so far
		a different model of brain function :
			not knowledge acquisition, but control of interaction
			not serial computations, but nested feedback control loops
			representations are not "descriptive", but "pragmatic"
		a different conceptual taxonomy :
			not BEHAVIOR -> [perception, cognition, action] and on down the tree
			instead : life -> replication -> metabolism -> physiology -> BEHAVIOR -> [state, sensorimotor] control
		240313 Cisek 29:46 conclusion of different [model of brain function, conceptual taxonomy].png
		maps well onto the nervous system
	29:46 Vertebrates
		240313 Cisek 29:46 vertebrates [Puelles, Grillneer, Dubuc, Ginsburg, Jablonka].png
	30:04 Telencephalon - kind of an extension of the hypothalamus
		consists of the [, sub]pallium; a new type of dopamine in vertebrates -> learn state-driven actions
		240313 Cisek 30:04 telencephalon [Puelles, Grillneer, Dubuc, Ginsburg, Jablonka].png
	31:14 pallium -> ventro-lateral systems
		local [learn, exploit] : local exploitation
		240313 Cisek 31:14 local exploitation, key stimuli [Puelles, Grillneer, Dubuc, Ginsburg, Jablonka, Butler, Jacobs, Salas, Rodriguez, Swanson].png
	31:35 Long range exploration, medial pallium
		eg [odor gradient, landmark]s to move to new locations
		will become the hippocampus; spatial maps of the tectum
		13Sep2024 Howell: ended viewing at 31:59
	31:59 Predictive control
		240313 Cisek 31:37 Long range exploration [Puelles, Grillneer, Dubuc, Ginsburg, Jablonka, Butler, Jacobs, Salas, Rodriguez, Swanson].png
		[odour gradients, landmarks] to move to new locations
		this part will become the hippocampus
		works together with the spatial maps of the tectum, which govern moment-to-moment behaviour
		240313 Cisek 31:54 approach & avoidance, spatial maps [Puelles, Grillneer, Dubuc, Ginsburg, Jablonka].png
		?draw invertebrates? added a cerebellum, which ...
		240313 Cisek 32:03 cerebellum predictive control (forward model, consequences of action) [Bell, Northcutt, Niewenhuys, Montgomery, Bodznick].png
		240313 Cisek 32:40 water-to-land transition, expansion of visual range [Puelles, Aboitiz, Swanson, McIver, Finlay, Gonzales, Striedler, Jacobs].png
	+-----+
	Has Grossberg cited Cisek?
	$ find "$d_neural" -type f \( -name "*.html" -o -name "*.txt" \) | grep --invert-match "z_Old" | grep --invert-match "z_Archive" | sort | tr \\n \\0 | xargs -0 -IFILE grep --with-filename --line-number 'Cisek' "FILE" >"$d_web"'My sports & clubs/neural- VVTNS neurosci/240908 find Cisek in d_neural.txt'
	>> zero hits!! (note that this notes file is not in d_neural)
	# not used : | sed "s|$d_Qroot||;s|:.*||" | sort -u >"$d_temp"'find-grep-sed temp.txt'
	maybe the find didn't work? try Grossberg as a check :
	$ find "$d_neural" -type f \( -name "*.html" -o -name "*.txt" \) | grep --invert-match "z_Old" | grep --invert-match "z_Archive" | sort | tr \\n \\0 | xargs -0 -IFILE grep --with-filename --line-number 'Grossberg' "FILE" >"$d_web"'My sports & clubs/neural- VVTNS neurosci/240908 find Cisek in d_neural.txt'
	>> 2,288 hits: so the find is working

#08********08
#] 26Jun2024 Eve Marder: [long-term, hidden] changes in bio-NNs
	Eve Marder, Brandeis U, 09:00
	"Cryptic (hidden) changes that result from perturbations and climate change shape future dynamics of degenerate neurons and circuits"
	>> I registered, but forgot - look at it later?
	>> nyet, not posted on the webSite, just a blurb
	Abstract: A fundamental problem in neuroscience is understanding how the properties of individual neurons and synapses contribute to neuronal circuit dynamics and behavior. In recent years we have done both computational and experimental studies that demonstrate that the same physiological output can arise from multiple, degenerate solutions, and that individual animals with similar behavior can nonetheless have quite different sets of underlying circuit parameters. Most recently, we have been studying the resilience of individual animals to perturbations such as temperature and high potassium concentrations. This has revealed that extreme environmental experiences can produce long-term changes in circuit performance that can be hidden, or "cryptic", unless the animals are again challenged or perturbed. Our present experimental and computational work is designed to understand differential resilience in natural, wild-caught animals in response to climate change, and shows long-lasting influences of the animals' temperature history.
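	+-----+
	a toy illustration (mine, not one of Marder's models) of "the same physiological output can arise from multiple, degenerate solutions" : in a passive membrane with two leak conductances, only the sum g1+g2 enters the voltage equation, so very different (g1, g2) pairs give identical responses and cannot be distinguished from the output alone :

# passive membrane  C dV/dt = -(g1+g2)*(V - E) + I  with two "parameter sets"
import numpy as np

def simulate(g1, g2, C=100.0, E=-65.0, I=200.0, dt=0.1, steps=2000):
    """Voltage trace for a step current (toy units: pF, nS, pA, ms, mV)."""
    V, trace = E, np.empty(steps)
    for k in range(steps):
        V += dt * (-(g1 + g2) * (V - E) + I) / C
        trace[k] = V
    return trace

trace_a = simulate(g1=1.0, g2=9.0)    # parameter set A
trace_b = simulate(g1=5.0, g2=5.0)    # parameter set B: very different split, same sum

print("max |difference| between the two voltage traces:",
      float(np.max(np.abs(trace_a - trace_b))))     # 0.0 : degenerate solutions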
#08********08
#] 19Jun2024 Yasaman Bahri, Google DeepMind: scaling, data manifolds, and universality
	"Learning and prediction in artificial deep neural networks: scaling, data manifolds, and universality"
	Abstract: Developing scientifically-grounded theories for representation learning and generalization in artificial deep neural networks remains a grand challenge of fundamental interest to theoretical neuroscience and machine learning. I will discuss our work on one facet of this challenge - namely understanding generalization or "scaling laws" in learned neural networks as a function of basic control variables. I'll discuss a taxonomy we develop that classifies different regimes of scaling behavior. We identify regimes where generalization exhibits universal scaling behavior and others where it can be traced back to properties of the data and neural architecture. The theoretical analysis is enabled by leveraging exactly solvable models of deep neural networks that arise naturally in the limit of large hidden layers. Along the way, I'll also discuss our work on these theoretical models, which have been a useful starting point for theoretical descriptions of neural network dynamics. Finally, I'll discuss our findings connecting generalization in neural networks to properties of the learned data manifold. I'll close by discussing future directions and new hypotheses that emerge from our findings.
	>> see dir "$d_web"'My sports & clubs/neural- VVTNS neurosci/240619/'

	width (shallowNN) versus depth (deepNN)
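	+-----+
	a sketch of what a generalization "scaling law" fit looks like, on synthetic numbers - nothing here is from Bahri's slides; the [exponent, offset, dataset size]s are invented :

# fit L(n) = a * n**(-alpha) + c to synthetic test-loss vs dataset-size points;
# after subtracting the irreducible loss c, log(L - c) is linear in log(n)
import numpy as np

n = np.array([1e3, 3e3, 1e4, 3e4, 1e5, 3e5])                  # dataset sizes
loss = 5.0 * n**(-0.35) + 0.02                                # "measured" test losses
loss *= 1.0 + 0.02 * np.random.default_rng(1).standard_normal(len(n))   # noise

c_guess = 0.02                                                # assumed irreducible loss
slope, intercept = np.polyfit(np.log(n), np.log(loss - c_guess), 1)

print("fitted scaling exponent alpha ~", round(float(-slope), 2))   # should recover ~0.35
print("fitted prefactor a ~", round(float(np.exp(intercept)), 2))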
	&&&&&&&&
	Howell questions (text to Ran Darshan) :

	Does your work help to explain the very [broad, shallow?] 6-layer cortex structure as a general architecture suitable for higher-level brain function? (see slide "Interlude: emergent linear models of NNs in the large-N limit")
	... www.BillHowell.ca retired, Natural Resources Canada

	I dropped this part of the question, which was prescient given her presentation :
	Is there a point (depth of complexity) at which the width of a NN becomes "universal", making commonality an exploitable advantage?

	No time to ask this question :
	Your comment that DNNs behave as kernels reminds me of Johannes Suykens' work on a "duality" between DNNs and kernels. Do you have any comments on his work, or on conditions where one approach (kernel or connectionist) is preferable to the other? I think you addressed that in your presentation in some ways, but it is not clear to me.
	... www.BillHowell.ca retired, Natural Resources Canada

#08********08
#] 05Jun2024 Stephanie Palmer: ML tools in neuroscience, complex natural behaviour
	David Hansel (organized/ran the zoom?)
	Moderator : Ran Darshan
	+-----+
	Presentation :
	black box optimization
	diffusion equation
	Gaussian analysis - but often inadequate?
	small groups of retinal cells get "close to the bound"
	how does optimal prediction work in a ?[chaotic, non-linear, something]? environment?
	VIB (Variational Information Bottleneck)
		info - like an auto-encoder
	problems where they couldn't compute anything
	+-----+
	Questions :
	Nasim Bolhasani : why did you apply the Ising model? Markovian chain?
	Answer : yes
	Sarah Solla : predictive entropy ideas from the past?
	Answer : exactly right, built on Fishbie, Dobiolic
		going to self-motivate? or something, multiplexed problems
		weights keep strong, lose weaker (Fisher info add-in?)
		wonderful
	Alexander Rivkind : for neural data, are there any ways to get a 2nd component of behaviour etc?
	Answer : ?missed the answer? haven't directly modified the data to play with it
	Ran Darshan : can you get "off the line of optimality"?
	Answer : yes, make it non-Markovian; fun to see what happens and conjecture why
	Jennifer Fenton : predictions in other sensory modalities?
	Answer : that is our long-term objective
	David Hansel : the Jij [couplings] - we don't know the real connections
	Answer : correct, that question keeps us up at night
		we ask if the correlations map onto a real neural [connection, mechanism]
		can experiment pharmacologically
	&&&&&&&&
	Howell :
	+--+
	Your introduction resonates with Stephen Grossberg's work from before the 1970s to the present, for example :
	1) the drive for successful behaviour in [noisy, surprising] environments
	2) resonance - Palmer mentions it, but this is very core to Grossberg's concepts (Adaptive Resonance Theory (ART), consciousness, etc, etc).
	You are looking at the challenge as an ML (statistical) challenge, but have you worked on going beyond that, to [architecture, function, process] in a manner that is [similar to, different from] Grossberg and others?
	+--+
	Fascinating talk, thank-you.
	www.BillHowell.ca, retired, member of INNS, IEEE-CIS

#08********08
#] 22May2024 Yu Hu "random & covariance spectrum of RNNs"
	Yu Hu, Hong Kong UofS&T, 11:00 EDT
	"How random connections and motifs shape the covariance spectrum of recurrent network dynamics"
	The lecture will be held on zoom on May 22, 2024, at 11:00 am EDT
	https://u-paris.zoom.us/j/87296214746?pwd=V3oyZ1JaQUxWVWp1TS9LS0ZiM0xiQT09
	Id: 872 9621 4746	Passwd: 688274
	Abstract: Theoretical neuroscience aims to understand the relationship between neuron dynamics and connectivity in recurrent circuits. This has been intensively studied at the local level, where dynamics is described by pairwise correlations. Recent advances in simultaneous recordings of many neurons have allowed researchers to address the question at the global level, such as for the dimensionality of population dynamics. Our work contributes to this effort by analyzing the impact of connectivity statistics, including certain motifs, on the bulk and outlier covariance eigenvalues. By considering linearized dynamics around a steady state, we obtained analytically the covariance spectrum, which exhibits a signature long tail robust to model variants and matches zebrafish calcium imaging data. This provides a local circuit mechanism for shaping the geometry of population dynamics and a quantitative benchmark for interpreting data.
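	+-----+
	a sketch of the kind of object in the abstract, under a generic linear-response assumption that may differ from Yu Hu's exact model : for fluctuations x = W*x + noise around a steady state, unit-variance white input gives the long-run covariance C = (I - W)^(-1) * (I - W)^(-T), and the eigenvalue spectrum of C for a random W already shows a heavy right tail; [N, g] below are arbitrary :

# covariance spectrum of a linear recurrent network with i.i.d. Gaussian
# connectivity of strength g (< 1 for stability), scaled by 1/sqrt(N)
import numpy as np

rng = np.random.default_rng(2)
N, g = 1000, 0.6
W = (g / np.sqrt(N)) * rng.standard_normal((N, N))

A = np.linalg.inv(np.eye(N) - W)          # linear response matrix
C = A @ A.T                               # stationary covariance for white input

eig = np.sort(np.linalg.eigvalsh(C))[::-1]
print("top covariance eigenvalues:", np.round(eig[:5], 2))
print("largest / median eigenvalue (long-tail signature):",
      round(float(eig[0] / np.median(eig)), 1))

# enddoc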