Howell: priority NN projects
Table of Contents
Introduction
My current priorities are the following closely inter-related projects, which I hope will lead in the direction that interests me.
Note that apart from Grossberg's work, which is very well supported by decades of [psychology, neuro-biological] data, my projects are strongly unconventional "what if?" speculations. They do not even qualify as hypotheses yet.
The ideas are not my own; they came from others over the years. I'm convinced that all of my projects below have already been researched by others for decades, but I will continue to think a lot more about them before I do a serious literature review. That's one (sometimes embarrassing) way to learn fast - to see what others have been able to do with ideas that interest me.
The real hope is that young researchers will come up with new ideas (Tsvi Achler 12Feb2024).
images :
- overall image of themes supporting each other
[why, how] does a neuron spike? (or not)
The main objective is to explore an [alternative, biologically-plausible] "filtering" of spikes seen by a given neuron, which differs from, for example, "leaky integrate-and-fire" (LIF) SNNs.
The idea here is to add a fast "filtering" stage to provide the ability to detect the [caller identity, content-in-the-noise] of a spike train from another neuron. In that way, within the overall spike train it is known: WHICH "upstream" neuron sent WHICH spikes, at WHICH precise time. Un[identified, allocated] spikes are ignored as "noise" (for now, probably for [learning, phase changes, etc] later).
- membrane potential threshold is the conventional spike-driving mechanism that is assumed.
- the "callerID" approach would also provide a basis for assuming that a neuron will fire when incoming spike trains confirm a firing condition based on multiple input neurons, rather than only when a threshold potential is reached.
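The "callerID" idea above can be sketched in code. This is a minimal, purely illustrative assumption: suppose each upstream neuron has a characteristic inter-spike-interval signature, and the filter assigns an incoming spike train to the upstream neuron whose signature it matches, treating unmatched trains as noise. All names (`caller_id`, `neurIn_A`, etc), the signature format, and the matching rule are invented for illustration; nothing here is an established mechanism.

```python
# Hypothetical "callerID" filter: each upstream neuron is assumed to have a
# characteristic inter-spike-interval signature (in ms). Incoming spike times
# are matched against each signature; unmatched trains are treated as noise.

def matches_signature(spike_times, signature, tolerance=0.5):
    """True if consecutive inter-spike intervals match the signature."""
    intervals = [b - a for a, b in zip(spike_times, spike_times[1:])]
    if len(intervals) != len(signature):
        return False
    return all(abs(i - s) <= tolerance for i, s in zip(intervals, signature))

def caller_id(spike_times, signatures):
    """Identify WHICH upstream neuron sent this spike train, or None (noise)."""
    for neuron_id, signature in signatures.items():
        if matches_signature(spike_times, signature):
            return neuron_id
    return None  # un-identified spikes are ignored as "noise"

# Two hypothetical upstream neurons with distinct interval signatures (ms):
signatures = {"neurIn_A": [2.0, 2.0, 6.0], "neurIn_B": [1.0, 4.0, 1.0]}
print(caller_id([10.0, 12.1, 14.0, 19.9], signatures))  # -> neurIn_A
print(caller_id([10.0, 13.0, 17.5, 18.0], signatures))  # -> None (noise)
```

A real filtering stage would have to work with overlapping trains from many neurons at once; this sketch only shows the identification step for a single isolated train.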
Advantages, characteristics
- simple bit-shifts can do a lot of the initial work for spiking time series
- implements one speculative function (filtering of spike streams) for dendritic trees. I have not yet focussed on other possible functions, as provided by literature.
- noise reduction in addition to identifying the "callerID" neuron that has sent a specific spike train, independent of the type of spiking, for example :
Izhikevich's [regular spiking (RS), intrinsically bursting (IB), chattering (CH), fast spiking (FS), thalamo-cortical (TC), resonator (RZ), low-threshold spiking (LTS)]
Izhikevich Nov2003 Known types of neurons
- No gradient is used for a "pure" application of "callerID-SNNs" for [filtering, neuronID].
- Rather than calculating spiking at "longer-than-synaptic-blurb" timescales, simple binning is being considered. Other thoughts will come later, but likely only after resolving challenges of
- May help to explain one of the functions of "bushy" axonal trees?
- possible reduction of the computational load of training SNNs (downstream of CallerID-SNNs)
- more [reliable, robust, time-precise, source-neuron-specific] identification of incoming spike sequences would be a huge opportunity for using the spikes for "downstream" [calculation, model]s, such as [number representation, arithmetic, function approximation, calculus].
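The "simple bit-shifts" remark above can be illustrated with a toy sketch: bin a spike train into a fixed-width integer bitmask (one bit per time bin), so that shifting the mask slides the time window and bitwise AND compares two trains cheaply. The bin width, window size, and shift range below are arbitrary assumptions for illustration only.

```python
# Toy bit-shift binning: a spike train becomes an integer bitmask where
# bit i is set iff a spike fell in time bin i. Shifting the mask slides
# the time window; AND + popcount measures overlap between two trains.

BIN_MS = 1.0   # bin width in ms (assumed)
WINDOW = 16    # number of bins kept (assumed)

def to_bitmask(spike_times):
    """Bin spike times into an integer: bit i set => a spike in bin i."""
    mask = 0
    for t in spike_times:
        b = int(t // BIN_MS)
        if 0 <= b < WINDOW:
            mask |= 1 << b
    return mask

def shifted_overlap(mask_a, mask_b, max_shift=3):
    """Best bin-count overlap of two trains over small time shifts."""
    best = 0
    for s in range(-max_shift, max_shift + 1):
        shifted = mask_a << s if s >= 0 else mask_a >> -s
        overlap = shifted & mask_b & ((1 << WINDOW) - 1)
        best = max(best, bin(overlap).count("1"))
    return best

a = to_bitmask([1.2, 3.7, 8.9])   # spikes land in bins 1, 3, 8
b = to_bitmask([2.1, 4.5, 9.3])   # bins 2, 4, 9 (train a delayed ~1 bin)
print(shifted_overlap(a, b))      # -> 3: all three spikes align at shift +1
```

The point is only that binning and comparison reduce to [shift, AND, popcount] integer operations, which is why bit-shifts can do a lot of the initial work for spiking time series.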
images :
- video of multiple input neuron (neurInn) spike trains + variable noise + sudden noise -> "Post-Synaptic Blurb" (PSB) streams for each neurInn -> combined PSB stream -> filtering -> neurInn(i, t_start) -> back to video start point
- de-convoluted noise, potential for learning [new, changed, damaged] input neurons
Disadvantages
- genetic-based approaches for driving neuron spikes are not an accepted concept, and their biological plausibility has not been established. This makes it a no-go for nearly all [math, science, engineering] professionals. Overall, standards of practice based on years of application experience make things safer for us all.
- Spike filtering would occur at timescales 10-100 times faster than higher-level SNN [intra, extra]-cellular processes. Normal [genetic, neuron] processes may be far too slow for what is required?
- Learning by "callerID-SNNs" has not yet been addressed. The intent is to look into a combination of :
- genetic mechanisms within-the-neuron (intra-cellular), as well as interactions between neurons (extra-cellular)
- conventional gradient descent, and [evolutionary computation, particle swarm, etc] non-gradient methods
- Grossberg's concepts for [neuron, [micro, macro]-circuit]s. Grossberg also discusses [intra, extra]-cellular processes, which are obvious tie-ins to [DNA, RNA] (evolutionary) coding?
When used with the standard SNN models, gradients would normally be required.
Tuning instead of training
"Tuning" of callerID-SNNs will be required to adapt the synapses of the dendritic trees to provide sufficient "separation" of the spikes coming in from connected neurons. This has not yet been addressed in any depth.
callerID-SNNs are not (yet) intended to "learn", as required for the overall SNN. That will be handled by traditional extra-cellular SNN techniques, plus a new layer of highly speculative intra-cellular processes.
- Gradients are normally required for training, but for now I am simply assuming that I will start with SNN gradient methods, rather than converting [Deep Learning [CNN, TrNN], other continuous NN] to SNN.
- [number representation, arithmetic, function approximation, calculus] are subjects for later consideration.
The obvious first step is to build on known mechanisms for protein coding, including [DNA -> mRNA transcription, mRNA processing, mRNA -> protein translation] in ribosomes etc. As my own current priority is looking for potential links between [neuron, brain] function and genetic "program coding", I also require "missing" mechanisms, plus more explanation of basic processes. More specifically at this time :
- electro-mechanical [real, biological] mechanisms for "transporting" bio-chemical material around (eg microtubules etc)
- [gel, bulk] water phase changes
- most importantly right now - mechanisms for coprocessing multiple mRNA strands
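The known protein-coding pipeline mentioned above can be sketched minimally: DNA is transcribed to mRNA, and mRNA is translated to protein at the ribosome (with tRNAs matching codons to amino acids). The codon table below is a tiny subset of the real genetic code, used for illustration only.

```python
# Minimal sketch of the known protein-coding pipeline:
# DNA -> mRNA (transcription) -> protein (translation at the ribosome).

CODON_TABLE = {  # tiny subset of the standard genetic code
    "AUG": "Met", "UUU": "Phe", "GGC": "Gly", "UAA": "STOP",
}

def transcribe(dna_template):
    """Transcription: DNA template strand -> complementary mRNA."""
    pair = {"A": "U", "T": "A", "G": "C", "C": "G"}
    return "".join(pair[base] for base in dna_template)

def translate(mrna):
    """Translation: read codons 3 bases at a time until a STOP codon."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        aa = CODON_TABLE.get(mrna[i:i+3], "?")
        if aa == "STOP":
            break
        protein.append(aa)
    return protein

mrna = transcribe("TACAAACCGATT")   # hypothetical template strand
print(mrna)                         # -> AUGUUUGGCUAA
print(translate(mrna))              # -> ['Met', 'Phe', 'Gly']
```

Any "program coding" mechanism would presumably sit alongside (or reuse) this machinery, which is why the basic pipeline is the obvious starting point.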
images :
- DNA -> mRNA
-> [protein, program] code
|---> protein image
|---> program code image
"MindCode" is a hobby project to link spiking to [DNA, RNA]*[mechanisms, "program" code] ("program code" as opposed to "protein code").
My own current priority is in looking for potential links between [neuron, brain] function and genetics. More specifically at this time :
- non-protein-coding parts of the DNA - searching for coding-like polypeptide "keywords" equivalent to various classical programming [copy, remove, assignment, control [if, while, done, etc]].
- the [role, processes] of [program, protein] coding in [DNA, RNA] - if program coding exists (I assume it must, in some form). Biological research since ?I don't know when?, and Deep Learning research since ~2020?, is generating "interactions" between protein and non-coding mRNA that may be a good starting point?
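The "keyword" search idea above can be illustrated with a toy scan of a DNA string for short motifs treated as hypothetical "program keywords". The motifs and their "instruction" meanings below are invented placeholders, not known biological signals.

```python
# Toy "keyword" scan: look for short DNA motifs treated as hypothetical
# program-like instructions. Motifs and meanings are invented placeholders.

KEYWORDS = {                      # hypothetical motif -> "instruction"
    "TATAAT": "control:start",    # motif resembles a promoter box; meaning invented
    "GGTACC": "copy",
    "AAGCTT": "assignment",
}

def find_keywords(dna):
    """Return (position, motif, meaning) for every keyword occurrence."""
    hits = []
    for motif, meaning in KEYWORDS.items():
        start = dna.find(motif)
        while start != -1:
            hits.append((start, motif, meaning))
            start = dna.find(motif, start + 1)
    return sorted(hits)

seq = "CCGGTACCTTTATAATGGAAGCTTCC"
for pos, motif, meaning in find_keywords(seq):
    print(pos, motif, meaning)
# -> 2 GGTACC copy
#    10 TATAAT control:start
#    18 AAGCTT assignment
```

A serious version would have to handle [both strands, reading frames, degenerate motifs]; the sketch only shows what a "keyword vocabulary" over non-coding DNA might look like computationally.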
images :
- somehow show a MindCode program tying into Grossberg's ART
Stephen Grossberg 2021 'Conscious Mind, Resonant Brain'
Grossbergs list of [figure, table]s
Overview of Grossberg's work
Grossberg's [core, fun, strange] concepts
Should [callerID, MindCode] progress to a practical stage, then my priority is to tie them to Grossberg's book. Grossberg's work provides the only [basis, context, framework] that I am aware of that I feel can answer many of my interests. It is the only framework beyond basic neuroscience that I am comfortable with (sort of understand) for explaining how neurons and the brain might function, beyond basic spiking (most of Grossberg's work uses continuous outputs). That doesn't necessarily mean that he's right, but he is solid, and he provides something to work with.
Grossberg's work (1957 to present) has a huge basis of experimental support from [psychology, ??neuroscience??].
To make it much easier to read Grossberg's book, ~620 captioned images are available on the webPage "Grossbergs list of [figure, table]s" as per a link below (posted with Grossberg's permission). Multiple images can be opened at a time.
My webPages have been moved, but links to older pages (retained for now) will hopefully still work. Note that my webPages are [incomplete, changing as I work].