#] #] *********************
#] "$d_MindCode"'03_MindCode programming notes.txt'
# www.BillHowell.ca 26Feb2020 initial - did I lose the original file?
# view this file in a text editor, with [constant width font, tab = 3 spaces], no line-wrap
17Nov2023 copied d_Qndfs: from d_MindCode (2020) to d_MindCode
[architecture, data, functions, processes, operating systems, consciousness, behaviours, personalities, creativity]

48************************************************48
24************************24
# Table of Contents :
# $ grep "^#]" "$d_MindCode"'03_MindCode programming notes.txt' | sed 's|^#\] | |'
# ********************* "$d_Qndfs"'MindCode/03_MindCode programming notes.txt'
	description
	30Aug2021 current builds :
	10Aug2021 clusBranch2Bit_create, neur_connectStd
	03Aug2021 Quick explanations: Base tools for MindCode; 0-spiking = non-spiking neurons
	15Apr2021 split ndfs for [series, integer] clusters
	10Apr2021 integer cluster - toy with idea of "universal integer" of variable length, circular stack
	06Apr2021 Programming Languages - The Early Years; Bit Level Arithmetic Architectures
	03Apr2021 diff "clusters/logic gates.ndf" "neurons/logic gates.ndf"
	01Apr2021 clusInt_add IS OP clusIdent
	02Apr2021 I MUST have McCulloch & Pitts logic-type neurons!!!
	01Apr2021 'integer add.ndf' & 'symbols.ndf'
	31Mar2021 create a first example of a net, link d_MindCode 'symbols.ndf'
	10Sep2020 continue with symbols problem
	10Sep2020 Recent peer review that I did has great pertinence!
	18Aug2020 WCCI2020 "newish highlights" for me :
	25May2020 symbols - data vs operators
	15May2020 from ToDos : Current work : create a system to handle "formulae"
	28Feb2020 [Split, tiny clean-up] of "The MindCode Manifesto"
24************************24
#] description
This is for programming-related work.
see "$d_PROJECTS""MindCode/0_MindCode reports.txt" for [writeups, documentation, web] thinking

Generate a list of [data, optr]s :
	$ bash "$d_bin""MindCode list of [data, optr]s.sh"

To search for [variable, optr, comment]s :
	$ find "$d_MindCode" -type f -name "*.ndf" | grep --invert-match "z_Archive" | tr \\n \\0 | xargs -0 -ILINE grep --with-filename "<[variable, optr, comment]s>" LINE

To search for [,NON]locals (1st line only) :
	$ bash "$d_bin""MindCode list of [data, optr,[,NON]local 1st line]s.sh"

# operators :
	find "$d_PROJECTS""Qnial/MY_NDFS/MindCode" -type f -name "*.ndf" | tr \\n \\0 | xargs -0 -ILINE grep --with-filename "^#]" LINE | grep --invert-match 'z_Archive' > "$d_temp""5_MindCode globals temp.txt"
	grep --invert-match 'z_Old' "$d_temp""5_MindCode globals temp.txt" | sed 's/.*#]//'

# transpose mix neurIdentL neurClassL dendIdentL dendDelayL axonIdentL axonDelayL
# transpose mix clusIdentL clusClassL clusNeurnL

Quick copy NONLOCALS :
	NONLOCAL clusIdentL clusClassL clusNeurnL ;
	NONLOCAL neurIdentL neurClassL neurStateL dendIdentL dendDelayL dendInhibL axonIdentL axonDelayL axonInhibL ;

24************************24
08********08
#] 30Aug2021 current builds :
clusStackFIFO clusSeriesClock clusBranch_2In3Out

qnial> loaddefs link d_Qndfs 'QNial list [,NON]LOCALs.ndf'
>>> loading start : QNial list [,NON]LOCALs.ndf
	loading list_LOCALs_or_NONLOCALs
	loading list_NONLOCALS_dirMindCode
<<< loading ended : QNial list [,NON]LOCALs.ndf
qnial> list_NONLOCALS_dirMindCode
list of NONLOCALs in pexp, but not pout :
	fieldL glialL solarL typeL trigPatL trigTypeL
list of NONLOCALs in pout, but not pexp :
# "$d_MindCode""2_MindCode list of NONLOCALs QNial no-explanations.txt"
# reads [,NON]LOCAL lists in .ndf
files of d_MindCode & its subDirs
# see also :
#	- "$d_bin""MindCode list of [data, optr,[,NON]local 1st line]s.sh"
#	- "$d_Qndfs""MindCode/5_MindCode global variable explanations.txt"
#	- uses QNial standard optr to list [all, user-defined] symbols : symbols [1,0]
# generated by : link d_Qndfs "QNial list [,NON]LOCALs.ndf"
# date generated : 30Aug2021 15h04m20s
bitCnt_sign bits_base bits_expt bits_sign clusBranch_2In3Out_inn clusBranch_2In3Out_out
inhibL inHistL intg_high intg_loww nameL n_clus netIDL net_rows
neurClassL neurFiredL neurIdentL neurInFireL neur_last neurList_increment neurStateL n_neur
NONLOCAL numberL optrL out outL outs resCalcL ResCalcLtTitles sensIDL subTemp
trgrPatL trgrPatNmIDL trgrTyp trgrTypL trigTitles XrefNmIDL Xref_rows
>> these were added to pexp

08********08
#] 10Aug2021 clusBranch2Bit_create, neur_connectStd

+-----+
olde code

neur_connectStd removed, do connections in optrs specific to a [neur, clus]
# loaddefs link d_MindCode 'basics - neurons.ndf'

IF flag_debug THEN write 'loading neur_connectStd' ; ENDIF ;
# neur_connectStd IS OP neurIdent ([dend, axon] cart [Idents, Delays, Inhibs])
# standard setup of connections for neuron
# 10Aug2021 NOT used!
#	calling optr to put in defaults at start if undefined?
# 03Apr2021 initial
# 10Aug2021 what about [new, dead] connections?
neur_connectStd IS OP neurIdent dendIdents dendDelays dendInhibs axonIdents axonDelays axonInhibs
{ NONLOCAL dendIdentL dendDelayL dendInhibL axonIdentL axonDelayL axonInhibL ;
	% initialise this neuron's connection lists with default [null, random-delay] entries ;
	% the passed [dend, axon][Idents, Delays, Inhibs] are not yet wired in - optr NOT used, see above ;
	update "dendIdentL neurIdent (dendIdentL@neurIdent link null ) ;
	update "dendDelayL neurIdent (dendDelayL@neurIdent link (1 + random 1) ) ;
	update "dendInhibL neurIdent (dendInhibL@neurIdent link null ) ;
	update "axonIdentL neurIdent (axonIdentL@neurIdent link null ) ;
	update "axonDelayL neurIdent (axonDelayL@neurIdent link (1 + random 1) ) ;
	update "axonInhibL neurIdent (axonInhibL@neurIdent link null ) ;
}

08********08
#] 03Aug2021 Quick explanations: Base tools for MindCode; 0-spiking = non-spiking neurons

# Base tools for MindCode - [0,1,2,3,burst] spiking
http://www.scholarpedia.org/article/Bursting
	Bursts encode different features of sensory input than single spikes (Gabbiani et al. 1996, Oswald et al. 2004). For example, neurons in the electrosensory lateral-line lobe (ELL) of weakly electric fish fire network-induced bursts in response to communication signals and single spikes in response to prey signals (Doiron et al. 2003).

# 0-spiking = non-spiking neurons
https://en.wikipedia.org/wiki/Non-spiking_neuron
	Through studying these complex spiking networks in animals, a neuron that did not exhibit characteristic spiking behavior was discovered. These neurons use a graded potential to transmit data, as they lack the membrane potential that spiking neurons possess.

+-----+
Abbott, Dayan 2012 "Chapter 1, Neural Encoding I, Firing rates and spike statistics"
www.dam.brown.edu/people/elie/NEUR_1680_2012/Abbott%20Dayan%20Chapter%201.PDF
Dayan 2012 Neural Encoding I, Firing rates and spike statistics.pdf
>> good background for me!

p1h0.4 Neurons are remarkable among the cells of the body in their ability to propagate signals rapidly over large distances.
>> i.e. spiking may not be important for short-range info-comm?
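Stepping back to the neur_connectStd olde code above: it keeps parallel per-neuron lists for dendrite/axon [idents, delays, inhibitions]. A hypothetical Python translation of that bookkeeping may make the data structure easier to see; the class and method names here are my own, and only the [ident, delay, inhib] list layout and the (1 + random 1) delay default come from the .ndf sketch:

```python
# Hypothetical Python sketch of the neur_connectStd bookkeeping:
# parallel per-neuron lists, one dict entry per neuron ident.
# Names mirror the QNial NONLOCALs (dendIdentL, axonDelayL, ...).
import random

class NeurRegistry:
    def __init__(self):
        self.dendIdentL, self.dendDelayL, self.dendInhibL = {}, {}, {}
        self.axonIdentL, self.axonDelayL, self.axonInhibL = {}, {}, {}

    def neur_create(self, ident):
        # every table gets an empty, growing list for this neuron
        for table in (self.dendIdentL, self.dendDelayL, self.dendInhibL,
                      self.axonIdentL, self.axonDelayL, self.axonInhibL):
            table[ident] = []

    def neur_connectStd(self, ident, dendIdents, axonIdents):
        # append connections with default 1..2 tick random delays,
        # matching the (1 + random 1) default, and no inhibition
        for d in dendIdents:
            self.dendIdentL[ident].append(d)
            self.dendDelayL[ident].append(1 + random.randint(0, 1))
            self.dendInhibL[ident].append(None)
        for a in axonIdents:
            self.axonIdentL[ident].append(a)
            self.axonDelayL[ident].append(1 + random.randint(0, 1))
            self.axonInhibL[ident].append(None)
```

One design note: keeping six parallel lists (as the .ndf does) is faithful to the original, but a single list of per-connection records would make the "[new, dead] connections?" question above easier to handle.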
p4h0.7 Axons terminate at synapses where the voltage transient of the action potential opens ion channels, producing an influx of Ca2+ that leads to the release of a neurotransmitter (figure 1.2B). The neurotransmitter binds to receptors at the signal-receiving or postsynaptic side of the synapse, causing ion-conducting channels to open. Depending on the nature of the ion flow, the synapses can have either an excitatory, depolarizing, or an inhibitory, typically hyperpolarizing, effect on the postsynaptic neuron.
>> MindCode - could use this for ["zero", inhibitory] spike?
>> Gerald Pollack's EZ water rather than membrane potential? (or perhaps explains membrane potential)

p5h0.4 The middle trace is a simulated extracellular recording. Action potentials appear as roughly equal positive and negative potential fluctuations with an amplitude of around 0.1 mV. This is roughly 1000 times smaller than the approximately 0.1 V amplitude of an intracellularly recorded action potential. (Neuron drawing is the same as figure 1.1A.)

p6h0.6 The complexity and trial-to-trial variability of action potential sequences make it unlikely that we can describe and predict the timing of each spike deterministically. Instead, we seek a model that can account for the probabilities that different spike sequences are evoked by a specific stimulus.
>> Howell's "roving calculations"?

p6h0.8 In this chapter, we introduce the firing rate and spike-train correlation functions, which are basic measures of spiking probability and statistics. We also discuss spike-triggered averaging, a method for relating action potentials to the stimulus that evoked them. Finally, we present basic stochastic descriptions of spike generation, the homogeneous and inhomogeneous Poisson models, and discuss a simple model of neural responses to which they lead.
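The homogeneous Poisson model mentioned in the p6h0.8 excerpt can be sketched in a few lines: spikes occur independently at a constant rate r, so inter-spike intervals are exponential with mean 1/r. This is the generic textbook construction, not MindCode code, and the 20 Hz / 100 s parameters are purely illustrative:

```python
# Homogeneous Poisson spike-train generator (textbook construction):
# draw exponential inter-spike intervals until the trial ends.
import random

def poisson_spike_train(rate_hz, duration_s, rng):
    """Spike times on [0, duration_s); ISIs are exponential, mean 1/rate_hz."""
    t, spikes = 0.0, []
    while True:
        t += rng.expovariate(rate_hz)   # next inter-spike interval
        if t >= duration_s:
            return spikes
        spikes.append(t)

# illustrative run: 20 Hz for 100 s, so the empirical rate should sit near 20 Hz
rng = random.Random(0)
train = poisson_spike_train(rate_hz=20.0, duration_s=100.0, rng=rng)
```

The inhomogeneous version replaces the constant rate with r(t), e.g. by thinning: generate at the peak rate, then keep each spike at time t with probability r(t)/r_max.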
In chapter 2, we continue our discussion of neural encoding by showing how reverse-correlation methods are used to construct estimates of firing rates in response to time-varying stimuli. These methods have been applied extensively to neural responses in the retina, lateral geniculate nucleus (LGN) of the thalamus, and primary visual cortex, and we review the resulting models.

p7h0.3 The spike sequence can also be represented as a sum of infinitesimally narrow, idealized spikes in the form of Dirac δ functions (see the Mathematical Appendix),
	(1.1) ρ(t) = sum{i=1 to n; δ(t − t(i))}
We call ρ(t) the neural response function and use it to re-express sums over spikes as integrals over time. For example, for any well-behaved function h(t), we can write
	(1.2) sum{i=1 to n; h(t − t(i))} = int{tau=0 to T; h(tau)*rho(t − tau)*d(tau)}
where the integral is over the duration of the trial. The equality follows from the basic defining equation for a δ function,
	(1.3) int{tau=0 to T; δ(t − tau)*h(tau)*d(tau)} = h(t)
provided that the limits of the integral surround the point t (if they do not, the integral is zero).
>> My [,epi]genetic base is supposed to do this!! prescient! Best to use formal math, though, as others will relate better.

+-----+
https://www.usna.edu/Users/physics/tank/Other/MathMethods/EandM/DiracDelta.pdf
The Dirac Delta: Properties and Representations
	The Dirac delta function is introduced to represent a finite chunk packed into a zero width bin or into zero volume. To begin, the defining formal properties of the Dirac delta are presented. A few applications are presented near the end of this handout.
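As a quick numerical sanity check of eq (1.2) above (mine, not from Dayan & Abbott): on a discrete time grid, each Dirac δ becomes one bin of height 1/dt, so the spike sum on the left side should match the discretized integral on the right. The grid size, kernel h, and spike times below are illustrative assumptions:

```python
# Numerical check of eq (1.2): sum_i h(t - t_i) equals the integral of
# h against the neural response function rho(t) = sum_i delta(t - t_i).
import math

dt = 0.001                            # bin width (s); illustrative
T = 1.0                               # trial duration (s)
n_bins = round(T / dt)
spike_times = [0.100, 0.250, 0.600]   # example spike times t_i (s)
t_eval = 0.700                        # time at which both sides are evaluated

def h(t):
    # any well-behaved kernel h(t); a causal decaying exponential here
    return math.exp(-t / 0.050) if t >= 0.0 else 0.0

# left side of (1.2): direct sum over spikes, sum_i h(t - t_i)
direct = sum(h(t_eval - ti) for ti in spike_times)

# right side of (1.2): discretize rho as a spike-count histogram with
# bin height 1/dt; substituting s = t - tau turns int h(tau)*rho(t - tau)*dtau
# into int h(t - s)*rho(s)*ds, which is the sum below
rho = [0.0] * n_bins
for ti in spike_times:
    rho[round(ti / dt)] += 1.0 / dt
integral = sum(h(t_eval - k * dt) * rho[k] * dt for k in range(n_bins))
```

The two sides agree to floating-point precision because the example spike times fall exactly on grid points; off-grid spikes would agree only to within one bin width, which is the usual discretization caveat.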
The most significant example is the identification of the ...

08********08
#] 15Apr2021 split ndfs for [series, integer] clusters

+-----+
# olde code
	% not done at creation - only later ;
	% clusIntStartL := link clusIntStartL 0 ;
	% clusIntStoppL := link clusIntStoppL 0 ;

# logic-only diagram :
	-carryIn-|-NOR---|
	-input2--|       |
	-input1--|-NEQ---|       |-NAND2--> carryover
	         ||-NEQ-|        |
	         |-NAND1-|-NOR---|--> intBit
	         |-NAND0---------|-NEQ-
	                         |-NAND2---> carryOut overflow bit

# count-neuron diagram :
	-overIn-|-ZERO--> 0 to intBit
	-input1-|-ONE---> 1 to intBit
	-input2-|-TWO---| 0 to intBit
	        |       | 1 to overOut
	        |-THREE-| 1 to intBit
	                | 1 to overOut

# set clusInt = 0, add bits directly to clusInt one by one, at the level of bits

08********08
#] 10Apr2021 integer cluster - toy with idea of "universal integer" of variable length, circular stack

+-----+
olde code

clusInt_create IS
{ LOCAL neurIdent clusIdent clusClass clusNeurn neur0_to5 i n_neurons ;
	NONLOCAL neurIdentL clusIdentL clusClassL clusNeurnL ;
	% ;
	% Create cluster & add to [global, local, etc] lists ;
	clusIdent := clus_create "int ;
	neur0_to5 := clusNeurs_create clusIdent 6 ;
	% ;
	clusIdent
}

08********08
#] 06Apr2021 Programming Languages - The Early Years; Bit Level Arithmetic Architectures

https://www.informit.com/articles/article.aspx?p=31670&seqNum=2
Programming Languages: The Early Years, May 2, 2003
	At their lowest level, computers cannot subtract, multiply, or divide. Neither can calculators. The world's largest and fastest supercomputer can only add—that's it. It performs the addition at the bit level. Binary arithmetic is the only means by which any electronic digital computing machine can perform arithmetic.

http://people.ece.umn.edu/users/parhi/SLIDES/chap13.pdf
Chapter 13: Bit Level Arithmetic Architectures, Keshab K.
Parhi

+-----+
olde code

% "fire" back-connection ;
% dendInhib := axonInhibL@clusIdent2 ;
% axonInhib := axonInhibL@clusIdent2 ;

08********08
#] 03Apr2021 diff "clusters/logic gates.ndf" "neurons/logic gates.ndf"

$ diff "$d_MindCode""clusters/logic gates.ndf" "$d_MindCode""neurons/logic gates.ndf" --suppress-common-lines

https://www.computerhope.com/jargon/l/logioper.htm
>> good list & description of logic operators
	AND
	OR
	NOT
	* NAND = not and
	* NOR = not or
	* NEQ -> returns true if any of its inputs differ, and false if they are all the same.
		# stupid convention to call this XOR!!! jackass stupid and obscure!
	* EQ -> returns true if all of its inputs are the same, and false if any of them differ.
		# stupid convention to call this XNOR!!! jackass stupid and obscure!
[NAND, NOR] have the distinction of being the two "universal" logic gates, because any other logic operation can be created using only one of these gates.

+-----+
olde code

# +-----+
# Connections - dendIdents axonIdents
# IF flag_debug THEN write 'loading checkConnects' ; ENDIF ;
# checkConnects IS -
checkConnects IS
{ LOCAL ;
	NONLOCAL ;
	% ;
	% Add this cluster's ID to connected [dend, axon] lists ;
	% This requires that cables have suitable [number, ordering] of synapses to connect to this cluster ;
	% must check for "type" consistency ;
	IF (~= null dendIdents) THEN
		FOR ID WITH dendIdents DO
			IF (= neurClass neurClassIDL@ID) THEN link dendIdentL@ID clusID ;
			ELSE write link 'addInteger clusAdd = ' neurClass ', dendIdent = ' ID ', classDend = ' dendIdentL@ID ;
			ENDIF ;
		ENDFOR ;
	ENDIF ;
	% ;
	IF (~= null axonIdents) THEN
		FOR ID WITH axonIdents DO
			IF (= class classIDL@ID) THEN link axonIdentL@ID clusID ;
			ELSE write link 'addInteger clusAdd = ' class ', axonIdent = ' ID ', classAxon = ' axonIdentL@ID ;
			ENDIF ;
		ENDFOR ;
	ENDIF ;
}

% add neurons, using neurClass="intBit, null for [axonIdents axonDelays dendIdents dendDelays] ;
% [dendIdents upstream, axonIdents, all delays] to be defined later but are null ; initial code
here ;
n_neurons := 6 ;
clusNeurn := null ;
FOR i WITH (tell n_neurons) DO
	neurIdent := neur_create ;
	clusNeurn := clusNeurn link neurIdent ;
ENDFOR ;
clusNeurnL@clusIdent := clusNeurn ;

08********08
#] 01Apr2021 clusInt_add IS OP clusIdent

# should I add a "fire" connection to [neuron, cluster]s to invoke output? [attention, vigilance]
#	some may simply change state without firing?
# 02Apr2021 Note that 2 synapses per neuron-to-neuron connection are defined
#	- (maybe robustness?),
#	- with initial random time delays
#	- only one connection for intra-cluster neurons?
# 02Apr2021 I need distinct firing values for [0,1], how about double firing? :
#	- if true : fire 1, then no fire at second delay OR double fire
#	- if false : no fire, then fire 1 at second delay OR single fire
#	- use "fire" back-connection for this? (sensory - automatic?)
# lowest bit is first

08********08
#] 02Apr2021 I MUST have McCulloch & Pitts logic-type neurons!!!
neural networks can NEVER work without them!!!

+-----+
olde code

# 01Apr2021 not needed?
% At time of creation, do NOT add external [dend, axon]IDs to this cluster's lists ;
% neur_create adds null values ;
% Do I need these clus variables? or simply neuron assigns are OK? ;
clusNeurL clusDenIDL clusDelayL clusAxoIDL clusAelayL := clusNeurL clusDenIDL clusDelayL clusAxoIDL clusAelayL EACHBOTH link null null null null null ;

08********08
#] 01Apr2021 'integer add.ndf' & 'symbols.ndf'

Maybe feedback to confirm fire? Could help with [0,1] states of integers to set downstream neurons. First is a reset to 0. If there is a second, sets to 1.

cluster of [cluster, neuron]s -> may need [super-cluster, system]s etc? probably redundant?
	treats connections as bundles of neurons, with connections to other clusters ?and individual neurons?
	some neurons are "captive", with connections ONLY within the cluster
	other neurons connect to external [cluster, neuron]s
assumptions :
	neurons can store a state - but how?
	each needs a "fire line" to compel it to fire without other inputs!
	non-myelinated axons allow indirect stimulation of neurons - need inputs for that
	neuron state : right now, only 0 or 1; later - bursting, pattern etc (multi-state)

+-----+
olde code

IF flag_debug THEN write 'loading clus_intAdd_create' ; ENDIF ;
# clus_intAdd_create IS - initially, just for two input integers, but could generalize
# 01Apr2021 initial
clus_intAdd_create IS
{ LOCAL clusIdent0 clusClass0 clusNeurn0 clusIdent1 clusClass1 clusNeurn1 clusIdent2 clusClass2 clusNeurn2 ;
	NONLOCAL clusIdentL clusClassL clusNeurnL ;
	% ;
	clusIdent0 := clus_integer_create ;
	clusIdent1 := clus_integer_create ;
	clusIdent2 := clus_integer_create ;
	% ;
	% bit_add neurons ;
}

08********08
#] 31Mar2021 create a first example of a net, link d_MindCode 'symbols.ndf'

create a first example of a net
/media/bill/Dell2/Website - raw/Qnial/MY_NDFS/MindCode/nets/integer add.ndf
	a zero state of a neuron still produces a spike pattern for zero!!!
	non-firing states don't output

23Mar2021 link d_MindCode 'symbols.ndf'
# These "get" optrs aren't terribly useful - just use the short form!
# get_neuronClass IS OP neuronID - res ipsa loquitur
get_neuronClass IS OP neuronID { classIDL@neuronID }
# get_neuronAxonIDs IS OP neuronID - res ipsa loquitur
get_neuronAxonIDs IS OP neuronID { axonL@neuronID }
# get_neuronDendIDs IS OP neuronID - res ipsa loquitur
get_neuronDendIDs IS OP neuronID { dendL@neuronID }

08********08
#] 10Sep2020 continue with symbols problem
then go back to [finders, taskers, formulators, commanders]

08********08
#] 10Sep2020 Recent peer review that I did has great pertinence! :
NEUNET-D-20-00790 p Garcia etal - Small Universal Spiking Neural P Systems with dendritic-axonal delays and dendritic trunk-feedback.pdf

scattered thoughts from WCCI2020 : ART?
!6 24 I-SS55 Xiaobo Liu, Graph Convolutional Extreme Learning Machine [#21160]

08********08
#] 18Aug2020 WCCI2020 "newish highlights" for me :
Plenary : Kay Chen Tan, Evolutionary transfer optimisation
Keynote : Johan Suykens, Deep Learning NNs and Kernel machines, new synergies (awesome path forward!!!)
Keynote : Michael Bronstein, Geometric deep learning, going beyond Euclidean data
[#24057] : Santiago Gonzalez, Risto Miikkulainen, Improved Training Speed, Accuracy, and Data Utilization via Loss Function Optimization
Keynote : Alexander Gorban, Augmented AI - correctors of errors and social networks of AI (blessing of dimensionality)
Tutorial 30 : Claudio Gallicchio, Simone Scardapane, Deep randomized neural networks
Workshop 9 : Artificial Intelligence for Mental Disorders (fascinating - after years of questionable claims, the quality of research seems to be improving, but this is hard!)
Workshop 6 : Design, Implementation, and Applications of Spiking Neural Networks and Neuromorphic Systems

Extensions of solid historical career work - good context :
Keynote : Stephen Grossberg, From designs for autonomous adaptive agents to clinical disorders - Linking cortically-mediated learning to Alzheimer's disease, autism, amnesia, and sleep
Keynote : Kunihiko Fukushima, Deep CNN Neocognitron for Artificial Vision

Not so much, even though it was interesting :
Keynote : Yoshua Bengio, Deep Learning 2 - kind of nice, and I really like Bengio's work over the years, but :
	- I think he's splashing around a bit (like everyone else), and seems to be "cornered" by demands to show the way forward at perhaps too early a stage?
	- he doesn't refer to a lot of thinking going back 70+ years (consciousness etc)
	- to me he has completely missed the base (as has almost everyone else so far)
I do agree with his emphasis on consciousness, but not right now for Deep Learning (especially CNNs), as they need to be conceptually upgraded big time.
Consciousness doesn't need to wait for Deep Learning - it's past time to go forward after John Taylor's work & others. I'm lukewarm on Bernard Baars' "Global workspace theory" (GWT), but it plus other authors provide good context.
Of course - hundreds of other great papers.

08********08
#] 25May2020 symbols - data vs operators
see "MindCode/code develop/symbols - data vs operators notes.txt"

08********08
#] 15May2020 from ToDos : Current work : create a system to handle "formulae"
see "MindCode/code develop/symbols - data vs operators notes.txt"
big issue - Robert Hecht-Nielsen, Asim Roy

08********08
#] 28Feb2020 [Split, tiny clean-up] of "The MindCode Manifesto"
It is much easier to work with a number of sub-documents, rather than endlessly wander around a big one.

Table of Contents
	Summary	1
	Introduction - Conceptual pseudo-basis for MindCode	8
	MindCode components	11
	[Neurological, biological] basis for epiDNA coding	13
	Historical [DNA, Protein, Evolutionary Computing, ANN] hybrid basis for epiDNA-NNs	16
	MindCode - arbitrary selections from "Multiple Conflicting Hypothesis"	18
	Assumed "rules of the game"	21
	Static epiDNA-neuron coding of [architecture, data, functions, processes, operating systems, consciousness, behaviours, personalities]	23
	Static epiDNA-NN coding of [architecture, data, functions, processes, operating systems, consciousness, behaviours, personalities]	30
	Static MindCode coding of [architecture, data, functions, processes, operating systems, consciousness, behaviours, personalities]	32
	Dynamic epiDNA-NN coding of [architecture, data, functions, processes, operating systems, consciousness, behaviours, personalities]	34
	Ontogeny	36
	Specialized epiDNA-NNs for MindCode	37
	Hybrids of [algorithms, conventional computing, ANNs, MindCode]	39
	Questions, not answers	40

# enddoc