#]
#] *********************
#] "$d_webRawe"'Neural nets/5_Neural Nets keep in mind.txt'
www.BillHowell.ca 30Oct2019 initial
+-----+
MindCode : as per "Howell 150225 - MindCode Manifesto.odt"
[architecture, data, functions, processes, operating systems, consciousness, behaviours, personalities, creativity]
***************************************************
24************************24
# Table of contents, generated with :
# $ grep "^#]" "$d_webRawe"'Neural nets/5_Neural Nets keep in mind.txt' | sed 's/^#\] / /'
#
*********************
"$d_webRawe"'Neural nets/5_Neural Nets keep in mind.txt'
03Mar2022 Jonty Sinai 18Jan2019 - Understanding Neural ODE's
07Apr2020 GANs evolved for Pareto set approximations Neural Networks
19Aug2019 Suykens papers on NN modules, architectures
22Dec2018 Medical diagnosis
20Mar2018 search "Ted Berger and hippocampal prosthesis company"
26Feb2018 search "Neural networks and performance measures"
02Feb2018 Pao, Takefuji: Functional-Link Net Computing: Theory, System Architecture, and Functionalities
01Feb2018 Regularisation versus ordered derivatives
18Jan2018 Kenneth O. Stanley Neuroevolution
01Dec2016 wireless optogenetic tools - AWESOME for neuroscience & neural networks!!!
15Feb2015 Schmidhuber interview comment that I posted
24Oct2014 IEEE Spectrum interview of Michael Jordan re: BigData etc
28Apr2014 Schmidhuber's Overview of Deep Learning
24************************24
08********08
#] 03Mar2022 Jonty Sinai 18Jan2019 - Understanding Neural ODE's
https://jontysinai.github.io/jekyll/update/2019/01/18/understanding-neural-odes.html
Understanding Neural ODE's
Posted by Jonty Sinai on January 18, 2019 · 39 mins read
"... Based on a 2018 paper by Ricky Tian Qi Chen, Yulia Rubanova, Jesse Bettencourt and David Duvenaud from the University of Toronto, neural ODE’s became prominent after being named one of the best student papers at NeurIPS 2018 in Montreal. ..."
"... I’ll introduce ODE’s as an alternative approach to regression and explain why they may hold an advantage. ..."
What would be really fun would be the extension to "Fractional Order Calculus" (FOC).
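A minimal numpy sketch of the core idea above (the hidden state as the solution of a learned ODE), using a fixed-step forward Euler integrator. The dynamics function f, the weight matrix W, the 4-d state, and the step count are toy assumptions; the actual paper uses adaptive solvers plus the adjoint method for gradients.

```python
import numpy as np

def f(h, t, W):
    # Learned dynamics dh/dt = tanh(W h); W would normally be trained.
    return np.tanh(W @ h)

def odeint_euler(f, h0, t0, t1, W, steps=100):
    # Fixed-step forward Euler. This is only the forward pass of the
    # core idea; it replaces the discrete layer stack h_{k+1} = h_k + f(h_k)
    # of a residual net with continuous integration in "depth" t.
    h, dt = h0, (t1 - t0) / steps
    for i in range(steps):
        h = h + dt * f(h, t0 + i * dt, W)
    return h

rng = np.random.default_rng(0)
W = 0.1 * rng.standard_normal((4, 4))   # hypothetical 4-d hidden state
h0 = rng.standard_normal(4)
h1 = odeint_euler(f, h0, 0.0, 1.0, W)   # hidden state "at depth" t=1
```

A fractional-order extension would replace the first-order Euler update with a fractional-derivative integrator, which is where the FOC speculation above would come in.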
08*****08
10Mar2021
+-----+
https://www.memphis.edu/clion/publications/journal16-20.php
Hazan, H., D. Saunders, H. Khan, D.T. Sanghavi, H.T. Siegelmann, R. Kozma (2018) BindsNET: A machine learning-oriented spiking neural networks library in Python, Frontiers in Neuroinformatics. 12, 89. DOI: 10.3389/fninf.2018.00089 https://www.frontiersin.org/articles/10.3389/fninf.2018.00089/full
Bressler, S., Kay, L., Kozma, R., Liljenstrom, H., Vitiello, G. (2017) Freeman Neurodynamics – The Past 25 Years, J. Consciousness Studies, 25(1-2), 131-150.
Lee, M., S., Bressler, R. Kozma (2017) "Advances in Cognitive Engineering Using Neural Networks," Neural Networks, 92, pp. 1-2.
Kozma, R., R. Ilin, H. T. Siegelmann (2018) Evolution of Abstraction Across Layers in Deep Learning Neural Networks, in Proc. 3rd INNS Conference on Big Data and Deep Learning 2018 (BDDL2018), April, 2018, Bali, Indonesia, Procedia in Computer Science, Elsevier, Vol. 144, pp. 203-213, Best Paper Award.
Saunders, D.J., H. T. Siegelmann, R. Kozma (2018) STDP Learning of Image Patches with Convolutional Spiking Neural Networks, IEEE/INNS Int. Joint Conf, Neural Networks (IJCNN2018), World Congress on Computational Intelligence, July 8-13, 2018, Rio de Janeiro, Brazil, pp. 4906-4912, IEEE Press.
+-----+
https://zhangtemplar.github.io/gnn/
Qiang Zhang - Experienced Computer Vision and Machine Learning Engineer
Blog Deep Learning System Design Investment About
Graph Convolutional Neural Network
[ deep-learning graph convolution neural network gcn gat spectral-convolution dcnn cnn4g dcrnn gat-lstm ]
Many important real-world datasets come in the form of graphs or networks: e.g., social networks. Graph Convolutional Neural Network (GCN) is a generalization of convolution neural network over the graph, where filter parameters are typically shared over all locations in the graph.
>> P/E ratios vs interest rates, Black Scholes model of derivatives
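A minimal numpy sketch of the shared-filter idea, following the commonly used normalized propagation rule H' = ReLU(D^-1/2 (A+I) D^-1/2 H W) of Kipf and Welling; the toy path graph and feature sizes are assumptions for illustration.

```python
import numpy as np

def gcn_layer(A, H, W):
    # One GCN layer: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W).
    # The single weight matrix W is the "filter", shared over all nodes,
    # analogous to weight sharing over all locations in a CNN.
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d_inv_sqrt = np.diag(A_hat.sum(axis=1) ** -0.5)
    A_norm = d_inv_sqrt @ A_hat @ d_inv_sqrt  # symmetric normalization
    return np.maximum(A_norm @ H @ W, 0.0)    # ReLU

# Toy 4-node path graph, 3 input features per node, 2 output features.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(1)
H = rng.standard_normal((4, 3))
W = rng.standard_normal((3, 2))
H_out = gcn_layer(A, H, W)
```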
08*****08
02Mar2021
+-----+
https://en.wikipedia.org/wiki/Gated_recurrent_unit
Gated recurrent units (GRUs) are a gating mechanism in recurrent neural networks, introduced in 2014 by Kyunghyun Cho et al.[1] The GRU is like a long short-term memory (LSTM) with a forget gate,[2] but has fewer parameters than LSTM, as it lacks an output gate.[3] GRU's performance on certain tasks of polyphonic music modeling, speech signal modeling and natural language processing was found to be similar to that of LSTM.[4][5] GRUs have been shown to exhibit better performance on certain smaller and less frequent datasets.[6][7]
Cho, Kyunghyun; van Merrienboer, Bart; Gulcehre, Caglar; Bahdanau, Dzmitry; Bougares, Fethi; Schwenk, Holger; Bengio, Yoshua (2014). "Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation". arXiv:1406.1078.
https://arxiv.org/abs/1406.1078
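The gating structure can be sketched in a few lines of numpy. The weight shapes and the update convention h' = (1-z)*h + z*h_tilde are one common formulation (the role of z is flipped in some references), not necessarily Cho et al.'s exact parameterization, and biases are omitted for brevity.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x, h, p):
    # One GRU step: z = update gate, r = reset gate. There is no separate
    # output gate, which is why a GRU has fewer parameters than an LSTM
    # of the same hidden size.
    z = sigmoid(p['Wz'] @ x + p['Uz'] @ h)
    r = sigmoid(p['Wr'] @ x + p['Ur'] @ h)
    h_tilde = np.tanh(p['Wh'] @ x + p['Uh'] @ (r * h))
    return (1.0 - z) * h + z * h_tilde

nx, nh = 3, 5                             # toy input / hidden sizes
rng = np.random.default_rng(2)
p = {k: 0.1 * rng.standard_normal((nh, nx if k[0] == 'W' else nh))
     for k in ('Wz', 'Wr', 'Wh', 'Uz', 'Ur', 'Uh')}
h = np.zeros(nh)
for x in rng.standard_normal((4, nx)):    # run over a short sequence
    h = gru_cell(x, h, p)
```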
+-----+
http://azimadli.com/vibman/thehanningwindow.htm
The Hanning window, named after its inventor Julius von Hann, has the shape of one cycle of a cosine wave with 1 added to it so that it is always positive. The sampled signal values are multiplied by the Hanning function, and the result is shown in the figure. Note that the ends of the time record are forced to zero regardless of what the input signal is doing.
While the Hanning window does a good job of forcing the ends to zero, it also adds distortion to the wave form being analyzed in the form of amplitude modulation; i.e., the variation in amplitude of the signal over the time record. Amplitude Modulation in a wave form results in sidebands in its spectrum, and in the case of the Hanning window, these sidebands, or side lobes as they are called, effectively reduce the frequency resolution of the analyzer by 50%. It is as if the analyzer frequency "lines" are made wider. In the illustration here, the curve is the actual filter shape that the FFT analyzer with Hanning weighting produces. Each line of the FFT analyzer has the shape of this curve -- only one is shown in the figure.
If a signal component is at the exact frequency of an FFT line, it will be read at its correct amplitude, but if it is at a frequency halfway between two lines (one half of delta F), it will be read at an amplitude that is too low by 1.4 dB.
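The 1.4 dB worst-case (half-bin) amplitude error, often called scalloping loss, can be checked numerically; N and the test-bin positions below are arbitrary choices.

```python
import numpy as np

N = 1024
n = np.arange(N)
hann = 0.5 - 0.5 * np.cos(2 * np.pi * n / N)   # one raised-cosine cycle

def peak_db(cycles):
    # Amplitude read off the FFT for a unit cosine at `cycles` bins,
    # relative to the window gain (0 dB = read at the correct amplitude).
    x = np.cos(2 * np.pi * cycles * n / N)
    return 20 * np.log10(2 * np.abs(np.fft.rfft(hann * x)).max() / hann.sum())

on_bin = peak_db(100.0)             # exactly on an FFT line: ~0 dB
off_bin = peak_db(100.5)            # halfway between lines
scalloping_loss = on_bin - off_bin  # ~1.4 dB, matching the text
```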
+-----+
https://www.sciencedirect.com/topics/engineering/hamming-window
Hamming window versus Hann window
**********
#] 07Apr2020 GANs evolved for Pareto set approximations Neural Networks
from 'NEUNET-D-19-01140 r Garciarena, Mendiburu, Santana - Analysis of the transferability and robustness of GANs evolved for Pareto set approximations Neural Networks.txt'
"... Despite the success EAs have had in this area, they remain a costly approach, as they require many evaluations of structures, which commonly involves training networks. Therefore, agility and transferability are two key aspects to take into account when designing NE algorithms. ..."
generative adversarial network (GAN) - GANs have arisen as one of the top performing DNN-based generative models. Their impressive results have garnered them great popularity, mainly in the generation of realistic images (Radford et al., 2015).
**************
#] 19Aug2019 Suykens papers on NN modules, architectures
Singaravel S., Suykens J.A.K., Geyer P., ``Deep-learning neural-network architectures and methods: using component-based models in building-design energy prediction'', Advanced Engineering Informatics, vol. 38, Oct. 2018, pp. 81-90., Lirias number: 1693175.
Karevan Z., Suykens J. A. K., ``Spatio-temporal Stacked LSTM for Temperature Prediction in Weather Forecasting'', Internal Report 18-136, ESAT-STADIUS, KU Leuven (Leuven, Belgium), 2018., Lirias number: x.
***********************
Good papers! N-0028.pdf, N-0017.pdf (quaternion! IJCNN2015 as well)
***********************
#] 22Dec2018 Medical diagnosis
http://sites.ieee.org/futuredirections/2018/09/05/humans-vs-machines-whos-winning/
Humans vs machines: Who’s winning? -I
Roberto Saracco September 5, 2018 277 Views
https://theconversation.com/digital-diagnosis-intelligent-machines-do-a-better-job-than-humans-53116
Digital diagnosis: intelligent machines do a better job than humans
January 17, 2016 2.17pm EST
https://spectrum.ieee.org/the-human-os/biomedical/diagnostics/ai-diagnostics-move-into-the-clinic
AI Diagnostics Move Into The Clinic
Some algorithms assist specialists; others may take their place
16 Feb 2018 | 16:00 GMT
https://www.newyorker.com/magazine/2017/04/03/ai-versus-md
A.I. Versus M.D. : What happens when diagnosis is automated?
Siddhartha Mukherjee 03Apr2017 Annals of Medicine
In January, 2015, the computer scientist Sebastian Thrun became fascinated by a conundrum in medical diagnostics. Thrun, who grew up in Germany, is lean, with a shaved head and an air of comic exuberance; he looks like some fantastical fusion of Michel Foucault and Mr. Bean. Formerly a professor at Stanford, where he directed the Artificial Intelligence Lab, Thrun had gone off to start Google X, directing work on self-learning robots and driverless cars. But he found himself drawn to learning devices in medicine.
*****************
#] 20Mar2018 search "Ted Berger and hippocampal prosthesis company"
https://spectrum.ieee.org/the-human-os/biomedical/bionics/new-startup-aims-to-commercialize-a-brain-prosthetic-to-improve-memory
16 Aug 2016 | 18:00 GMT
New Startup Aims to Commercialize a Brain Prosthetic to Improve Memory
Kernel wants to build a neural implant based on neuroscientist Ted Berger's memory research
By Eliza Strickland
telepathic rats
*****************
#] 26Feb2018 search "Neural networks and performance measures"
RMSE, NMI, Purity
https://www.frontiersin.org/articles/10.3389/fncom.2014.00043/full
Sirko Straube* and Mario M. Krell, Robotics Group, University of Bremen, Bremen, Germany
10Apr2014 How to evaluate an agent's behavior to infrequent events?—Reliable performance estimation insensitive to class distribution
Front. Comput. Neurosci., 10 April 2014 | https://doi.org/10.3389/fncom.2014.00043
For classification - imbalance, confusion matrices ([positive, negative] versus [positive, negative])
5. Conclusions: Metrics Insensitive to Imbalanced Classes
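The class-imbalance problem in a nutshell, with a toy 2x2 confusion matrix. Balanced accuracy is one example of an imbalance-insensitive metric in the spirit of the paper's conclusions; the 95:5 split and the degenerate classifier are assumptions for illustration.

```python
import numpy as np

def confusion(y_true, y_pred):
    # 2x2 confusion matrix entries: actual [pos, neg] vs predicted.
    tp = int(np.sum((y_true == 1) & (y_pred == 1)))
    fn = int(np.sum((y_true == 1) & (y_pred == 0)))
    fp = int(np.sum((y_true == 0) & (y_pred == 1)))
    tn = int(np.sum((y_true == 0) & (y_pred == 0)))
    return tp, fn, fp, tn

# 95:5 class imbalance; a degenerate classifier that always says "negative".
y_true = np.array([1] * 5 + [0] * 95)
y_pred = np.zeros(100, dtype=int)
tp, fn, fp, tn = confusion(y_true, y_pred)
accuracy = (tp + tn) / 100                          # 0.95 - looks great
balanced = 0.5 * (tp / (tp + fn) + tn / (tn + fp))  # 0.5 - reveals failure
```

Plain accuracy rewards ignoring the rare class entirely; averaging per-class recalls does not.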
https://papers.nips.cc/paper/548-benchmarking-feed-forward-neural-networks-models-and-measures.pdf
Leonard G.C. Harney, Computing Discipline Macquarie University NSW2109 AUSTRALIA
?date? Benchmarking Feed-Forward Neural Networks: Models and Measures
Too esoteric! Mainly for long epoch times...
http://eng.auburn.edu/sites/personal/aesmith/files/publications/journal/PerformanceMeasures.pdf
J.M. Twomey and A.E. Smith, Department of Industrial Engineering, University of Pittsburgh
Performance Measures, Consistency, and Power for Artificial Neural Network Models
Mathl. Comput. Modelling Vol. 21, No. 1/2, pp. 243-258, 1995
Too old RMSE...
*********************************
#] 02Feb2018 Pao, Takefuji: Functional-Link Net Computing: Theory, System Architecture, and Functionalities
14. Yoh-Han Pao, Yoshiyasu Takefuji: Functional-Link Net Computing: Theory, System Architecture, and Functionalities. IEEE Computer 25(5): 76-79 (1992)
***************************
#] 01Feb2018 Regularisation versus ordered derivatives
http://cs231n.github.io/neural-networks-2/
very good overall discussion about setting up NNs and improving results
of course, no mention of ordered derivatives
********************************
#] 18Jan2018 Kenneth O. Stanley Neuroevolution
I posted to Facebook :
https://www.oreilly.com/ideas/neuroevolution-a-different-kind-of-deep-learning
Neuroevolution: A different kind of deep learning
Is this the new BIG THING in AI (actually CI - Computational Intelligence), well beyond Deep Learning neural nets, and much more profound? I think Simone Scardapane posted this; it is important for me as the area targets much of my thinking for my "MindCode" project, which I have not worked on since the mid-to-late 1990s.
Kenneth O. Stanley's article is extremely well done, and mentions many great researchers like Dario Floreano, Andrea Soltoggio, Xin Yao, Risto Miikkulainen, and David Fogel (Blondie24!!).
18Jan2018 NEURO-EVOLUTION - this is MindCode concepts!! from 20 years ago
https://www.oreilly.com/ideas/neuroevolution-a-different-kind-of-deep-learning
Neuroevolution: A different kind of deep learning
The quest to evolve neural networks through evolutionary algorithms.
By Kenneth O. Stanley, July 13, 2017
When I first waded into AI research in the late 1990s, the idea that brains could be evolved inside computers resonated with my sense of adventure. At that time, it was an unusual, even obscure field, but I felt a deep curiosity and affinity. The result has been 20 years of my life thinking about this subject, and a slew of algorithms developed with outstanding colleagues over the years, such as NEAT, HyperNEAT, and novelty search. In this article, I hope to convey some of the excitement of neuroevolution as well as provide insight into its issues, but without the opaque technical jargon of scientific articles. I have also taken, in part, an autobiographical perspective, reflecting my own deep involvement within the field. I hope my story provides a window for a wider audience into the quest to evolve brains within computers.
My co-author Joel Lehman and I wrote the book, Why Greatness Cannot Be Planned: The Myth of the Objective.
In other words, as we crack the puzzle of neuroevolution, we are learning not just about computer algorithms, but about how the world works in deep and fundamental ways.
Mentions :
Dario Floreano - plastic neural networks is influenced by the early works
Andrea Soltoggio - later ideas on neuromodulation, which allows some neurons to modulate the plasticity of others
Xin Yao, Risto Miikkulainen, David Fogel's Blondie24!!
This is dealing with MindCode stuff!!
Key concepts :
indirect coding
novelty search - sometimes better-faster than selecting best candidates
quality diversification :
“quality diversity” and sometimes “illumination algorithms.” This new class of algorithms, generally derived from novelty search, aims not to find a single optimal solution but rather to illuminate a broad cross-section of all the high-quality variations of what is possible for a task, like all the gaits that can be effective for a quadruped robot. One such algorithm, called MAP-Elites (invented by Jean-Baptiste Mouret and Jeff Clune), landed on the cover of Nature recently (in an article by Antoine Cully, Jeff Clune, Danesh Tarapore, and Jean-Baptiste Mouret) for the discovery of just such a large collection of robot gaits, which can be selectively called into action in the event the robot experiences damage.
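A minimal sketch of the novelty-search selection loop that these algorithms derive from: candidates are selected by behavioral novelty (distance from behaviors already seen), not by fitness. The 2-D behavior descriptors, population size, mutation scale, and k are all toy assumptions.

```python
import numpy as np

def novelty(b, archive, k=3):
    # Novelty = mean distance to the k nearest behaviors seen so far
    # (d[0] is the point's own zero distance in the archive, so skip it).
    d = np.sort(np.linalg.norm(archive - b, axis=1))
    return d[1:k + 1].mean()

rng = np.random.default_rng(3)
pop = rng.standard_normal((20, 2))     # each row: a behavior descriptor
archive = pop.copy()
for gen in range(10):
    scores = np.array([novelty(b, archive) for b in pop])
    parents = pop[np.argsort(scores)[-10:]]           # keep the most novel
    pop = np.repeat(parents, 2, axis=0) + 0.1 * rng.standard_normal((20, 2))
    archive = np.vstack([archive, pop])               # remember behaviors
```

Note that no objective function appears anywhere; selection pressure comes entirely from doing something different.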
Open-endedness
Another interesting topic (and a favorite of mine) well suited to neuroevolution is open-endedness, or the idea of evolving increasingly complex and interesting behaviors without end. Many regard evolution on Earth as open-ended, and the prospect of a similar phenomenon occurring on a computer offers its own unique inspiration. One of the great challenges for neuroevolution is to provoke a succession of increasingly complex brains to evolve through a genuinely open-ended process. A vigorous and growing research community is pushing the boundaries of open-ended algorithms, as described here. My feeling is that open-endedness should be regarded as one of the great challenges of computer science, right alongside AI.
Players
For example, Google Brain (an AI lab within Google) has published large-scale experiments encompassing hundreds of GPUs on attempts to evolve the architecture of deep networks. The idea is that neuroevolution might be able to evolve the best structure for a network intended for training with stochastic gradient descent. In fact, the idea of architecture search through neuroevolution is attracting a number of major players in 2016 and 2017, including (in addition to Google) Sentient Technologies, MIT Media Lab, Johns Hopkins, Carnegie Mellon, and the list keeps growing. (See here and here for examples of initial work from this area.)
http://eplex.cs.ucf.edu/neat_software/
Getting involved
If you’re interested in evolving neural networks yourself, the good news is that it’s relatively easy to get started with neuroevolution. Plenty of software is available (see here), and for many people, the basic concept of breeding is intuitive enough to grasp the main ideas without advanced expertise. In fact, neuroevolution has the distinction of many hobbyists running successful experiments from their home computers, as you can see if you search for “neuroevolution” or “NEAT neural” on YouTube. As another example, one of the most popular and elegant software packages for NEAT, called SharpNEAT, was written by Colin Green, an independent software engineer with no official academic affiliation or training in the field.
********************
#] 01Dec2016 wireless optogenetic tools - AWESOME for neuroscience & neural networks!!!
http://spectrum.ieee.org/biomedical/devices/neuroscientists-wirelessly-control-the-brain-of-a-scampering-lab-mouse?bt_alias=eyJ1c2VySWQiOiAiOTU5NzQ0YzUtZTY4MS00OTcyLTkzZWUtMTMyMjAxNWU5NjYyIn0%3D
Neuroscientists Wirelessly Control the Brain of a Scampering Lab Mouse
With wireless optogenetic tools, neuroscientists steer mice around their cages
*************************
#] 15Feb2015 Schmidhuber interview comment that I posted
"... Most current commercial interest is in plain pattern recognition, while this theory is about the next step, namely, making patterns (related to one of my previous answers). Which experiments should a robot’s RL controller, C, conduct to generate data that quickly improves its adaptive, predictive world model, M, which in turn can help to plan ahead? ..."
Does this response lead into the questions of at least a simple form of machine consciousness, and beyond that self definition? For example, if the system goes beyond simply presenting interesting patterns (or conclusions) as part of a specifically assigned or designed task, can it lead into the next step of [prioritizing "hits", notifying "pertinent" people, recommending actions, helping to form teams for discussion]? In an ill-defined, open system like social media, this might involve having the system generalize its initial "marching orders" and perhaps defining new roles and targets? False positives could be a big problem, but could Deep Learning itself help to reduce "false positive advice" at a more abstract level? Measuring what is there is one thing, publishing a note and seeking reactions and actions is a process where some primitive level of consciousness may be necessary? I'm thinking of John Taylor's model of concept - sort of going beyond consciousness to a point where a system begins to understand the implications of its actions and the effect on the external environment.
*********************
#] 24Oct2014 IEEE Spectrum interview of Michael Jordan re: BigData etc
http://spectrum.ieee.org/robotics/artificial-intelligence/machinelearning-maestro-michael-jordan-on-the-delusions-of-big-data-and-other-huge-engineering-efforts/?utm_source=techalert&utm_medium=email&
My response :
"... it seems like every 20 years there is a new wave that involves them [neural nets] ..."
Strange - I had been thinking of the same thing a few months ago after looking up pandemic cycles (flu, malaria, cholera, bubonic plague, but not smallpox) in relation to the ebola outbreaks (for which we'll have to wait 100 years and see), which is a "passive interest" after seeing a remarkable analysis in the early 2000's. Perhaps NN "outbreaks" are more of a generational thing, related somehow to having to wait for "the old dogs to die" (not so funny any more, but ringing more true, now that I am an old dog). Assuming many generations of revolutions before we get anywhere near to a "more solid" understanding of the brain, 200 years to human-like intelligence seems even optimistic barring some form of machine hyper-evolution pushing the humans along. Robert Hecht-Nielsen had suggested a similar number some decades ago, and at WCCI 2014 Don Wunsch suggested something like "... if you expect to see human-like intelligence in our lifetime ..." then you have to assume that human lifetimes will have to be dramatically extended.
******************
#] 28Apr2014 Schmidhuber's Overview of Deep Learning
"... Sometimes we also speak of the depth of an architecture: SL FNNs with fixed topology imply a problem-independent maximal problem depth, typically the number of non-input layers. Similar for certain SL RNNs (Jaeger, 2001; Maass et al., 2002; Jaeger, 2004; Schrauwen et al., 2007) with fixed weights for all connections except those to output units—their maximal problem depth is 1, because only the final links in the corresponding CAPs are modifiable. In general, however, RNNs may solve problems of potentially unlimited depth. ..."
>> Howell : Is this correct? Does credit assignment REQUIRE weight changes? Schmidhuber specifies that "modifiable weights" are the key point.
End of Section 3 :
"... It is possible to model and replace such unmodifiable environmental PCCs through a part of the NN that has already learned to predict (through some of its units) input events from former input events and actions (Sec. 6.1). Its weights are frozen, but can help to assign credit to other, still modifiable weights used to compute actions (Sec. 6.1). This approach may lead to very deep CAPs though. ..."
>> Howell : This provides an escape, leading to DNA-specified, evolved RNNs (MindCode)
*********************
NEUNET-D-13-00401 p He etal - Anti-Windup for time-varying delayed CNNs subject to Input Saturation
http://web.mit.edu/braatzgroup/33_A_tutorial_on_linear_and_bilinear_matrix_inequalities.pdf
Theorem provers :
1. http://en.wikipedia.org/wiki/LCF_theorem_prover
2. Successors include HOL (Higher Order Logic) - http://en.wikipedia.org/wiki/HOL_(proof_assistant)
3. and Isabelle. http://en.wikipedia.org/wiki/Isabelle_(proof_assistant)
Isabelle theorem prover is an interactive theorem prover, successor of the Higher Order Logic (HOL) theorem prover. It is an LCF-style theorem prover (written in Standard ML), so it is based on a small logical core guaranteeing logical correctness.
http://isabelle.in.tum.de/
OCaml - seems like QNial successor
LMI software :
1. Gahinet and Nemirovskii wrote a software package called LMI-Lab [59] which evolved into Matlab's LMI Control Toolbox [60].
2. Vandenberghe and Boyd produced the code SP [23], which is an implementation of Nesterov and Todd's primal-dual potential reduction method for semidefinite programming (an interior point algorithm). SP can be called from within Matlab [94].
3. Boyd and Wu extended the usefulness of the SP program by writing SDPSOL [25,26], a parser/solver that calls SP. The advantages of SDPSOL are that the problem can be specified in a high-level language, and SDPSOL can run without Matlab. SDPSOL can, in addition to linear objective functions, handle trace and convex determinant objective functions.
4. LMITOOL is another software package for solving LMI problems that uses the SP solver for its computations [50]. LMITOOL interfaces with Matlab, and there is an associated graphical user interface known as TKLMITOOL [49]. The Induced-Norm Control Toolbox [17] is a Matlab toolbox for robust and optimal control based on LMITOOL.
5. The solvers that call SP are the easiest to use and can handle bigger problems than the other software. As of the publication of this tutorial, none of the above LMI solvers exploit matrix sparsity to a high degree.
enddoc