Subject: Epigenetics, reincarnation, and MindCode
From: "Bill Howell. Hussar. Alberta. Canada" <>
Date: Tue, 19 Jun 2018 15:22:04 -0600
To: Sarah "Howell." Director-Actor-Freelance "E-Journalist." Panama. Colombia
Cc:

I'm waiting around for the last stages of cleanup following Hussar SummerDaze, so here's some background information.  As I mentioned, I seem to remember that there has been quite a bit of fiction around these themes, but I can't think of specific movies and books.

Reincarnation -  My old blog posting on this subject is below, written up as a joke to try on others.  My posting is duplicated, with one copy having a curiously incomplete sentence, but my website has been [broken, messed up] over time and I haven't taken the week or two to fix it up.  More recently, experiments with [bacteria, microbes, parasites] or whatever have suggested that epigenetic traces persist in lower lifeforms for as many as 10 generations.  Be very careful about [genetics, epi-genetics], as we clearly do NOT understand these themes well - in spite of the great research and advances in the area.  ?John Mattick's? group at the University of Queensland in Australia (I visited their lab when visiting Catherine in Australia a long time ago) has the war cry "...   The objective of our group is to destroy the central dogma of all biology - genetics.   ..."  (or something like that).  I have not tried to keep up with the fascinating advances - I'm spread way too thin on themes.

Old paper :  My [genetic, epigenetic] key concepts, at an early stage, were published as a conference paper (embarrassingly - I just didn't have the time to do the math while I was working, so this is mostly blah-blah) :
  • William Neil Howell 2006 "Genetic specification of recurrent neural networks: Draft - with errors and incomplete, not peer reviewed, unpublished" http://www.billhowell.ca/Neural%20nets/Howell%202006%20-%20Genetic%20specification%20of%20neural%20networks,%20draft%20concepts%20and%20implications.pdf
  • William Neil Howell 2006 "Genetic specification of recurrent neural networks: Initial thoughts", Proceedings of WCCI 2006, World Congress on Computational Intelligence. Vancouver, paper#2074, pp 9370-9379, 16-21 July 2006
  • Bill Howell 2006, "Genetic Specification of Recurrent Neural Networks: Initial Thoughts", Presentation in the Special Session on Neural Network Models and Applications in Bioinformatics, Neuroscience, and Neuro-Genetics, World Congress on Computational Intelligence 2006, Vancouver, 21-27Jul06  (available from author)
The "Draft" version is best, although [rough, with spelling grammatical errors]. 


MindCode -  my long-neglected, high-priority "MindCode" project isn't just about the mind; it also refers to changes in concepts surrounding neural network architectures etc.  Hopefully, in 2019, when I finish (or abandon) my physics project on Bill Lucas's "Universal Force", I will be able to focus on this!

I had forgotten that I posted the [VERY incomplete, draft, point-form document] in 2015, when I spent a few days on the project before being buried in other projects and in conference work.  The blog paragraph and link are provided below.  I'm really most interested in the [math, NN architectures, etc] AND the biology.  I'm not interested in the blah-blah, and although it's fun (once in a while) I don't pay serious attention to the sci-fi (stories and reality are only glancingly connected).


"Neuro-evolution" of Risto Miikkulainen  & colleagues -   One of the reasons that I DIDN'T work on MindCode from the late 1990's to 2015 is that I always expected more formal, mathematically interesting work to come out in this area.  However, the process of science [fashions -> cults -> religions], and the distractions of hype cycles (currently Deep Learning Neural Nets) and the great discouragements of deep research by simplistic concepts like "universal function approximation" have probably retarded progress in many areas like this.  My guess is that "special people" (most likely NOT scientists who will pick up on amateur or sci-fi thinking later) are required to drive conceptual "revolutions".  (This is NOT like Thomas Kuhn's Structure of Scientific Revolutions", but it can be thought as a very different compliment to that). 

However, Miikkulainen's work IS going in this direction (and has been for a long time), which is very exciting for me.  Doubtless there are other [amateurs, scientists] doing similar work.  Kenneth O. Stanley's article below is extremely powerful!!!  I forget most of the details (some points I wrote down are posted below), but one key point is that this work is FUNDAMENTALLY changing our understanding of the process of evolution!!   That shouldn't be surprising, knowing your Dad, who has long zeroed in on the "law of survival of the fittest" as an important concept but far from an adequate description of evolution (you should have spotted that, or at least I've felt that way forever...).
Kenneth O. Stanley 13Jul2017 "Neuroevolution: A different kind of deep learning. The quest to evolve neural networks through evolutionary algorithms."  https://www.oreilly.com/ideas/neuroevolution-a-different-kind-of-deep-learning
Another paper that I saved with a different perspective :
Gene E. Robinson, Andrew B. Barron 07Apr2017 "Epigenetics and the evolution of instincts", Science, Vol. 356, Issue 6333, pp. 26-27  https://www.life.illinois.edu/robinson/storage/pdfs/EpiInstinct.pdf

Risto Miikkulainen. INNS BoG & Diversity. UTexas at Austin. USA - he was on the www.INNS.org Board of Governors at the same time as me, but stupidly I didn't take the time to discuss his work with him!  (Actually - that's the same as for almost all other BoG members and the thousands of scientists at www.ijcnn.org and http://www.ecomp.poli.br/~wcci2018/, this year's WCCI, which I cannot afford to attend.)


Hopefully this stuff "will help a bit ..."

Love,
Dad


********************************
http://www.billhowell.ca/Howell%20-%20blog.html
http://www.billhowell.ca/Crazy%20ideas/A%20Riddle%20by,%20for,%20of,%20on,%20and%20with%20the%20Mind.pdf

27Apr2015 Howell's Reincarnation : Speculations on a possible mechanistic context - From recent discussions, my old concept related to a possible explanation of, and a mechanistic context for, reincarnation has again surfaced. A short "joke" on this theme, "A Riddle by, for, of, on, and with the Mind", was posted to my website circa 31Jan2007, and emails (hard to track in my old drives, possibly lost) relate to an earlier Toastmasters presentation or discussion around 26Jun2006.
While the linked document is presented as a joke that you might want to pull on your friends, it actually is an interesting concept that I may some day address, given that I am again trying to re-start work on my old "MindCode" project that was its inspiration. MindCode ("Given that computer code is used to program computers, then MindCode ...") deals with DNA and epi-DNA (I am careful to avoid use of the over-extended and incorrect term "genetics") and how that might be used for Artificial Neural Networks (ANNs), with a little wistful reference to real neuroscience (beyond the holy grail in a sense, but this is the great foundation of ANNs). Starting from the old side joke that resulted, I hope some day to elaborate "speculations on a possible mechanistic context" for reincarnation.
No, I'm not a "believer" in re-incarnation, but this is just too much fun to skip over, and in a limited sense it is possibly quite real - think of instinct and heredity as a starting point.
(re-posted 27Apr2015, background document from 31Jan2007)


********************************
My recent blog posting :
http://www.billhowell.ca/Neural%20nets/Howell%20150225%20-%20MindCode%20Manifesto.odt

27Apr2015 Howell - MindCode Manifesto.odt This document is a rush point-form job to re-start work on my old "MindCode" project. MindCode ("Given that computer code is used to program computers, then MindCode ...") deals with DNA and epi-DNA (I am careful to avoid use of the over-extended and incorrect term "genetics") and how that might be used for Artificial Neural Networks (ANNs), with a little wistful reference to real neuroscience (beyond the holy grail in a sense, but this is the great foundation of ANNs).

Starting from the old side joke that resulted, I hope some day to elaborate "speculations on a possible mechanistic context" for reincarnation. No, I'm not a "believer" in re-incarnation, but this is just too much fun to skip over, and in a limited sense it is possibly quite real - think of instinct and heredity as a starting point.

27May2018 Note : I guess I never finished this posting, nor was I able to get back to the MindCode project due to other pressing priorities (until at least 31Dec2018 I will be working on my fundamental theoretical physics project, Bill Lucas's "Universal Force"). It will be at least 6 to 9 months before I can get back to it.
(re-posted 27Apr2015, background document from 31Jan2007, 27May2018 fixed this posting)


********************************
18Jan2018 I posted to Facebook :
https://www.oreilly.com/ideas/neuroevolution-a-different-kind-of-deep-learning
Neuroevolution: A different kind of deep learning
        Is this the new BIG THING in AI (actually CI - Computational Intelligence), well beyond Deep Learning neural nets, and much more profound? I think Simone Scardapane posted this; it's important for me, as the area targets much of my thinking for my "MindCode" project that I haven't worked on since the mid-to-late 1990s.
Kenneth O. Stanley's article is extremely well done, and mentions many great researchers like Dario Floreano, Andrea Soltoggio, Xin Yao, Risto Miikkulainen, and David Fogel (Blondie24!!).
18Jan2018 NEURO-EVOLUTION - this is MindCode concepts!! from 20 years ago
https://www.oreilly.com/ideas/neuroevolution-a-different-kind-of-deep-learning
Neuroevolution: A different kind of deep learning

The quest to evolve neural networks through evolutionary algorithms.
By Kenneth O. Stanley, July 13, 2017
        When I first waded into AI research in the late 1990s, the idea that brains could be evolved inside computers resonated with my sense of adventure. At that time, it was an unusual, even obscure field, but I felt a deep curiosity and affinity. The result has been 20 years of my life thinking about this subject, and a slew of algorithms developed with outstanding colleagues over the years, such as NEAT, HyperNEAT, and novelty search. In this article, I hope to convey some of the excitement of neuroevolution as well as provide insight into its issues, but without the opaque technical jargon of scientific articles. I have also taken, in part, an autobiographical perspective, reflecting my own deep involvement within the field. I hope my story provides a window for a wider audience into the quest to evolve brains within computers.

My co-author Joel Lehman and I wrote the book, Why Greatness Cannot Be Planned: The Myth of the Objective.
In other words, as we crack the puzzle of neuroevolution, we are learning not just about computer algorithms, but about how the world works in deep and fundamental ways.

Mentions (quotes below as well):
Dario Floreano - early works on plastic neural networks
Andrea Soltoggio - later ideas on neuromodulation, which allows some neurons to modulate the plasticity of others
Xin Yao, Risto Miikkulainen, David Fogel's Blondie24!!
This is dealing with MindCode stuff!!
Key concepts :
indirect coding
novelty search - sometimes better/faster than selecting the best candidates
quality diversification :
        “quality diversity” and sometimes “illumination algorithms.” This new class of algorithms, generally derived from novelty search, aims not to find a single optimal solution but rather to illuminate a broad cross-section of all the high-quality variations of what is possible for a task, like all the gaits that can be effective for a quadruped robot. One such algorithm, called MAP-Elites (invented by Jean-Baptiste Mouret and Jeff Clune), landed on the cover of Nature recently (in an article by Antoine Cully, Jeff Clune, Danesh Tarapore, and Jean-Baptiste Mouret) for the discovery of just such a large collection of robot gaits, which can be selectively called into action in the event the robot experiences damage.
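The MAP-Elites idea above is simple enough to sketch in a few lines. This is my own toy Python version (NOT Mouret & Clune's code; the fitness and descriptor functions are made-up stand-ins): keep the best solution found in each cell of a discretized behavior space, and spend most of the time mutating existing elites.

```python
import random

def map_elites(evaluate, describe, dim=2, iters=2000, bins=10, seed=0):
    """Minimal MAP-Elites: one elite per cell of a discretized
    behavior space, improved by mutating existing elites."""
    rng = random.Random(seed)
    archive = {}  # cell index (tuple) -> (fitness, genome)

    def cell(desc):
        # Discretize each descriptor (assumed in [0, 1]) into `bins` bins.
        return tuple(min(int(d * bins), bins - 1) for d in desc)

    for _ in range(iters):
        if archive and rng.random() < 0.9:
            # Usually: mutate a randomly chosen elite.
            _, parent = rng.choice(list(archive.values()))
            genome = [g + rng.gauss(0, 0.1) for g in parent]
        else:
            # Occasionally: a fresh random genome, to keep exploring.
            genome = [rng.uniform(-1, 1) for _ in range(dim)]
        f, c = evaluate(genome), cell(describe(genome))
        if c not in archive or f > archive[c][0]:
            archive[c] = (f, genome)  # new cell, or improved elite
    return archive

# Stand-in problem: fitness rewards genomes near the origin, while the
# descriptor spreads genomes across [0,1]^2 so different cells fill up.
fit = lambda g: -sum(x * x for x in g)
desc = lambda g: [max(0.0, min(1.0, (x + 1) / 2)) for x in g]
archive = map_elites(fit, desc)
print(len(archive), "cells illuminated")
```

The point is the archive itself: instead of one best answer, you end up with a whole map of good-but-different behaviors.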
Open-endedness
        Another interesting topic (and a favorite of mine) well suited to neuroevolution is open-endedness, or the idea of evolving increasingly complex and interesting behaviors without end. Many regard evolution on Earth as open-ended, and the prospect of a similar phenomenon occurring on a computer offers its own unique inspiration. One of the great challenges for neuroevolution is to provoke a succession of increasingly complex brains to evolve through a genuinely open-ended process. A vigorous and growing research community is pushing the boundaries of open-ended algorithms, as described here. My feeling is that open-endedness should be regarded as one of the great challenges of computer science, right alongside AI.
Players
        For example, Google Brain (an AI lab within Google) has published large-scale experiments encompassing hundreds of GPUs on attempts to evolve the architecture of deep networks. The idea is that neuroevolution might be able to evolve the best structure for a network intended for training with stochastic gradient descent. In fact, the idea of architecture search through neuroevolution is attracting a number of major players in 2016 and 2017, including (in addition to Google) Sentient Technologies, MIT Media Lab, Johns Hopkins, Carnegie Mellon, and the list keeps growing. (See here and here for examples of initial work from this area.)
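At its smallest scale, "evolving the architecture" just means mutating network structure instead of (or as well as) weights. Here is a toy sketch (my own illustration, not code from any of these groups) of the NEAT-style add-node mutation, which grows a network by splitting an existing connection:

```python
import random

def add_node_mutation(connections, next_node, rng):
    """NEAT-style structural mutation: split a random enabled connection
    (i -> j, weight w) by inserting a new node k, giving i -> k (weight
    1.0) and k -> j (the old weight). The old link is disabled, not
    deleted, so the mutation barely perturbs the network's behavior."""
    enabled = [c for c in connections if c["enabled"]]
    if not enabled:
        return connections, next_node
    old = rng.choice(enabled)
    old["enabled"] = False
    k = next_node
    connections.append({"src": old["src"], "dst": k, "w": 1.0, "enabled": True})
    connections.append({"src": k, "dst": old["dst"], "w": old["w"], "enabled": True})
    return connections, next_node + 1

# Start from a minimal net: nodes 0,1 are inputs, node 2 is the output.
rng = random.Random(0)
conns = [{"src": 0, "dst": 2, "w": 0.5, "enabled": True},
         {"src": 1, "dst": 2, "w": -0.3, "enabled": True}]
conns, nxt = add_node_mutation(conns, next_node=3, rng=rng)
print(len(conns), "connections,", nxt, "nodes allocated")
```

Repeated over generations, with selection, this is how tiny networks grow into complex ones.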
http://eplex.cs.ucf.edu/neat_software/
Getting involved
        If you’re interested in evolving neural networks yourself, the good news is that it’s relatively easy to get started with neuroevolution. Plenty of software is available (see here), and for many people, the basic concept of breeding is intuitive enough to grasp the main ideas without advanced expertise. In fact, neuroevolution has the distinction of many hobbyists running successful experiments from their home computers, as you can see if you search for “neuroevolution” or “NEAT neural” on YouTube. As another example, one of the most popular and elegant software packages for NEAT, called SharpNEAT, was written by Colin Green, an independent software engineer with no official academic affiliation or training in the field.
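To show just how simple a hobbyist experiment can be, here is a bare-bones (1+1) evolutionary loop in Python - a toy sketch of my own, far simpler than NEAT (fixed 2-2-1 topology, weights only), that hill-climbs nine weights toward XOR:

```python
import math, random

def forward(w, x):
    # A 2-2-1 network with tanh units; w packs all 9 numbers:
    # w[0:3], w[3:6] = the two hidden units' weights and bias,
    # w[6:9] = the output unit's weights and bias.
    h1 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h2 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    return math.tanh(w[6] * h1 + w[7] * h2 + w[8])

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def fitness(w):
    # Negative squared error over the four XOR cases (0 is perfect).
    return -sum((forward(w, x) - y) ** 2 for x, y in XOR)

rng = random.Random(1)
best = [rng.uniform(-1, 1) for _ in range(9)]
best_f = fitness(best)
for gen in range(3000):
    child = [g + rng.gauss(0, 0.2) for g in best]  # mutate every weight
    f = fitness(child)
    if f > best_f:                                 # keep the better of the two
        best, best_f = child, f

print("final fitness:", round(best_f, 3))
```

No gradients, no backpropagation - just mutation and selection, which is why hobbyists can run these experiments from home computers.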


endemail