Subject: RE: StepEncog
From: "Bill Howell. Hussar. Alberta. Canada" <>
Date: Fri, 31 May 2019 17:23:26 -0600
To: "John Hilton O'Brien. Co-Owner. Hobbs Hobbies. Strathmore. Alberta"
Cc:

StepEncog  -  I don't think you'll realize how much, but I really appreciate your detailed comments.  They are valuable to me precisely because you are from outside the Artificial Neural Network community, and you have put thought into this.   I rarely get responses like this from scientists - responses of a critical [insight, viewpoint].

The value is that your perspective is VERY different, which I retain in the context of "Multiple Conflicting Hypotheses".  I have no interest in "changing your mind", as your way of thinking is intrinsically valuable, and while [blending, hybridization] is normal in CI, it is important to retain independent streams of concepts.

You are very astute at picking out [flaws, limitations], and seem to have a good sense of how primitive Computational Intelligence (CI) is, in spite of its oft-wickedly [complex, rich, creative] mathematical basis.  Mathematics is like the Greek sirens - so alluring as to be dangerous?
  • Are you expecting too much?   The very primitive, low-level state of much of the work in this area is in stark contrast to the [hype, media, science fiction].   There are impressive results, but understanding their limitations doesn't seem to interest many people, whereas you immediately pointed that out.
  • Sigmund Freud has had a HUGE impact on NNs, although I doubt many experts (particularly of the last generation or two) have any idea about this.   Plato is "never" mentioned, but Freud honestly attributed origins of some of his key ideas to the Greeks (Plato?), so maybe Plato is just hidden?
  • Other perspectives - Even experts in CI tend to have very specialised views of the area, depending on their own favourite approaches.  Examples :
    • information theoretics - just like [chemical, mechanical] engineering thermodynamics!
    • biological -  this is a stretch, but it is a goal for many to understand how biological [neurons, brains] accomplish what they do
    • learning theories (supervised, unsupervised)
    • classification -  good example of how a small area of application is actually the only thing they see in CI
    • black boxes - many simply don't care, they just want software to get good results, and it has to be fast and simple to use.
    • etc, etc

Plato's solids -> Philosopher's stone -> crystal structures  -> alternative theory to quantum mechanics for [atoms, molecules]
I was at Sunridge Mall yesterday buying a pair of running shoes when I saw ~4 or 5 3D printers at a games store :
Alchemy 403-457-5244 (hard to read phone #!)
They produce snap-together Dungeons and Dragons components on site (no inventory).  I don't play games, but the idea of producing 3D molecules based on a theory that could replace quantum mechanics (for some things) might be of interest, although my budget puts this into the future.  The trick would be to find the designs online.
https://www.design-point.com/blog/3d-software/cad/platonic-solids-modeling-a-dodecahedron-using-3d-sketches/

Geometrical structure of the atom :
/media/bill/VIDEOS/Electric Universe/EU2017 Future Science conf/Kaal, Edwin 170820 The proton-electron atom - a proposal for a structured atomic model.ogv
Note : the Electric Universe 2017 presentations are available online ($30 to access the whole conference, or something like that), but I think some older presentations are posted freely.

In closing :  I promise not to bombard you with huge piles more of stuff.   But this was interesting to me.

By the way, I lost my last email to you, possibly due to a regular series of crashes with my newly upgraded operating system (Linux Mint Debian Edition 3).  That's very worrisome, as I don't have much of a memory, and I need past emails for conference work etc!  Oh well, comes with the territory...

Cheers,


Mr. Bill Howell
1-587-707-2027     www.BillHowell.ca
P.O. Box 299, Hussar, Alberta, T0J1S0
member - International Neural Network Society (INNS), IEEE Computational Intelligence Society (IEEE-CIS),
IJCNN2019 Budapest, Authors' Guide, Sponsors & Exhibits Chair, https://www.ijcnn.org/organizing-committee
WCCI2020 Glasgow, Publicity Chair mass emails, http://wcci2020.org/
Retired: Science Research Manager (SE-REM-01) at Natural Resources Canada, CanmetMINING, Ottawa


Note - my comments in [blue, italics] font below are too long-winded, perhaps of more interest to me than to you!

-------- Forwarded Message --------
Subject: StepEncog
Date: Thu, 30 May 2019 14:59:17 -0600
From: John Hilton-O'Brien <>
To:


Hi, Bill;

I found this paper interesting - but I also see why it might generate relatively little interest.

For me personally, it was an interesting first view into where AI studies currently are.
To me, Oota et al's work falls between Computational Intelligence (CI) and Artificial Intelligence (AI), which makes it interesting to me.   The work doesn't even seem to be widely known in the community, other than perhaps as yet another in a series of studies of fMRI data by those in that field.

[neuroscience, neurology, CI] are still extremely immature with regard to [brain, mind] function. 
  • Advantage - [quantitative, testable, reproducible] models for system [identification, prediction, control, optimisation] that go far beyond the capability of words and logic. Drawing "general boxes" around concept areas, and drawing conclusions on that basis, falls [far, far] short of what these systems do and of the light that they shine on challenges.
  • Disadvantage - In most respects, nowhere close to describing actual [biology, brain], and frankly at a very primitive conceptual level.  The approach might be seen as "ground up" with perhaps "top down" direction.
  • Example :  consciousness.  John Taylor's work was going in the direction of "computable consciousness" (?), but at so primitive a level it doesn't match what consciousness experts seem to enjoy discussing.
Clarification :  I still look at [Neural Networks, Evolutionary Computation, Fuzzy Systems, etc] as "CI - Computational Intelligence", not "AI - Artificial Intelligence", although they are grouped together now, especially as the hype is now with CI (quite the change in mindset by the AI guys!!  Marvin Minsky of MIT was a "godfather" of that area, or at least self-proclaimed, along with his followers).
But when I consider it from the perspective of a fellow researcher, already deep in the field, I think it is disappointing.  Nobody’s methods are called into question.  No concepts are being tested out: it is making use of existing modalities, which it applies more broadly to emulate a larger part of the brain. 
Perhaps the assemblage and adaptation of [concepts, tools] will be of central interest, but that is not what I see from the reactions of others at this time.   What strikes me is that I am not aware of anything that can touch their results, and though I don't normally look at linguistics and reasoning in the traditional sense, this kind of bridges two disparate worlds.

You might be surprised at the "advances" in CI - often refinements or adaptations that give marginally better test results on standard (large) datasets.  It's highly competitive, and often marginal improvements are enough to make results much more useful in application.

Disappointingly, the authors do not actually give a complete picture of brain activity.  No attempt is made to cover reasoning activity. 
Correct -  they are very focused and limited at this stage.  Furthermore, I suspect that very different approaches may be needed for each different type of brain processing.  Even for the same type of [function, processing], there is a "No free lunch" theorem that is taken quite seriously: no [technique, math, approach] is best for all cases.  And of course, best-performing toolsets for [vision, sound, language, control] are very different.
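
For reference, the usual formal statement of "No free lunch" (Wolpert & Macready, 1997) is that, averaged over all possible cost functions f, any two algorithms a1 and a2 see exactly the same distribution of observed performance.  In their notation, where $d^y_m$ is the sequence of $m$ cost values the algorithm has sampled so far:

$$\sum_{f} P(d^y_m \mid f, m, a_1) \;=\; \sum_{f} P(d^y_m \mid f, m, a_2)$$

So any claim that one method is "better" only makes sense relative to a restricted class of problems, never across the board.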

Deep Learning Neural Networks might be a bit more general - but maybe this is superficial, as that's kind of a general concept anyway.  Here's the description of a panel session for this summer's www.ijcnn.org :

"...   Panel 3: Deep Learning: Hype or Hallelujah?
Panel Chair: Vladimir Cherkassky, University of Minnesota, USA
Panelists: in progress
Abstract:
In the last 3-5 years there has been tremendous interest in the so-called Deep Learning Networks (DLN). Unfortunately, there is little theoretical understanding of DLNs, and many claims about their superior capabilities often represent technical marketing. There are 3 main types of arguments made by supporters of DLNs: (1) automatic feature selection by DLNs; (2) biological flavor of DLN learning; (3) their competitive generalization performance on several large real-life application data sets, such as image recognition, etc.  One may adopt a more cautious and skeptical viewpoint about DLNs, arguing that:
There is no theoretical reason for DLNs to perform better than other methods. So their superior performance (on some application data) is simply due to a good match between the statistical characteristics of the data at hand and the DLN parameterization. All existing empirical results using DLNs on large data sets effectively implement the Empirical Risk Minimization (ERM) inductive principle (under the VC-theoretical framework). In spite of all the hype and publicity, there have been no systematic empirical comparison studies using synthetic data sets (under a small-sample-size setting). Claims about biological motivation behind DL are rather naive (especially since such claims are made by computer scientists and engineers, not neuroscientists). The panel will present opposing views on DL, followed by questions from the audience.  The panel starts with a critical view by the panel chair, continues with responses from panelists, some follow-up questions, and questions from the audience.   ..."

By the way, I'm a big fan of the panel chair Vladimir Cherkassky, and of his inspiration, Vladimir Vapnik.
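
In case the jargon is opaque : "Empirical Risk Minimization" just means picking, from some family of models $\mathcal{F}$, the one that minimizes the average loss $L$ over the $m$ training examples:

$$f^{*} = \arg\min_{f \in \mathcal{F}} \; \frac{1}{m} \sum_{i=1}^{m} L\big(f(x_i), y_i\big)$$

Vapnik's VC theory is the framework for asking when that empirical minimum can be expected to generalize to unseen data - which is the crux of Cherkassky's skepticism above.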

The idea that perception involves the processing and combination of sensory stimuli was enunciated by Aristotle 24 centuries ago, and had the general acceptance of every scholar who paid attention to the study of people.  This study doesn't go into senses such as smell, taste, and touch, either, sticking with "safe" subjects already covered by other researchers. This isn't groundbreaking: it's the work of some bright grad students, not someone accepted as Doctoris in the sense that a PhD implies.
You are right - this is an incremental conference paper, not a thesis or one of the very rare [revolutionary, creative] breakthroughs.  There are vastly more PhDs than the latter, probably by at least a factor of 100,000!
Here are some ways that the paper might be improved, or future research might be done.

1. Is an established model actually inferior in predictive ability?  That claim and its demonstration will generate controversy - the inevitable counter-studies and other responses that it generates are exactly what the writers need to get a career boost.
Yes -  and that is normal in Computational Intelligence, where scientists are often trying to break through the "performance barriers" of previous work, either with a [same, similar, different] conceptual basis or instrumentation.   A HUGE issue is trying to handle vastly larger amounts of data to get the breakthroughs, and that is NOT usually a simple matter of scale-up.  It often requires fundamentally new concepts and adaptations. 

Merely increasing the size and speed of computing is NOT central to progress; the conceptual advances are more important.  In the current paper by Oota et al., Deep Learning is not their invention - it almost IS the basis of the "AI breakthroughs" currently being discussed in the media across a broad range of applications.

The references ([10] and on) dealing with analysis of individual words based on fMRI data are fairly recent, and the first paper by Oota et al that I saw was in 2017 (I think they had earlier papers).   As they state on p1 : "...   Our main contribution in this paper is a unified deep encoding model which can model all of the fMRI voxel activations for multiple users for  given stimulus, and predict the activations on unseen stimulus. This can be seen as a departure from traditional method  of using multiregression methods on selected subset of voxels.   ..."  This isn't a little departure, even if, as I mention elsewhere, Oota et al are using the Deep Learning NN approach, which has been developed and advanced by many others.   They have also implemented several specific adaptations of their own from a vast small world (Section II.B) of possibilities. 
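
To make "unified encoding model" concrete, here is a minimal sketch of the general idea: one network maps a stimulus embedding to ALL voxel activations at once, rather than fitting a separate regression per voxel.  This is NOT the authors' code - the architecture, sizes, and synthetic data below are my own illustrative assumptions (their actual model, training procedure, and data are far more elaborate).

```python
# Minimal sketch of a "unified" encoding model: one shared network predicts
# every fMRI voxel at once, instead of one regression per (subset of) voxels.
# All sizes and the synthetic data are illustrative assumptions, not Oota et al's.
import numpy as np

rng = np.random.default_rng(0)
n_stimuli, embed_dim, n_voxels, hidden = 200, 50, 1000, 64

# Stand-ins for word/sentence embeddings (X) and fMRI voxel responses (Y).
X = rng.standard_normal((n_stimuli, embed_dim))
true_map = rng.standard_normal((embed_dim, n_voxels)) / np.sqrt(embed_dim)
Y = X @ true_map + 0.1 * rng.standard_normal((n_stimuli, n_voxels))

# One hidden layer shared across all voxels, then a linear readout per voxel.
W1 = 0.1 * rng.standard_normal((embed_dim, hidden)); b1 = np.zeros(hidden)
W2 = 0.1 * rng.standard_normal((hidden, n_voxels)); b2 = np.zeros(n_voxels)

lr = 1e-2
for epoch in range(300):
    H = np.tanh(X @ W1 + b1)      # shared representation of the stimulus
    pred = H @ W2 + b2            # predicted activation for every voxel
    err = pred - Y
    # Backpropagate mean-squared error through both layers.
    gW2 = H.T @ err / n_stimuli
    gb2 = err.mean(axis=0)
    dH = (err @ W2.T) * (1.0 - H**2)
    gW1 = X.T @ dH / n_stimuli
    gb1 = dH.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

print("training MSE:", float((err**2).mean()))
```

The point of the shared hidden layer is that information is pooled across voxels (and, in the paper's framing, across subjects), whereas per-voxel multiregression learns each output in isolation.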

2. Previous models appear to only have covered words and images.  What about smell, taste, and touch?  Are these also predictable?
There has been a fair amount of work on artificial [nose, taste], whereby actual [instrumentation, computational systems] have been built to try to replicate human-like classification using "artificial sensory systems", with some success.  A particularly [fun, dangerous] project was a wine-tasting system for French wines.  Talk about looking for trouble! <grin>  Chaos theory was applied to EEG-based olfactory information with some success, and was expanded on by [Walter Freeman, Robert Kozma], who proposed that chaos is a basis for understanding whole-brain processing.  Can't say that's strongly supported by the broader community.

I am only aware of both of the above from [neural network, evolutionary computing] papers, and I have not looked more broadly.

Oota et al don't seem to be emphasizing [smell, touch, taste, motor] information at this stage, and this is perhaps understandable in that they still have a long way to go with the [text, image] data and different types of experiments.   But if chaotic dynamics are critical to olfactory (and other) processing, fMRI data might not be at all adequate - its temporal resolution is very low (EEG was used for the chaos theory work).   But who knows how far this could be taken?   For one thing, Convolutional Neural Networks (CNNs) are a very primitive Deep Learning architecture, albeit good at what they do well (intentionally circular here!). 

3.  Could this model predict the brain activity of other species, such as dogs?  Aristotle suggests that their processing must be similar, and the learning systems in use here suggest that it could measure such activity.
I don't know, but I suspect that it could, except that funding support would be biased toward human studies with fMRI information.  However, rat studies in particular are important, given the relative [ease, ?ethical acceptability?] of such studies, and perhaps mostly where actual hardware implants are developed.   Here the best example I can think of is a hippocampal prosthesis (originally navigation, but the current project targets short-term to long-term memory conversion, with Alzheimer's as the target), which apparently has undergone human trials.  But I am a little unsure of the details, and I don't have great confidence in those claims as I haven't seen the actual papers. 

4.  If the model COULD model the brain activity of animals analyzing images, and showed that it was remarkably different, it might controvert the work of a bunch of contemporary biologists and philosophers such as Martha Nussbaum, and even have consequence for ethics.  It could potentially cause ripples throughout academia, and generate a ton of replies from other disciplines.
We'll have to wait and see.

The only comment that I have is that a completely different stream of research by Robert Hecht-Nielsen, "Confabulation theory", postulated a concept for mammalian cognition that was VERY specific to brain structures (thalamo-cortical loops were a focus).   That was the best system that I have run into for automated generation of written text responses in a simple "third plausible sentence" test scenario.  I did a non-scientific quick survey, and only one person of about 15-20 was able to tell which respondent was the computer.  But Hecht-Nielsen did not relate his concept to fMRI data; it was a computational model.

5.  It would be useful to see what someone learning something NEW looks like, rather than recognizing something already known.  Would it be different than simply processing an image?  If so, it's noteworthy.
An earlier paper by Oota and colleagues dealt with predicting "where" a new word would be stored in the brain of a test subject.  "Where" is very fuzzy, as fMRI isn't very exact in terms of location, and that is a limit of the techniques they are using.  However, the fMRI data was still image-sequence (time-dependent) based; the results do not arise from a static image.
6. A big question here is whether kinesthetic learning is substantially different than other forms, since it involves a multi-step process.  Can they model it?  Is it materially different than simply analyzing an image?
I had to look up the definition (Wikipedia, to which I donate a small sum annually) "...   Kinesthetic learning (American English), kinaesthetic learning (British English), or tactile learning is a learning style in which learning takes place by the students carrying out physical activities, rather than listening to a lecture or watching demonstrations. As cited by Favre (2009), Dunn and Dunn define kinesthetic learners as students who require whole-body movement to process new and difficult information.   ..."

Great question!   Within the artificial neural network area (drawing on other fields), there has long been a strong perception that cognition etc are tightly tied to the motor (action) system!    That isn't used in the Oota paper, and I doubt that they would dive into that immediately.  Others in robotics and prosthetics have probably long worked on that.


Anyhow, thanks very much for sharing this with me!

-John HOB
587-229-9318