#] *********************
#] "$d_web"'Neural nets/0_Neural Nets notes public.txt' # www.BillHowell.ca
<"$d_web"'Neural nets/0_Neural Nets notes public TableOfContents.txt'
# 24************************24
#] +-----+
#] ToReads : "$d_web"'Neural nets/References/' :
    Schmidhuber 26Mar2022 Neural nets learn to program neural nets with with fast weights (1991).html
    Schmidhuber 29Dec2022 Annotated history of modern AI and deep neural networks.html (Godel machines etc)
    Sejnowski 21Aug2022 Large Language Models and the Reverse Turing Test.pdf
    https://docs.midjourney.com/ - imagery from short text descriptions (better than chatGPT?)
#] +-----+
# 24************************24

08********08
#] ??Feb2024

08********08
#] 14Feb2024 Grossberg: How children learn to understand language meanings

-------- Forwarded Message --------
From: "Grossberg, Stephen"
To: Mitsu Hadeishi, travelsummer2006@yahoo.com
Cc: connectionists@mailman.srv.cs.cmu.edu
Subject: Re: Connectionists: An open letter to Geoffrey Hinton: A call for civilized, moderated debate
Date: Wed, 14 Feb 2024 21:34:06 +0000

Dear All,

Perhaps some of you might find the following recent article relevant to the discussions in this email thread:

Grossberg, S. (2023). How children learn to understand language meanings: A neural model of adult–child multimodal interactions in real-time. Frontiers in Psychology, August 2, 2023. Section on Cognitive Science, Volume 14.
https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2023.1216479/full

Best,
Steve Grossberg
sites.bu.edu/steveg

08********08
#] 12Feb2024 Tsvi Achler: do better by looking at and supporting the next generation

-------- Forwarded Message --------
From: Tsvi Achler
To: Gary Marcus
Cc: connectionists@mailman.srv.cs.cmu.edu
Subject: Re: Connectionists: An open letter to Geoffrey Hinton: A call for civilized, moderated debate
Date: Mon, 12 Feb 2024 14:58:54 -0800

Unfortunately this field is too preoccupied with egos, hype, pomp and circumstance. All this politicking effectively inhibits novel approaches. I think the whole field can do better by looking at and supporting the next generation, those offering significantly different ideas.

Sincerely,
-Tsvi

08********08
#] 18Jan2024 Open Source Brain v2.0: NetPyNE, NWB Explorer and JupyterLab

From: Padraig Gleeson (p.gleeson at ucl.ac.uk)
Date: Wed, 17 Jan 2024 15:59:31 +0000
Subject: Connectionists: Announcing Open Source Brain v2.0: NetPyNE, NWB Explorer and JupyterLab; ModelDB models and DANDI Archive datasets

We would like to announce a major new version of the Open Source Brain platform (v2.0) we have been working on, which has a range of new features for you to try: https://v2.opensourcebrain.org.
You can create cloud based, persistent workspaces which contain models and data from a number of sources (https://v2.opensourcebrain.org/repositories) including:
- All OSBv1 projects incorporating NeuroML and PyNN models
- All ModelDB entries
- All BRAIN Initiative DANDI Archive datasets containing NWB files

Inbuilt applications which can open these workspaces include:
- NetPyNE UI, a 3D graphical application for neuronal simulations (NeuroML compliant)
- NWB Explorer, for opening and visualising data in NWB files
- JupyterLab, a full interactive Python environment, with a number of computational neuroscience, data analysis and machine learning packages preinstalled.

Full documentation is at https://docs.opensourcebrain.org, and a Guided Tour introduces the main features of the platform. We are very keen for new users to try out our platform. Please note that you will require a different username/password for OSBv2 from OSBv1. We are also very happy to make dedicated resources available on the platform to support classes/tutorials/computational neuroscience schools.

The Open Source Brain Initiative aims to:
- Make neuroscience experimental data and computational models from around the world easily findable in a single location
- Provide a free to use, collaborative, web based, interactive computing platform that integrates state of the art software tools for accessing and working with large scale data sets and computational models
- Encourage the use of standards for making research outputs more FAIR (Findable, Accessible, Interoperable and Reusable): OSBv2 supports NeuroML for models and NWB for data.

Please get in contact if you would like to help us with any of these goals!

Regards,
The OSB Team

08********08
#] 31Mar2023 Sejnowski 21Aug2022 Large Language Models and the Reverse Turing Test

/home/bill/web//Neural nets/References/Sejnowski 21Aug2022 Large Language Models and the Reverse Turing Test.pdf
Sejnowski 21Aug2022 "Large Language Models and the Reverse Turing Test" https://arxiv.org/pdf/2207.14382.pdf

08********08
#] 23Mar2023 Connectionists topics summary? (daily notice maybe?)

- good list, as I have used for other NN blogs?

From: Matthew S Evanusa
To: connectionists@mailman.srv.cs.cmu.edu
Subject: Re: Connectionists: Connectionists Digest, Vol 834, Issue 3
Date: 2023-03-22 08:17:09 PM

Today's Topics:
1. Re: Can LLMs think? (Terry Sejnowski)
2. NEURAL COMPUTATION - April 1, 2023 (Terry Sejnowski)
3. Re: Can LLMs think? (Thomas Miconi)
4. attention mechanisms (Baldi, Pierre)
5. Can LLMs think? (Rothganger, Fredrick)
6. Re: Can LLMs think? (Asim Roy)
7. CFP: SBP-BRiMS'2023: Social Computing, Behavior-Cultural Modeling, Prediction and Simulation (Donald Adjeroh)
8. Re: Can LLMs think? (Gary Marcus)
9. Postdoc in computational neuroscience/machine learning at the University of Nottingham (UK) - closes March 30th (Mark Humphries)
10. Call for Participation - REACT 2023 Challenge: Multiple Appropriate Facial Reaction Generation in Dyadic Interactions (REACT2023) (Cristina Palmero Cantariño)
11. Re: Can LLMs think? (Stephen José Hanson)

08********08
#] 21Mar2023 Baldi: attention mechanisms

-------- Forwarded Message --------
From: "Baldi, Pierre"
To: connectionists@cs.cmu.edu
Subject: Connectionists: attention mechanisms
Date: Tue, 21 Mar 2023 14:35:24 -0700

On a less exciting note than the GPT discussion, let me bring to your attention this article that just came out: https://doi.org/10.1016/j.artint.2023.103901
Basically it identifies the basic building blocks of attention in deep learning architectures and shows why these are computationally efficient. Caution: this is just a beginning, not a full theory of transformers.
--Pierre

Baldi, Vershynin 02Mar2023 "The quarks of attention: Structure and capacity of neural attention building blocks", Artificial Intelligence, v319, 103901, ISSN 0004-3702, https://doi.org/10.1016/j.artint.2023.103901
(https://www.sciencedirect.com/science/article/pii/S0004370223000474)

Abstract: Attention plays a fundamental role in both natural and artificial intelligence systems. In deep learning, attention-based neural architectures, such as transformer architectures, are widely used to tackle problems in natural language processing and beyond. Here we investigate the most fundamental building blocks of attention and their computational properties within the standard model of deep learning. We first derive a systematic taxonomy of all possible attention mechanisms within, or as extensions of, the standard model, sorting them into 18 classes depending on the origin of the attention signal, the target of the attention signal, and whether the interaction is additive or multiplicative. Second, using this taxonomy, we identify three key attention mechanisms: additive activation attention (multiplexing), multiplicative output attention (output gating), and multiplicative synaptic attention (synaptic gating).
Output gating and synaptic gating are proper extensions of the standard model, and all current attention-based architectures, including transformers, use either output gating or synaptic gating, or a combination of both. Third, we develop a theory of attention capacity and derive mathematical results about the capacity of basic attention networks comprising linear or polynomial threshold gates. For example, the output gating of a linear threshold gate of n variables by another linear threshold gate of the same n variables has capacity 2n²(1+o(1)), achieving the maximal doubling of the capacity for a doubling of the number of parameters. Perhaps surprisingly, multiplexing attention is used in the proofs of these results. Synaptic and output gating provide computationally efficient extensions of the standard model enabling sparse quadratic activation functions. They can also be viewed as primitives for collapsing several layers of processing in the standard model into shallow compact representations.
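The two multiplicative mechanisms named in the abstract can be illustrated in a few lines. A minimal numpy sketch, where the dimensions, random weights, and function names are my own for illustration, not from the Baldi-Vershynin paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def linear_threshold(W, x):
    """Linear threshold gate: 1 if the weighted sum is positive, else 0."""
    return (W @ x > 0).astype(float)

n = 8                                # toy size, illustration only
x = rng.standard_normal(n)
W = rng.standard_normal((n, n))      # weights of the gated unit
G = rng.standard_normal((n, n))      # weights of the gating unit

h = linear_threshold(W, x)
gate = linear_threshold(G, x)

# Output gating: the gating unit's output multiplies the gated unit's output.
output_gated = h * gate

# Synaptic gating: the gating signal modulates the synapses (weights)
# themselves before the gated unit computes its weighted sum.
synaptic_gated = linear_threshold(W * gate[:, None], x)
```

Both variants cost one extra multiplication per gated quantity, which is the "computationally efficient" point the abstract makes.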
Keywords: Neural networks; Deep learning; Attention; Gating; Synaptic modulation; Transformers; Capacity; Circuit complexity
>> paywall, not-get

08********08
#] 20Mar2023 Vaswani etal, Google 2017 "Attention Is All You Need"

Vaswani, Shazeer, Parmar, Uszkoreit, Jones, Gomez, Kaiser, Polosukhin 12Jun2017 Attention Is All You Need.pdf

08********08
#] 17Mar2023 Schmidhuber - 1992 Transformer NN, 1993 variable binding/ATTENTION/soft links

"$d_web"'Neural nets/References/' :
    Schmidhuber 26Mar2022 Neural nets learn to program neural nets with with fast weights (1991).html
    Schmidhuber 29Dec2022 Annotated history of modern AI and deep neural networks.html
link fixes :
    Schmidhuber%2026Mar2022%20
    Schmidhuber%2029Dec2022%20

-------- Forwarded Message --------
From: Schmidhuber Juergen
To: Connectionists
Subject: Re: Connectionists: Galileo and the priest
Date: Fri, 17 Mar 2023 09:27:31 +0000

Dear Risto and Claudius, I like your discussion on variable binding / attention / soft links. To an extent, this already worked in the early 1990s, although compute was a million times more expensive than today!

3 decades ago we published what's now called a "Transformer with linearized self-attention" (apart from normalization): Learning to control fast-weight memories: an alternative to recurrent networks, Neural Computation, 1992. Based on TR FKI-147-91, TUM, 1991. Here is a well-known tweet on this: https://twitter.com/SchmidhuberAI/status/1576966129993797632?cxt=HHwWgMDSkeKVweIrAAAA

One of the experiments in Sec. 3.2 was really about what you mentioned: learning to bind "fillers" to "slots" or "keys" to "values" through "soft links." I called that "learning internal spotlights of attention" in a follow-up paper at ICANN 1993.

How does this work?
A slow net learns by gradient descent to invent context-dependent useful pairs of "keys" and "values" (called FROM and TO) whose outer products define the "attention mapping" of a fast net with "soft links" or "fast weights" being applied to queries. (The 2017 Transformer combines this with a softmax and a projection operator.)

The 1991 work separated memory and control like in traditional computers, but in an end-to-end differentiable fashion. I am happy to see that the basic principles have become popular again.

Here an overview in Sec. 13 of the Annotated History of Modern AI and Deep Learning (2022): https://people.idsia.ch/~juergen/deep-learning-history.html#transformer
Longer blog post: https://people.idsia.ch/~juergen/fast-weight-programmer-1991-transformer.html
There is also an ICML 2021 publication on this, with Imanol Schlag and Kazuki Irie: Linear Transformers Are Secretly Fast Weight Programmers. Preprint https://arxiv.org/abs/2102.11174

Juergen

08********08
#] 14Mar2023 AIforGood - advancing fusion energy through enhancing simulation

Michael Churchill (Princeton Plasma Physics Laboratory), Diakhère Gueye (talks fast to appear smart?, backfire)

XGC Tokamak edge simulations and collisions - run on large-scale supercomputers
    particle-in-cell model - stolen from Anthony Peratt etc Plasma Universe?
    Fokker-Planck particle collision model with ?Lagrange? iterations
    hybrid [Lagrange, Euler] particle-in-cell method to solve Boltzmann equation
    Fokker-Planck-Landau collision operator and XGC - physics constraints [energy, mass] conservation etc
    particle-in-cell noise is natural challenge
    implicitly simpler systems : experiment agrees with model simulations
machine learning
    Churchill first thought very similar to vision applications (me : that was stupid)
    encoder-decoder networks
    RegSeg (UofT?) had been used, 32*32 points on CIFAR dataset - same as fusion work
    A. Dener "Stochastic Augmented Lagrangian" for encoder-decoder NN training
    as training progressed, constraints >> learning rate consideration
    great simulation results
    TorchScript - mixed Fortran/C++ (PyTorch), only 30 lines of code
    result : train != test, single time-step ahead only (dumb-fucks!)
    ML collision operator wasn't learning long-term behaviour of Fokker-Planck operator
    interesting noise issues - being learnt
    came up with new normalisation to get better longer-term distribution functions
    Maxwellian PDF behaviour constraint - learn identity transform
    edge effects - padded up to 48*48 grid, artifacts out of convolution (brilliantly simple!)
    Runge-Kutta 4 methods widely used for ODEs
    focus on ions rather than electrons - inherently less noisy (no shit!?!!!)
    change basis from del(f) (gradient) to fluxes - especially important, hard constraints
Ad-hoc fluid codes
    SOLPS - don't know ranges of validity
    approach - use [forward, inverse] models to map [observed vs physics] parameters
    he called it Bayesian inference, sequential neural posterior estimation
    example test : UEDGE fluid model for edge effects
    "Simulation Based Inference" (sbi) Python simulator ?NPE?
    test - great results, identified regions "beyond applicability" of model
Future work
    noise inherent to measurements
    real fusion diagnostics (LLAMA)
    score-based modelling for SBI
    score-based models popularized by DALL-E 2, stable diffusion
    ?DeepMind? - true based diffusion modality separation? (I didn't catch this)
+-----+
Chats - don't work for me, cannot enter questions
    RNNs? - playing with it
    Alphabet DeepMind - your view of them? nice simControl on boundary conditions
    challenge is ***simulation-real gap***, not enough simulation experts
    how to learn when not certain of results
    DM promise for [real-time, continuous space] - good for fusion
    Where is MLsimModel going in future?
    ?name? - shows where [benchmarks, sims] need to be improved
    standard benchmarks can help a lot - praise Microsoft for PDE benchmarks
    traditional solvers - huge historical work [robustness, etc], need to tie in better with them!
+-----+
Video Wall

geany regexpr
search  ^([a-zA-Z0-9]\+)(.*)\n\n([a-zA-Z0-9]\+)(.*)\n([a-zA-Z0-9]\+)(.*)
replace \1\2, \3\4\n\t\5\6
    \+ doesn't work in geany?
search  ^([a-zA-Z0-9])(.*)\n\n([a-zA-Z0-9])(.*)\n([a-zA-Z0-9])(.*)
replace \1\2, \3\4\n\t\5\6
    \n doesn't work in geany search term? BUT yes, it has in past!?!?!?!
search  ^([a-zA-Z0-9])(.*)$^([a-zA-Z0-9])(.*)$^([a-zA-Z0-9])(.*)
replace \1\2, \3\4\n\t\5\6
    search doesn't work.
search  ^([a-zA-Z0-9])(.*)^([a-zA-Z0-9])(.*)^([a-zA-Z0-9])(.*)
replace \1\2, \3\4\n\t\5\6
    search doesn't work.

#strt:******start of linSeq**********************
AI for Good, 1 hour
    Greetings from the AI4G team and welcome to another exciting session tackling Physical Science! Thank you for joining us, this week our speaker, Michael Churchill from the Princeton Plasma Physics Laboratory, will be presenting 'AI for advancing fusion energy through enhancing simulation', with Diakhère Gueye from the International Atomic Energy Agency (IAEA) moderating. As usual please add your questions in the video wall, and enjoy!
Bill Howell, just now
    Michael Churchill - very enjoyable presentation, and answers to questions. I just found this Video Wall, couldn't ask questions during session.
Feda Muhesen, 2 minutes
    Thanks
ALMANZO ARJUNA, 2 minutes
    Great talk! thanks!
三 张, 3 minutes
    Regarding the issue of using data-driven methods to fit PDEs in Tokamak simulation programs, are there limitations to the neural network models used for this fitting process? Are these types of models suitable for feedback control, or are they only able to predict the next moment's state through forward inference?
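The line merges attempted above with geany regexes work directly in Python's re module, where \n and backreferences behave as expected. A sketch, with a hypothetical three-line sample (name line, blank line, message line, next name line) standing in for the video-wall text:

```python
import re

# Hypothetical sample: merge a name line and its message onto one line,
# then indent the following name line -- the transform the geany
# search/replace pair above was aiming for.
text = "Feda Muhesen 2 minutes\n\nThanks\nALMANZO ARJUNA 2 minutes\n"

pattern = re.compile(
    r"^([a-zA-Z0-9])(.*)\n\n([a-zA-Z0-9])(.*)\n([a-zA-Z0-9])(.*)",
    re.MULTILINE,  # ^ matches at every line start, as geany's ^ was meant to
)
merged = pattern.sub(r"\1\2, \3\4\n\t\5\6", text)
# merged == "Feda Muhesen 2 minutes, Thanks\n\tALMANZO ARJUNA 2 minutes\n"
```

Note that Python uses `+` (not `\+`) for one-or-more, which is likely why the first geany attempt failed too.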
Vadim Nemytov, 4 minutes
    If I may highlight this paper (https://arxiv.org/abs/2109.14152): my understanding is that DeepMind haven't established feedback-control closed-loop stability - that's important in application on a bigger device, where the risk of machine damage is high (gotta run - thanks for the talk and, everyone, for comments)
Jacob Willem Bruin, 6 minutes
    Thank you for the interesting presentation and insightful answers!
Jacob Willem Bruin, 10 minutes
    What are your thoughts on DeepMind's work with fusion plasma control?
ALMANZO ARJUNA, 11 minutes
    Just out of curiosity, how long did you run the simulation for? And do you think it can be run faster using different ML models or other parallelization methods?
Jacob Willem Bruin, 13 minutes
    What causes the convolutional model to create artifacts near the image edges? Is it because the training data is too uniform near the image edges?
María Ortiz de Zúñiga López-Chicheri, 15 minutes
    Previous work on use of Machine Learning for Plasma Physics showed difficulties when extending the work done on one fusion device to another fusion device. Have you tried testing these models, trained with LLAMA (I understand), on another magnetic confinement fusion machine?
Md. Selim Reza, 56 minutes
    Greetings from Bangladesh; Md. Selim Reza sa@ird.gov.bd
AI for Good, 1 hour
    For further information on AI for fusion please see the following links: https://nucleus.iaea.org/sites/ai4atoms/ai4fusion and https://conferences.iaea.org/event/335/
#endd:******end of linSeq************************

08********08
#] 25Feb2023 search 'C++ versus Python'

+-----+
https://www.coursera.org/articles/python-vs-c
Python vs. C++: Which to Learn First and Where to Start
Written by Coursera, updated on Aug 9, 2022
Deciding whether to learn Python or C++ first is a matter of preference for most people. Learn more about the pros and cons of each before you make a decision.
+-----+
https://techvidvan.com/tutorials/python-advantages-and-disadvantages/
Python Advantages and Disadvantages - Step in the right direction
3. Interpreted Language
Python is an interpreted language, which means that Python executes the code directly, line by line. In case of any error, it stops further execution and reports back the error which has occurred. Python shows only one error even if the program has multiple errors. This makes debugging easier.

08********08
#] 21Feb2023 To migrate from QNial?

: from "$d_PROJECTS"'My Reviews//home/bill/PROJECTS/2023 IJCNN Gold Coast, Aus/reviews, mine/4056 r Learning Arc-Length Value Function for Fast Time-Optimal Pick and Place Sequence Planning and Execution.txt' :

[21] P. Virtanen, R. Gommers, T. E. Oliphant, M. Haberland, T. Reddy, D. Cournapeau, E. Burovski, P. Peterson, W. Weckesser, J. Bright, S. J. van der Walt, M. Brett, J. Wilson, K. J. Millman, N. Mayorov, A. R. J. Nelson, E. Jones, R. Kern, E. Larson, C. J. Carey, İ. Polat, Y. Feng, E. W. Moore, J. VanderPlas, D. Laxalde, J. Perktold, R. Cimrman, I. Henriksen, E. A. Quintero, C. R. Harris, A. M. Archibald, A. H. Ribeiro, F. Pedregosa, P. van Mulbregt, and SciPy 1.0 Contributors, "SciPy 1.0: Fundamental Algorithms for Scientific Computing in Python," Nature Methods, vol. 17, pp. 261-272, 2020.
[22] D. Kingma and J. Ba, "Adam: A method for stochastic optimization," International Conference on Learning Representations, Dec 2014.
[25] P. I. Corke, Robotics, Vision & Control: Fundamental Algorithms in MATLAB, 2nd ed. Springer, 2017, ISBN 978-3-319-54413-7.

08********08
#] 17Jan2023 Psychology: Dynamic Friday Tutorials on Feb. 3rd and March 3rd...

-------- Forwarded Message --------
From: John Spencer (PSY - Staff)
To: connectionists@mailman.srv.cs.cmu.edu
Subject: Connectionists: Dynamic Friday Tutorials on Feb. 3rd and March 3rd...
Date: Tue, 17 Jan 2023 11:36:49 +0000

Greetings, The next Dynamic Friday Tutorials on February 3rd and March 3rd will discuss the following sequence of papers:

February 3:
Lipinski, J., Schneegans, S., Sandamirskaya, Y., Spencer, J. P., & Schöner, G. (2012). A Neuro-Behavioral Model of Flexible Spatial Language Behaviors. Journal of Experimental Psychology: Learning, Memory and Cognition, 38(6), 1490-1511. https://dynamicfieldtheory.org/upload/file/1470692845_fe09e17da927514823a1/LipinskiEtAl2011.pdf
Richter, M., Lins, J., Schneegans, S., Sandamirskaya, Y., & Schöner, G. (2014). Autonomous Neural Dynamics to Test Hypotheses in a Model of Spatial Language. In P. Bello, Guarini, M., McShane, M., & Scassellati, B. (Eds.), Proceedings of the 36th Annual Conference of the Cognitive Science Society (pp. 2847-2852). Austin, TX: Cognitive Science Society. https://dynamicfieldtheory.org/upload/file/1470692845_573a9c7ffe8e21330360/RichterEtAl2014.pdf

March 3:
Sabinasz, D., & Schöner, G. (2022). A Neural Dynamic Model Perceptually Grounds Nested Noun Phrases. Topics in Cognitive Science. http://doi.org/10.1111/tops.12630

Details are on-line: https://dynamicfieldtheory.org/events/dynamic_friday_tutorials_dft/
You can register on our website if interested.

Cheers, John Spencer

John P. Spencer, PhD
Professor, Developmental Dynamics Lab
https://www.facebook.com/DDPSYUEA
https://ddlabs.uea.ac.uk
School of Psychology, Room 0.09, Lawrence Stenhouse Building, University of East Anglia, Norwich Research Park, Norwich NR4 7TJ, United Kingdom
Telephone 01603 593968

08********08
#] 13Jan2023 Vladimir Lyashenko, Basic Guide to Spiking Neural Networks for Deep Learning

>> great article : https://cnvrg.io/spiking-neural-networks/
Basic Guide to Spiking Neural Networks for Deep Learning, by Vladimir Lyashenko
...
How to build a Spiking Neural Network?
Sure, working with SNNs is a challenging task.
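Before reaching for any of the simulators the article lists next, the core spiking dynamic can be seen in a few lines: a leaky integrate-and-fire (LIF) neuron in plain Python. The parameters here are illustrative only, not taken from the article or any simulator default:

```python
def lif(inputs, tau=10.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """Simulate one leaky integrate-and-fire neuron.

    inputs: sequence of input currents, one per time step.
    Returns the list of time-step indices at which the neuron spiked.
    """
    v = 0.0
    spikes = []
    for t, i_in in enumerate(inputs):
        # Leaky integration: membrane potential decays toward rest
        # with time constant tau, driven by the input current.
        v += dt * (-v / tau + i_in)
        if v >= v_thresh:       # threshold crossing -> emit a spike
            spikes.append(t)
            v = v_reset         # reset the membrane after firing
    return spikes

# Constant drive of 0.2 produces a regular spike train.
spike_times = lif([0.2] * 50)
```

With constant input the neuron fires periodically, which is the baseline behaviour any of the simulators above will reproduce before noise and synapses are added.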
Still, there are some tools you might find interesting and useful. If you want software that helps to simulate Spiking Neural Networks and is mainly used by biologists, you might want to check:
    GENESIS https://en.wikipedia.org/wiki/GENESIS_(software)
    Neuron https://en.wikipedia.org/wiki/Neuron_(software)
    Brian https://en.wikipedia.org/wiki/Brian_(software)
    NEST https://en.wikipedia.org/wiki/NEST_(software)
If you want software that can be used to solve not theoretical but real problems, you should check:
    SpikeNet https://www.sciencedirect.com/science/article/abs/pii/S0925231299000958
Anyway, if you simply want to touch the sphere, you should probably use either:
    TensorFlow https://www.tensorflow.org/
    SpykeTorch https://cnrl.ut.ac.ir/SpykeTorch/doc/index.html
Still, please be aware that working with SNNs locally without specialized hardware is very computationally expensive.

08********08
#] 22Jan2023 Grady Booch, Gary Marcus - AGI will not happen in your lifetime. Or will it?

https://garymarcus.substack.com/p/agi-will-not-happen-in-your-lifetime

&&&&&&&&
Howell - Interesting thoughts, and I really like the generational perspective. Reminds me of Robert Hecht-Nielsen's assumption ~20 years ago that it would be 200 years or so to get to human-like intelligence, and my own estimate of 300 years based on neural network [excitement, advances] roughly every 20 years (that's not a real number, as advances are more continuous).

We are like worms in the middle of an onion. Each time there is a breakthrough, that's it, we're almost there. But as the hype dies down and the concept advances, we become aware of its limitations, and are left with the realization that this onion has even more layers, each more difficult than the last, and that the timeline has extended, not receded.

My caveat was the possibility of "hyper-evolution", perhaps following Kay Chen Tan and others, where concepts arise from our systems, probably mostly hybridized with our own thinking, sometimes not?
08********08
#] 07Jan2023 Karl Pribram - Can't find "Quantum Consciousness" proceedings that I've lost

http://karlpribram.com/bibliography/

08********08
#] 23Dec2022 Kincaid - 3 Ways to Tame ChatGPT | WIRED

Thanks again, Perry. I don't think I mentioned in a previous note for the last session that I was censored by DALL-E when trying to re-produce by machine a WWII painting by my father. Furthermore, I have been following (sparsely) the group "AI for good", which seems to imply that other uses for AI that don't follow their thinking are bad. Furthermore, I think the operative phrase might become "conclusions-driven research", as we often see in normal research areas, but pumped several levels by machine-aided censorship (political wokeness). Any area of science might serve as an example. For now there are always ways to get around that, maybe not in the future. I suspect that the problems might be worse in chat.openai.com than in DALL-E, but I haven't probed around in the former.

To me, as in other areas of [science, policy, media, government], there is an expectation that these systems will be tuned to generate responses that are suitable to those who control them. How far do dissident scientific views go today? (Often not far at all, not even for consideration.) That will likely accelerate in tandem with usage.

But overall, I am very [enthusiastic, surprised] about the new systems. Gary Marcus's criticism of a recent Yann LeCun statement may herald a return of the real "AI" (the classical [rational, logical, scientific] thinking AI, eg Kasparov versus Deep Blue), collaborating with the [connectionist, information theoretic, etc] approaches. There is much [hope, potential] here.

Bill Howell
Member of Hussar Lion's Club & Sundowners (retired from volunteer FireFighters Jan2021)
1-587-707-2027  Bill@BillHowell.ca  www.BillHowell.ca
P.O. Box 299, Hussar, Alberta, T0J1S0

+-----+
https://www.wired.com/story/chatgpt-generative-ai-regulation-policy/
Meeri Haataja is the CEO & Co-founder of Saidot. Dec 15, 2022 7:00 AM
3 Ways to Tame ChatGPT
Governments around the world are pushing AI regulation that has nothing to say about generative models. That could be dangerous.

mentions : GPT-3, DALL-E, Stable Diffusion, and AlphaCode
also assessment : "... Methodologies for standardized measurement and benchmarking, such as Stanford University's HELM, are needed. ..."
https://hai.stanford.edu/news/language-models-are-changing-ai-we-need-understand-them

"... This year, we've seen the introduction of powerful generative AI systems that have the ability to create images and text on demand. ..."
Seems to be a bit late in timing for some of these systems (or at least awareness they were coming). Very late in terms of other types of Deep Learning Neural Networks etc.

1. Transparency in the foundational models
"... For example, DeepMind's researchers suggest that the harms of large language models must be addressed by collaborating with a wide range of stakeholders building on a sufficient level of explainability and interpretability to allow efficient detection, assessment, and mitigation of harms. ..."
"Explainable AI" has been a major thrust for perhaps 5+ years, not counting much earlier work. Some progress and a long way to go? But that depends on what you mean by "explainable" for complex systems. Perhaps there will be accelerating improvements here. Juergen Schmidhuber had a great comment (approximate, not a real quote) : "... We are still working to improve explanations of how these systems generate the responses, but there has been some progress. That contrasts with human thinking, for which there are relatively no proofs. ..." (he seemed optimistic). (Howell : think of the expert systems challenges and limitations.)

2. Transparency in the use of foundational models
Cute - but an interesting challenge for very diverse [themes, governing bodies]? This has evolved for scientific publications, news media etc, and will presumably do so with the "AI" (CI) systems.

3. Transparency in the outcomes created by AI
"... One of the biggest transparency challenges is the last-mile issue: distinguishing AI-generated content from that created by humans. ..."
Might not be anything as simple as that as these systems become standard tools... and perhaps the work of the future will become human-machine hybrid, with the machine holding most of the cards?

4. Feedback loops
The article says essentially nothing here. Really powerful systems typically use "feedback" (recurrent neural nets), and often with human systems that isn't always required for applications (recurrent systems can be difficult and problematic to employ). An issue outstanding (frustrating) since prior to 1991 is advanced aircraft flight control. In the past this has not allowed "black boxes", although that may be changing. Mathematical proofs of stability with previous approaches were (are?) required, but may not keep the aircraft flying in exceptional circumstances that exceed conventional control? Renegade advanced systems may crash the aircraft by their own actions?

5. Alignment techniques
Perhaps not saying much, as humans have actively [followed, fretted] over results, and legitimate publishers will do checks. However, this is where the machines themselves (independent systems) might help enormously.

Other : The article seems to be missing the most important protection : that users be able to run their own analysis (diverse) and compare results to what they are seeing in postings, or as produced by others on the same theme. But "modern society" doesn't seem to like this?
08********08 #] 11Dec2022 [INNS elections, ByLaw changes] my choices coded in "$d_bin""diff pdftotext.sh" - run diff on pdf files : d_work="$d_web"'Neural nets/INNS/' f1='221211 ByLaws 1991' f2='221211 ByLaws 2022' https://neural.memberclicks.net/form/submission/207072726/field/205627852/helie-nomination.pdf >> links don't work Board of Governors Candidates:* Peter Andras Jonathan Chan Yun Raymond Fu x Sebastien Helie Zeng-Guang Hou x Chrisina Jayne Marcus Liwicki x Valeri Mladanov x Seiichi Ozawa Marley Vellasco x G. Kumar Venayagamoorthy x DeLiang Wang x Qinglai Wei Hong Yu 08********08 #] 08Dec2022 Generative Pre-trained Transformer 3 (GPT-3; stylized GPT·3) +-----+ https://en.wikipedia.org/wiki/GPT-3 Generative Pre-trained Transformer 3 (GPT-3; stylized GPT·3) is an autoregressive language model that uses deep learning to produce human-like text. Given an initial text as prompt, it will produce text that continues the prompt. The architecture is a standard transformer network (with a few engineering tweaks) with the unprecedented size of 2048-token-long context and 175 billion parameters (requiring 800 GB of storage). The training method is "generative pretraining", meaning that it is trained to predict what the next token is. The model demonstrated strong few-shot learning on many text-based tasks. It is the third-generation language prediction model in the GPT-n series (and the successor to GPT-2) created by OpenAI, a San Francisco-based artificial intelligence research laboratory.[2] GPT-3, which was introduced in May 2020, and was in beta testing as of July 2020,[3] is part of a trend in natural language processing (NLP) systems of pre-trained language representations.[1] https://en.wikipedia.org/wiki/OpenAI OpenAI is an artificial intelligence (AI) research laboratory consisting of the for-profit corporation OpenAI LP and its parent company, the non-profit OpenAI Inc. 
The company, considered a competitor to DeepMind, conducts research in the field of AI with the stated goal of promoting and developing friendly AI in a way that benefits humanity as a whole. The organization was founded in San Francisco in late 2015 by Elon Musk, Sam Altman, and others, who collectively pledged US$1 billion. Musk resigned from the board in February 2018 but remained a donor. In 2019, OpenAI LP received a US$1 billion investment from Microsoft.
>> Elon Musk funds, but resigned from board due to potential conflicts with driverless cars
+-----+
https://www.atanet.org/industry-news/meet-gpt-3-the-latest-natural-language-system/
American Translators Association December 1, 2020
Meet GPT-3, the Latest Natural-Language System
GPT-3 is the culmination of several years of work inside the world’s leading artificial intelligence labs, including labs at Google and Facebook. At Google, a similar system helps answer queries on the company’s search engine.
+-----+
https://etc.cuit.columbia.edu/news/basics-language-modeling-transformers-gpt
The Basics of Language Modeling with Transformers: GPT
Nov 14, 2021 Introduction. OpenAI's GPT is a language model based on transformers that was introduced in the paper "Improving Language Understanding using Generative Pre-Training" by Radford, et al. in 2018. It achieved great success in its time by pre-training the model in an unsupervised way on a large corpus, and then fine tuning the model for ...
+-----+
https://www.itbusinessedge.com/development/what-is-gpt-3/
What is GPT-3 and Why Does it Matter? By Kashyap Vyas October 19, 2021
GPT-3, at its core, is a transformer model — a sequence-to-sequence deep learning model that can give out a sequence of structured text if an input sequence of text is provided. This machine learning (ML) model is designed for text generation functions such as question-answering, machine translation, and summarizing text.
Unlike Long Short-Term Memory (LSTM) neural networking models, Transformer models operate using multiple units named attention blocks focusing only on the relevant parts of a text sequence. LSTM, a complex area of deep learning, is required by domains like machine translation and speech recognition, to name a few.
08********08
#] 08Dec2022 AiforGood "Paving the way towards 100% network autonomy"
https://neuralnetwork.aiforgood.itu.int/event/ai-for-good/auditorium-archive/618bd56f1ca6ce6ae07960c9/timeslot/63849fb00eac211dd71f2fa0
06:00 4 speakers : Le Zhang, Jinhua, Yongsheng, Ronan moderator Vishnu Ram
>> I was late by 9 hours~, but session recorded and immediately available
+--+
Le Zhang, China Telecom (difficult to understand)
- fast >> efficient?
- towards [collaborative, convergent, integrated]
+--+
Jinhua Kuang, China Mobile Research Inst, 5g network & data intelligence (easy to understand)
radio network intelligence - not much attention put here yet
national policy -> industry focus -> CMCC 5G strategy
Level 1. manual, recording
2. automated by static programs
3. dynamic policies
4. knowledge, learn, evolve
5. self-[evolve, adapt]
network infrastructure intelligence - energy savings (Network Management System NMS)
- provide global view
- formulate policy rules
- provide common model base
network system is highly automated already - easy to add AI
theory, use cases, dataSet, algorithms, performance, evaluation, specs, industrial realization
will lead to 6G architecture changes???!!!
problems for AI?
interference problem
wireless feature extract failure root cause
load&energy forecast
wireless parameter optimization
AI - classification, cluster, apriori, regression, reinforcement learning
Collaborations : carrier selection, coverage structure, user experience, energy consumption
Usage example case : Radio network infrastructure intelligence
huge # parameters & optimisation goals, difficult to formulate
carrier selection service SLA guarantee
slicing service SLA - Multi-Objective Optimisation (DNN & Q-learning)
+--+
Yongsheng Liu, China Unicom Research Inst
8 Autonomous Network orgs? TMForum*, GSMA, 3GPP*, ETSI, IETF, IEEE, CCSA*, ITU-T
* pioneers in autonomous networks
AN development plan : goal -> elements [architecture, methodology, practice] -> technologies
AN architecture layers : architecture - platform - network
wireless net operation : 5G site plan 0.23>+ (millions), community coverage 5M+
broadband : speed analysis 0.6+M quality 70+M
O&M robots : energy save 95%+ auto-dispatch ?89%+?
>> other great usage stats!!
+--+
Ronan Yanbin Dai, ZTE uSmartNet Solution, OSS MKT
Huge diversity of new users!!
Cross-domain collaboration layers [service, network, basic]
Case1 - cross-domain service quality management : voice is key!
Case2 - abnormal log detection (34 cut-overs/day in one Chinese network - hard to find)
alarm data traditionally used, but not efficient (60 kLogs/day, 457 events/log, etc)
Case3 - sustainable power saving in RAN domain
Case4 - Enterprise AI platform of T-project
+--+ Q&A
08********08
#] 30Nov2022 Elon Musk - NeuraLink demo
https://www.youtube.com/watch?v=YreDYmXTYi4
What do we do about AI general intelligence? even with benign AI, how do we go along for the ride?
biggest limitation is bandwidth to interact with computer (currently - phone, computer)
>> Bad example
>> Howell - how do we contribute to a human-machine network in an economically competitive way?
(hybrids) - already the threshold for economically competitive human intelligence surpasses MOST people!
. monkey-pong = telepathic video
prototypes are easy, production is 100k time more difficult, 99.9% perspiration
testing before animal testing : fake brain rubber simulator
no timeline given for longevity of wired links (1 month until glial cell inactivation?)
. vision - visual cortex still there
motor cortex - able to operate normally inoperative muscles (Stephen Hawking)
full body functionality for severed spinal cord (Howell - soldiers)
NO mention of Ted Berger & colleagues, Mark Humayun etc
. hiring is primary goal of presentation - advanced skills, don't have to know how brain works
. South Korean project manager - Neo learn kung fu in the Matrix, today is tractable?
interfaces between biology & technology
1st step: N1 implant 1024 channels, fully implanted, multi-threaded, battery charged inductively
Brain electrode insertion via prosthesis (automated real-time view)
Employee Ineer (?): Neural decoding, monkey-pong example
motions of pongs imagine keyboard instead of visual
Employee Blyth: paralysis use computer as well as I
Employee Ovenosh: ASIC 1024 channels <20microV in spiking activity now ~32mW preserves batteries
3 point spike ID
Employee Matt: brain interface electro-engineering thermal safety <2Celsius deltaT
Employee Julian: implant data acceptance tests [field tests, accelLife, animalModel]
testing became bottleneck for development
used baseboard - allowed rapid drop-ins, [current, next] gen also on same board
1/5*[cost, development, time] - greatly accelerated development
monitor implant 24/7, eg track spurious spikes
low-impedance electrodes, out DAC, in 2*ADC
initial: 4 hours/1000 channels now: 20 secs
Employee Joshua: accelerated lifetime longevity in tissue (?glial cells?)
: working fluid chemistry + T (Arrhenius), aggressive cycling
20 months actual -> 100 accelerated
observed failures, rinse & repeat, now 3rd gen
4th gen plan - inspired by hi-density compute servers, will have thousands of implants
testing remaining - even tissue growth
LATER - Question addressed this on scar tissue. Team hoping to minimize scar tissue and reduce [thread, electrode] size
Musk - it works 2 years later (600 days an employee said for older device, 1 year for newer device)
reduce inflammation efforts
Employee Christine - surgery intervention
implant in skull hole, robot implants wires
shortage of neurosurgeons (?how many?): one must over-see many implantations at a time
Employee Alex - upgradability & other future projects
layer of tissue above implant is problem, now leave dura and sponge part
dura is [tough, opaque] challenges for wire insertion
now using medical dyes to see submerged vessels
Employee Sam - insert threads through dura
key - speed on insertion, tantalum?? dethreading on withdrawal,
Employee Leslie - microfab R&D
complex & dynamic environments
surgical robots double as data sensors
Employee Dan: exciting next-gen app restore vision to blind
bypass eyes to generate image in brain directly (phosphene??)
BIG advance versus Mark Humayun? (pixels might already been there?)
Employee Joey - neuroengineer, spinal cord fixes
motor pool areas are connectors?
controls into spinal cord difficult - could damage [tissue, electrode] but R1 robots accurate, deep fast
CLOSING QUESTIONS: Elon Musk and whole team (guys vastly better answers)
50-100 micron vision neuron radius stimulated
Musk - have 10-100:1 neuron: electrode ratio
implant lifetime Jeremy - 617 days to date on old, 1) seal: hermetic enclosure 20 years?, 2) battery 80% runtime at 3 years, 2-4 times soon, 3) threads, SiC increase big time,
biocomp - biostability testing often, using known materials as starting point
what are biggest lessons since last presentation (3 years)
woman - brain can move 100s of microns, big for threads
woman2 - very dynamic environment
Alex - continuous [validation, test] highly accurate movements, design alongside products
Iranian? - work every day for all monkeys
Bluetooth bandwidth limitations - others?
guy - great question, other radio, more efficient - 50 kbps data 150 kbps BlueTooth
guy - 500 MHz ultra wide-band 8-10 Mbps
womanQ - Neo ability to perform kung fu
IranGuy? - bidirectional learning?
womanQ how are animal mental health
Autumn - head of animal welfare, conditioning as primary training method
3Rs [Refine, Replace, Reduce]
guyEngg - greater independence of animals due to systems!!
manQ [expand, upgrade] implant damage plans
guyA - just as easy to upgrade as install, have replaced in exact same location
damage in brain we care most about, we talked of tissue layer on top of brain, scars are minimal
monkeys with second devices really use them
manQ - plasticity of behaviour, brain changes faster than implant?
IranGuy - after 3 days learn very quickly on new device
manQagain Johan - massive reduce signal, can play with
Jeremy - bluetooth heats up.
Musk - if you don't cut dura, 10 minute operation, healing super-fast
manQ - scar tissue from long-term use, how to lower safety questions
BritGuy - have data, threads almost no damage, don't see any bads in our animals which is great
damage potential when removing device, we don't think will be a problem
manBitRate - FDA approval?
youngBritGuy - rigorous proof definitely needed, overheating (redesigned now), bio-compatibility
we are working with all FDA's questions, difficult to understand novelty of our work cannot rely on historic
Musk - I believe it would not be dangerous now
womanQ - motor disabilities
youngUSGuy - trade stocks, play games
Musk - we are scaling up device in parallel, progress should be exponential
manQ - all cortical now, what about rest? timescale?
Musk - fundamentals of device will stay the same, intended to be general I/O device
manQ - locate blood vessels
manA - right now only see 1 mm, within team discussing how to go deeper
womanA - if needles accurate enough will go through vessel no damage
Musk - I'm optimistic for this, depends on how big hole is
manA - 2 mm needle current surgery - breaks blood vessel patient dies
manQ - how to validate thread placement
manA - proxy happens [all time, never]
manA - signals read almost immediately, good data many channels
WomanQ - emotions in AI, any ambitions
Musk - we build, open-source, record memories? (black mirrors)
Jeremy - decades of work this is based on, Neuralink advancing rapidly
manQ - much higher bandwidths for complex human tasks?
IranGuy - decode handwriting directly example, different keyboards etc
youngUSGuy - other areas of brain [language, speech] can greatly improve bandwidth
Musk - we will learn orders of magnitude more about the brain.
Howell: BIG impact on standard brain surgery!!!
08********08
#] 29Nov2022 SINDy - goal is to learn sparse ID of non-linear dynamics
Steven Bruno Diakhère Gueye, Steven L.
Brunton with Machine learning for scientific discovery with examples in fluid mechanics
Reduced-order modelling PhD thesis Catheline
SINDY + Autoencoder - help to discover coordinate system
20-30 modes for turbulent flow example : non-linear correlation analysis to see which modes drive (like stock markets?)
Alternate approaches :
1. 2 Stuart-Landau oscillators
2. Langevin regression
Vadim Nemytov 14 minutes
Hi Dr. Brunton, thank you for this interesting talk and all your youtube video-lectures. Some questions
1) is the open-source code to take your ML-based system identification (PySINDY-based) and define LQG-like optimal controller? I am aware of KRONIC, but it's in Matlab and I think it expects ML-free PySINDY model
2) Is it possible to contact you via e-mail to discuss the possibility of inviting you to give a talk at our Plasma Physics team, at Tokamak Energy?
3) Doesn't PySINDY on github require the user to pre-define the features (~basis set) for fitting - or does it support the auto-regression ML approach as well ? (Apologies if I got it wrong)
08********08
#] 17Nov2022 DevelopingMinds : Oudeyer "Developmental Artificial Intelligence: [machines, children] learn better"
https://www.youtube.com/watch?v=NUyhWn3qeHo
2021-09-30: Pierre-Yves Oudeyer, INRIA, "Developmental Artificial Intelligence: machines that learn like children and help children learn better". transcript
see : /home/bill/SG6/web/Neural nets/DevelopingMinds/210930 Oudeyer Developmental Artificial Intelligencem [machines, children] learn better transcript.txt
08********08
#] 16Nov2022 2022 IEEE Computational Intelligence Society AdCom Election
Open until November 17, 2022 4:00 PM (GMT-05:00) Eastern Time (US & Canada)
Society Administrative Committee Term 1 January 2023 – 31 December 2025
(Please check the box next to your selection. You may vote for up to 5 candidates.)
y Steven Corns
Keeley Crockett
y Catherine Huang
Tingwen Huang
Min Jiang
Jialin Liu
Jose Lozano
y Leandro L. Minku
y Nikhil R.
Pal
Alice E. Smith
Christian Wagner
Bing Xue
y Gary G. Yen
Confirmation#: 178702211333973
Date: November 16, 2022 Time: 10:03 PM (GMT-05:00) Eastern Time (US & Canada)
08********08
#] 29Sep2022 KEEP - Immersive Learning in the Metaverse: How to Bring Education to Life - Part two
08********08
#] 27Sep2022 AIDA talk Sebastian Lapuschkin ‘Towards Actionable XAI’
I missed it (lost track)
https://www.i-aida.org/event_cat/ai-lectures/
Sebastian Lapuschkin, a prominent AI researcher internationally, will deliver the e-lecture: ‘Towards Actionable XAI’, on Tuesday 27th September 2022 17:00-18:00 CEST (8:00-9:00 am PST), (12:00 am-1:00am CST), see details in: http://www.i-aida.org/ai-lectures/
You can join for free using the zoom link: https://authgr.zoom.us/j/97981785191 & Passcode: 148148
The International AI Doctoral Academy (AIDA), a joint initiative of the European R&D projects AI4Media, ELISE, Humane AI Net, TAILOR, VISION, currently in the process of formation, is very pleased to offer you top quality scientific lectures on several current hot AI topics.
08********08
#] 25Jul2022 Is connectionist symbol processing dead
the WaybackMachine :
https://web.archive.org/web/20170706013814/ftp://ftp.icsi.berkeley.edu/pub/ai/jagota/vol2_1.pdf
ftp://ftp.icsi.berkeley.edu/pub/ai/jagota/vol2_1.pdf
/media/bill/Dell2/Website - raw/Neural nets/References/Jakota Is connectionist symbol processing dead.pdf
08********08
#] 28Jun2022 Jurgen Schmidhuber - Scientific Integrity and the History of Deep Learning:
#] The 2021 Turing Lecture, and the 2018 Turing Award
Awesome resource
Schmidhuber 28Jun2022 the History of Deep Learning - awesome resource
08********08
#] 21Jun2022 Jan Peters (Technische Universitaet Darmstadt, Germany) ‘Robot Learning’
Prof.
Jan Peters (Technische Universitaet Darmstadt, Germany), a prominent AI & Robotics researcher internationally, will deliver the e-lecture: ‘Robot Learning’, on Tuesday 21st June 2022
$ find '/home/bill/.local/share/evolution/mail/local/.5_5FNewsgroups.Connectionists.2022/cur/' | tr \\n \\0 | xargs -0 -IFILE grep --with-filename --line-number 'Jan Peters' "FILE"
1654693031.11025_1815.dell64
You can join for free using the zoom link: = https://authgr.zoom.us/j/92400537552 & Passcode: 148148
2007 key paper
1. Can we learn on a real system from little data?
Reinforcement Learning - uses Bellman equation
Howell - so it's just [Approximate, Adaptive] Dynamic Programming with imitation being key to make tractable
2. How can we learn comprehensible, modular policies?
Modular Control Policies - multiple conflicting hypothesis, sometimes recombining like Mixture of Experts
ping pong - like Mitsuo Kawato (?) I introduced at IJCNN2015 but he presented a different topic
localize behaviour can be led responsibly
sequencing in manipulation - placement of blocks, slice egg plant
Policy composition by [select, superpose, sequence]
advantage of knowing physics of body >> knowledge, ?skill? models super-important for execution, eg [inverse -> energy -> forward] models
[engineering breakdown, system ID - becomes black box, black-box - eg Deep Learning implausible models]
Michael Lutter 3 weeks ago joined Boston Dynamics -> Deep Lagrangian networks based on good physics models
sufficient inductive physics dynamics
DeLaN : model representation -> physics prior -> parameter optimization, Lagrangian and energy
Energy control of Furuta Pendulum - system ID fails because frequencies very fast so online learn hard, but DeLaN works with Deep Learning
3. How can we learn physically plausible deep models?
4. How can we build the best bodies and learn real systems?
5.
Conclusions and outlook
08********08
#] 16Jun2022 Connectionists: LeCun on Marcus
-------- Forwarded Message --------
Date: Thu, 16 Jun 2022 12:39:01 -0700
Subject: Connectionists: LeCun on Marcus
To: Connectionists List
From: Gary Marcus
I’ll probably write a bit of reply later, but this is an excellent new essay by Yann LeCun, quite relevant to many recent discussions here:
https://www.noemamag.com/what-ai-can-tell-us-about-intelligence
08********08
#] 10Jun2022 NN journal articles read
Ruihong Li, Huaiqin Wu, Jinde Cao Apr2022 "Exponential synchronization for variable-order fractional discontinuous complex dynamical networks with short memory via impulsive control" Neural Networks v148 pp13-22
Garrett Bingham, Risto Miikkulainen Apr2022 "Discovering Parametric Activation Functions" Neural Networks v148 pp48-65
Iam Palatnik de Sousa, Marley M.B.R. Vellasco, Eduardo Costa da Silva Apr2022 "Evolved explainable classifications for lymph node metastases" Neural Networks v148 pp1-12
08********08
#] 14Jun2022 Zoom - Watanabe, Towards a Scientific Investigation of Consciousness and Mind-Uploading
9 am-10:30 am Friday 24, June 2022 (Japan Standard Time)
Zoom registration: https://zoom.us/meeting/register/tJEpf-6qpjojGNeQDTfZ1Y9YNAc17bYgJLYq
Prof Masataka Watanabe (Univ Tokyo)
Solving the Hard Problem and Closing the Explanatory Gap with Natural Laws of Consciousness - Towards a Scientific Investigation of Consciousness and Mind-Uploading -
08********08
#] 06Jun2022 Juyang Weng paper "A Developmental Network Model of Conscious Learning in Biological Brains"
Weng 06Jun2022 A Developmental Network Model of Conscious Learning in Biological Brains
08********08
#] 19Apr2022 Building AI for humanity, LeCunn (FB), Bengio (MILA), Blaise Agüera y Arcas (Google)
Zoom from Europe
?Adrian Fairhall? (female), Seattle UProf
?female philosopher prof? stupid fucking general questions!!!
+-----+
LeCunn (FB)
ATT, NYU, Facebook (FAIR) -> now Meta
New in last few years - unsupervised learning (horseshit!!! - he's DECADES behind curve!)
AI is amplification of human intelligence
still useful to work on autonomous intelligence
printing press - enabled Protestant movement Europe, 200 years of massacres
effects are very difficult to predict
social networks - initially for adept at [computer, communicate]
now - battleground for fights
Social Media companies realized 2012-2013 that need moderation
misinformation, infoBubbles, polarization
actual studies show SM makes people LESS polarized
polarization is NOT caused by inequalities! (wow! not allowed to say that in Canada)
polarization is decreasing in European countries
+-----+
Bengio (MILA) - born in France
Drafted Montreal Declaration for the Responsible development of AI
Looking at what we have achieved :
doing very well - alphaFold (3D proteins), physics, chemistry
struggle with - common sense, robots
H2 from [solar panel, windmills] would essentially solve climate change (MORON!!)
obstacles to socially important applications :
not profitable - eg to solve climate change (profit not there)
misuse of AI fear
too fast changes that we can't see coming
we should have had a sociologist on panel
+-----+
Blaise Agüera y Arcas (Google)
Microsoft -> Google
Leads MachineLearn
pre-DeepLearn - small [problems, data (labelled), programs (specific architectures)]
very quick impact of DeepLearn 2017-2018
very large models unlabelled data [GPT3, Lambda]
!*!*! to me it is general AI!
(same as Schmidhuber) - not task-specific, very general, just ask and you will receive
AI in [practice, data efficiency] language, theory of mind, objects, reasoning benefits applications fusion
most challenging issues for AI researchers
managing people's attention
moderate online communications (eg social media) need human language, understandable rules
[politics, morality] - our inability to govern ourselves is biggest challenge - polarity etc
+-----+
Questions from audience :
+--+
student in sociology : Explainability is big issue
Bengio
much of what is going on in your brain is also not explainable (hah!! - he took this from Juergen Schmidhuber - HILARIOUS!!)
feel it will be possible one day
Arcas :
example of Hodgkin-Huxley neuron, very difficult to analyze
getting into non-explainable domains (HOWELL - non-[rational, logical, scientific] reasoning)
humans are great bullshitters - very compelling crappy performance baseline of explainability
self-driving cars - court of law, why did it happen?
rules will not do as well as DeepLearn
LeCunn agrees with Arcas response
we don't have solid explanations of much in the world, but we get things to work
doctors can't explain why the patient has appendicitis
want better performing system always, not the one with an explanation
Bengio
different levels of explain - can show design of system, prove things like convergence
physics may have basic [law, equation]s - but complex systems still not explainable
must push human-understandable explanations, but accept limits
moderator
accountability WWII bomber crew committed suicides, but not Iraq teleremote drone crews
distance between perpetrators and victims is important
LeCunn
not expert, just repeat what I heard
all countries using autonomous weapons have existed for centuries - anti-personnel mines
some want autonomous - limit damage, eg carpet bomb versus precision
Arcas
often mostly emotion (sentiments) rather than reason
who decides which "opinion"
much of civilisation - as much "stripping away" emotion, as [use, control]ing it
Bengio
philosophers cannot distill rules versus [,non-]virtuous behaviour
instinctively, knowledge hack many iterations (evolved is word)
+--+
Kenji Doya (chat) how can we make SNS platforms open
LeCunn
no such thing as "Facebook algorithm" - multi-component system
impose to explain how it works, 2015-16 described architecture, now changed completely
very highly personalised
myth that engineers decide what you see
Bengio (to LeCunn - hidden, behind your back) ie bullshit
LeCunn
AugmentReal in 10 years, no cellphone
personal agent to help control, use info bombardment
FB, YouTube, etc - people decide what NOT to show you (controversial viewpts)
FB integrity - that's all that they do, trade off [expression, perversion]
[FB, Google] don't want to decide, but have to
only Macron government in France responded with any rules
Trump cited 1st Amendment - so go away (YAY Trump!!!!)
Arcas
transparency is entirely feasible, given language capabilities
key is supervisory system in behind must do this
LeCunn
I agree (so why did he totally avoid?!? dumb, hidden agenda?)
one giant model for 400 languages for hate speech
later - more of that
+--+
Al (as in Alan) research assistant
how can we allow other people to use AI systems already trained?
should we worry when systems become hugely powerful? only privileged can access?
LeCunn
Microsoft OpenAI is NOT open!! (proprietary) - he mentions many open systems
Arcas
these models will become more [proprietary, competitive, privileged] important challenge!
for now we are lucky - so expensive to run, just not enough computing power
this won't last long
+--+
PhD student
medical interventions - have to go through protocols does AI do this?
LeCunn
my R&D is on autonomous only - so no humans
FB example - emotional contagion eg happy post make you OR[happy, jealous]
conclusion : [see, be] happy but FB accused for manipulating emotions
but everybody being tested every time use [social media, etc]
Arcas
we live in an uncontrolled experiment we are way outside our design parameters
oversight of science good, but classic IRPs are adequate, as address key issues
is it [adequate, appropriate] to create more controls than we have?
Bengio
we need to have more IRPs to control research (3rd party reviews)
computer science hasn't had that, doesn't have ethics courses for students
medical system IRPs are not appropriate for computer science
+--+
?student?
AI decides, not you - not same as aspirin
hardware efficiency - rebound effect
any surveys, democracy regarding what do we want to see from AI?
LeCunn
you describe a market
your statement incorrect that enormous competition to make AI efficient
economics decide business computing power
Arcas
I agree with LeCunn
ecology is important
private AI - can't put 300W device on body
training - general system does everything (versus bitcoin mining)
LeCunn training <<< operation for big AI
Bengio
don't track
good to have legal frameworks for tracking GHGs
Arcas I agree with Bengio
Bengio not true that others don't publish, have patents, much behind doors
Arcas much code now open source better momentum in open source
08********08
#] 17Mar2022 Roger Dev 17Jun2021 Reproducing Kernel Hilbert Spaces (RKHS) - A primer for non-mathematicians
"$d_PROJECTS"'2022 WCCI Padua, Italy/reviews - mine/3806 r A Relative Spike-Timing Approach to Kernel-Based Decoding Demonstrated for Insect Flight Experiments.txt'
In [25], the author proposed a reproducing kernel Hilbert space (RKHS) framework that uses an instantaneous kernel to determine similarities between single spike trains directly.
[25] A. R. Paiva, I. Park, and J. C. Principe, “A reproducing kernel Hilbert space framework for spike train signal processing,” Neural Computation, vol. 21, no. 2, pp. 424–449, 2009.
[26] I. M. Park, S. Seth, A. R. Paiva, L. Li, and J. C. Principe, “Kernel methods on spike train space for neuroscience: a tutorial,” IEEE Signal Processing Magazine, vol. 30, no. 4, pp. 149–160, 2013.
+-----+
Howell - fantastic aid!!!! : https://hpccsystems.com/blog/reproducing-RKHS
Reproducing Kernel Hilbert Space (RKHS) - A primer for non-mathematicians
Roger Dev on 06/17/2021
Mathematically, RKHSs are a specific type of topological space with some very handy and surprising characteristics. It is almost impossible to imagine the shape of most RKHS spaces. They are typically very non-linear, and high to infinite dimensional. But here are some of the practical implications.
It is a space of functions. Each point in space is a particular function.
Function spaces tend to be infinite dimensional, so not only are there an infinite set of functions there, but the vector that identifies a given function can be of arbitrary (theoretically even infinite) length.
Your sample data is typically used as the index into the function space. Index entries are real-valued.
The functions are organized locally such that if the indices are similar, the resulting functions will be similar. The functions are smooth and continuous.
Each function takes a single real-valued parameter, and returns a single real value.
Linear functions in RKHS provide useful non-linear results. This is the unique characteristic of RKHS that makes it so useful.
RKHSs are identified by a unique "kernel function". Each kernel induces exactly one RKHS, and each RKHS has a unique kernel function.
One common use of RKHS is to find a function that describes a given variable. The data (samples of that variable) is given as the index into the function, and the function returned will estimate the probability density of the data's generating function. It is guaranteed to converge to the exact function that generated the data as the number of data points approaches infinity (with a few caveats*). Better than that, it will approach the correct distribution with every additional data point. Better still, every prediction that that function returns will be close to the actual value.
"... Different kernels produce different types of functions, but some kernels are universal, meaning that they can emulate any continuous function to within isometric isomorphism, which is a fancy way of saying "exactly the same on the outside, though it may be different inside". ..."
>> wow very well stated
"... Now I'll share a secret about RKHS that you will seldom see exposed in any papers or articles. Implementing RKHS is embarrassingly easy. It may take hundreds of pages of math to describe RKHS, but only a few lines of code to make it work.
The code sample at the bottom implements a flexible, optimized RKHS in under 30 lines of Python code. A hard coded RKHS could probably be done in 10 lines. It bothers me that it took several weeks of intense study to understand an algorithm that took me an hour to implement. That's what motivated me to write this article. ..."
https://hpccsystems.com/ -> free, open source!!! tensorflow build-on too
Appendix A -- Python Code
Earlier in the article I talked about indexing into the infinite set of functions in the RKHS. In practice, your data is the index into your function space. But just to illustrate notions of function locality, and to provide a simple example, we will look at functions with various indexes that are close or far from each other.
>> Howell: I will have to retype the python code... later?
08********08
#] 03Mar2022 Jonty Sinai 18Jan2019 - Understanding Neural ODE's
https://jontysinai.github.io/jekyll/update/2019/01/18/understanding-neural-odes.html
Understanding Neural ODE's
Posted by Jonty Sinai on January 18, 2019 · 39 mins read
"... Based on a 2018 paper by Ricky Tian Qi Chen, Yulia Rubanova, Jesse Bettencourt and David Duvenaud from the University of Toronto, neural ODE’s became prominent after being named one of the best student papers at NeurIPS 2018 in Montreal. ..."
"... I’ll introduce ODE’s as an alternative approach to regression and explain why they may hold an advantage. ..."
What would be really fun would be the extension to "Fractional Order Calculus" (FOC).
08********08
#] 14Feb2022 Hinton, Connectionists, Iam Palatnik: Weird beliefs about consciousness
Great comment :
Date: Mon, 14 Feb 2022 12:46:28 -0300
Subject: Re: Connectionists: Weird beliefs about consciousness
Cc: Connectionists
To: Gary Marcus
From: Iam Palatnik
A somewhat related question, just out of curiosity. Imagine the following:
- An automatic solar panel that tracks the position of the sun.
- A group of single celled microbes with phototaxis that follow the sunlight.
- A jellyfish (animal without a brain) that follows/avoids the sunlight.
- A cockroach (animal with a brain) that avoids the sunlight.
- A drone with onboard AI that flies to regions of more intense sunlight to recharge its batteries.
- A human that dislikes sunlight and actively avoids it.
Can any of these, beside the human, be said to be aware or conscious of the sunlight, and why? What is most relevant? Being a biological life form, having a brain, being able to make decisions based on the environment? Being taxonomically close to humans?
+-----+
https://en.wikipedia.org/wiki/GPT-3
GPT-3 - Generative Pre-trained Transformer 3 (GPT-3) is an autoregressive language model that uses deep learning to produce human-like text. It is the third-generation language prediction model in the GPT-n series (and the successor to GPT-2) created by OpenAI, a San Francisco-based artificial intelligence research laboratory.[2]
GPT-3's full version has a capacity of 175 billion machine learning parameters. GPT-3, which was introduced in May 2020, and was in beta testing as of July 2020,[3] is part of a trend in natural language processing (NLP) systems of pre-trained language representations.[1]
The quality of the text generated by GPT-3 is so high that it can be difficult to determine whether or not it was written by a human, which has both benefits and risks.[4]
Thirty-one OpenAI researchers and engineers presented the original May 28, 2020 paper introducing GPT-3.
In their paper, they warned of GPT-3's potential dangers and called for research to mitigate risk.[1]: 34  David Chalmers, an Australian philosopher, described GPT-3 as "one of the most interesting and important AI systems ever produced."[5] Microsoft announced on September 22, 2020, that it had licensed "exclusive" use of GPT-3; others can still use the public API to receive output, but only Microsoft has access to GPT-3's underlying model.[6]
>> Howell : reminds me of Robert Hecht-Nielsen's Confabulation theory
+----+
https://en.wikipedia.org/wiki/OpenAI
OpenAI is an artificial intelligence (AI) research laboratory consisting of the for-profit corporation OpenAI LP and its parent company, the non-profit OpenAI Inc. The company, considered a competitor to DeepMind, conducts research in the field of AI with the stated goal of promoting and developing friendly AI in a way that benefits humanity as a whole. The organization was founded in San Francisco in late 2015 by Elon Musk, Sam Altman, and others, who collectively pledged US$1 billion. Musk resigned from the board in February 2018 but remained a donor. In 2019, OpenAI LP received a US$1 billion investment from Microsoft.
>> Howell : Musk again! I'm impressed, but perhaps it's just the $$
DALL-E and CLIP, Main article: https://en.wikipedia.org/wiki/DALL-E
DALL-E is a Transformer model that creates images from textual descriptions, revealed by OpenAI in January 2021.[77] CLIP does the opposite: it creates a description for a given image.[78] DALL-E uses a 12-billion-parameter version of GPT-3 to interpret natural language inputs (such as "a green leather purse shaped like a pentagon" or "an isometric view of a sad capybara") and generate corresponding images. It is able to create images of realistic objects ("a stained glass window with an image of a blue strawberry") as well as objects that do not exist in reality ("a cube with the texture of a porcupine"). As of March 2021, no API or code is available.
08********08
#] 08Feb2022 Tsvi Achler - Real examples of inhibiting new research
https://youtu.be/9BIn_Vmiwz4 Tsvi Achler, Real examples of inhibiting new research: World Wide Theoretical Neuroscience Seminar, 59 views, Premiered Feb 1, 2022
https://www.youtube.com/channel/UCGOP9NyH-yp5IStpoipYDmA/videos Tsvi Achler, 113 subscribers
https://www.youtube.com/watch?v=tK2RufalkLE Tsvi Achler, 113 subscribers
I study the brain's recognition processing through a multidisciplinary perspective and I have found a novel solution that does not require as much handwaving. However as countless others have experienced in the past and present, academia is not the best place for novel ideas. However I want the field to progress forward faster, so I have dedicated myself to create videos to explain my research and the academic environment.
https://www.youtube.com/channel/UCbvTQ3lLVvikKaYnNH3kH3g
GABA, glutamate
08********08
#] 18Dec2021 Jurgen Schmidhuber's "Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc."
>> Awesome discussion! Continue "up the threads" from :
From: Asim Roy
To: Tsvi Achler
Cc: Juyang Weng , connectionists
Subject: Re: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc.
Date: Sun, 7 Nov 2021 08:12:47 +0000 (07/11/21 01:12:47 AM)
Awesome comments (I missed recording most) :
+-----+
From: Randall O'Reilly
To: Schmidhuber Juergen
Cc: Connectionists Connectionists
Subject: Re: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc.
Date: Wed, 27 Oct 2021 22:48:10 -0700 (27/10/21 11:48:10 PM)
I vaguely remember someone making an interesting case a while back that it is the *last* person to invent something that gets all the credit. This is almost by definition: once it is sufficiently widely known, nobody can successfully reinvent it; conversely, if it can be successfully reinvented, then the previous attempts failed for one reason or another (which may have nothing to do with the merit of the work in question).
For example, I remember being surprised how little Einstein added to what was already established by Lorentz and others, at the mathematical level, in the theory of special relativity. But he put those equations into a conceptual framework that obviously changed our understanding of basic physical concepts. Sometimes, it is not the basic equations etc that matter: it is the big picture vision.
Cheers,
- Randy
+-----+
Date: Mon, 1 Nov 2021 12:07:50 +0000
Subject: Re: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc.
Cc: connectionists@cs.cmu.edu
To: Tsvi Achler
From: "Levine, Daniel S"
Tsvi,
My book does not include the regulatory feedback you mention, but includes a lot of recurrent networks dating as far back as 1973 (some of them in high-impact journals). It is indeed readily available, in fact it was announced on Connectionists about two years ago. The link is https://www.routledge.com/Introduction-to-Neural-and-Cognitive-Modeling-3rd-Edition/Levine/p/book/9781848726482 . It is organized primarily by problems and secondarily by approaches.
Dan
08********08
#] 18Dec2021 Juyang Weng - On Post Selection Using Test Sets (PSUTS) in AI
https://www.youtube.com/watch?v=VpsufMtia14 104 views Jul 21, 2021
>> Weng argues that a [full, trained] system of many s-nets is required to avoid PSUTS? Perhaps related to MindCode basic concept?
This is an AI theory talk. It first raises a rarely reported but unethical practice in Artificial Intelligence (AI) called Post Selection Using Test Sets (PSUTS). Consequently, the popular error-backprop methodology in deep learning lacks an acceptable generalization power. All AI methods fall into two broad schools, connectionist and symbolic. PSUTS practices have two kinds, machine PSUTS and human PSUTS.
The connectionist school received criticisms for its "scruffiness" due to a huge number of scruffy parameters and now the machine PSUTS; but the seemingly "clean" symbolic school seems more brittle than what is known because of using human PSUTS. This paper formally defines what PSUTS is, analyzes why error-backprop methods with random initial weights suffer from severe local minima, why PSUTS violates well-established research ethics, and how every paper that used PSUTS should have at least transparently reported PSUTS data. For improved transparency in future publications, this paper proposes a new standard for AI metrology, called developmental errors for all networks trained in a project that the selection of the luckiest network depends on, along with Three Conditions: (1) system restrictions, (2) training experience and (3) computational resources.
The paper is available at : http://www.cse.msu.edu/~weng/research/PSUTS-IJCNN2021rvsd-cite.pdf
08********08
#] 17Dec2021 Neural Net journal papers to read
P. Baldi, R. Vershynin Nov2021 "A theory of capacity and sparse neural coding" Neural Networks
Oops - wrong article, I only briefly skimmed this one!
B. Orkan Olcay(a), Murat Özgören(b), Bilge Karaçalı(a) Nov2021 "On the characterization of cognitive tasks using activity-specific short-lived synchronization between electroencephalography channels" Neural Networks, Volume 143, November 2021, Pages 452-474
a Department of Electrical and Electronics Engineering, Izmir Institute of Technology, 35430, Urla, Izmir, Turkey
b Department of Biophysics, Faculty of Medicine, Near East University, 99138, Nicosia, Cyprus
Received 12 February 2021, Revised 4 May 2021, Accepted 18 June 2021, Available online 30 June 2021.
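The PSUTS idea in the 18Dec2021 Weng entry above can be illustrated with a tiny simulation (my own sketch, not Weng's code): score many equally mediocre "networks" - here just coin-flip classifiers with identical true accuracy 0.5 - on one fixed test set, then report only the luckiest one. The reported accuracy is inflated above the true value, which is exactly the generalization-claim problem Weng flags.

```python
import random

def psuts_demo(num_networks=100, test_size=1000, true_accuracy=0.5, seed=0):
    """Simulate Post Selection Using Test Sets (PSUTS): score many
    equally-mediocre classifiers on ONE fixed test set, then report
    only the luckiest one."""
    rng = random.Random(seed)
    # Each "network" classifies correctly with probability true_accuracy;
    # its measured test accuracy is the mean of test_size Bernoulli draws.
    scores = [
        sum(rng.random() < true_accuracy for _ in range(test_size)) / test_size
        for _ in range(num_networks)
    ]
    return max(scores)  # post-selected ("luckiest network") accuracy

# Every network's true accuracy is 0.5, but the post-selected score is higher:
print(f"reported accuracy after post-selection: {psuts_demo():.3f} (true: 0.500)")
```

Reporting the full distribution of `scores` (Weng's "developmental errors" for all networks trained), rather than just the maximum, is the transparency remedy the abstract proposes.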
08********08
#] 14Dec2021 Schmidhuber 24Sep2021 Scientific Integrity
Schmidhuber 24Sep2021 Scientific Integrity, the 2021 Turing Lecture, and the 2018 Turing Award for Deep Learning.html
> Beautiful paper, exciting, relevant to MindCode - ontogeny and "wave training"
08********08
#] 18Oct2021 Learn Python, as I need it to be employable!!
see "$d_SysMaint""Python/0_python notes.txt"
08********08
17Jun2021 https://www.ijcnn.org/ 325$US registration paid
08*****08
01Mar2021 https://en.wikipedia.org/wiki/Backpropagation Neural Network Backpropagation - origins
Seppo Linnainmaa (1970). The representation of the cumulative rounding error of an algorithm as a Taylor expansion of the local rounding errors. Master's Thesis (in Finnish), Univ. Helsinki, 6-7.
Linnainmaa, Seppo (1976). "Taylor expansion of the accumulated rounding error". BIT Numerical Mathematics. 16 (2): 146-160. doi:10.1007/bf01931367. S2CID 122357351.
The thesis, and some supplementary information, can be found in his book, Werbos, Paul J. (1994). The Roots of Backpropagation: From Ordered Derivatives to Neural Networks and Political Forecasting. New York: John Wiley & Sons. ISBN 0-471-59897-6.
*********************
IEEE contacts
+--------+
IEEE - GDPR & other
Jo-Ellen Snyder. Technical Community Program Specialist. IEEE-CIS <>
Noel Simonson. web content administrator. IEEE-CIS. USA <>
GDPR SME, Kevin Dreseley here at IEEE
Thomas Compton. Senior Manager - Volunteer Engagement and Business Projects. IEEE Technical Activities. Piscataway. NJ. USA <>
MGA staff Manager Vera Sharoff as her expertise is managing the IEEE Listserves
IEEE Intellectual Property Rights group. Piscataway. NJ. USA http://www.ieee.org/web/publications/rights/copyrightmain.html
+--------+
IEEE ListServers (for mass emails)
IEEE List-master. Help with IEEE-CIS ListServers WCCI2020 mass email.
IEEE ListServer Howell's ListServ [sub, un]-scribe instructions http://www.BillHowell.ca/Neural nets/Conference guides/Author guide website/IEEE ListServe publicity subscriptions.html
IEEE ListServ - all publicly viewable lists http://listserv.ieee.org/
IEEE ListServer FAQs http://listserv.ieee.org/request/faq-listowners.html
IEEE ListServer website http://listserv.ieee.org/cgi-bin/wa
IEEE ListServer pwd setup https://listserv.ieee.org/cgi-bin/wa?GETPW1=
IEEE ListServer manual http://www.lsoft.com/manuals/16.0/LISTSERV16.0_ListOwnersManual.pdf
IEEE ListServer logon https://listserv.ieee.org/cgi-bin/wa?LOGON=GETPW1&Y=Bill@BillHowell.ca
IEEE ListServ - all publicly viewable lists https://listserv.ieee.org/
*********************
Conference Guides
+--------+
Howell's [Author, PubChair, Publicity] guides
Overall Guide - the menu links to the guides below http://www.billhowell.ca/Neural%20nets/Conference%20guides/Conference%20guides.html
The individual guides below are "silo'd", and generally do not link to this overall guide. This was done to reduce confusion when people wandered into the wrong area (especially authors).
Authors' Guide http://www.billhowell.ca/Neural%20nets/Conference%20guides/Author%20guide%20website/Author%20guide.html Publications Guide http://www.billhowell.ca/Neural%20nets/Conference%20guides/Publications%20website/PubChair%20guide.html Publicity Guide - including ListServer instructions http://www.billhowell.ca/Neural%20nets/Conference%20guides/Publicity%20website/Publicity%20Guide.html Reviewers web-page http://www.billhowell.ca/Neural%20nets/Conference%20guides/Reviewers%20website/Reviewers%20guide.html Sponsors Guide http://www.billhowell.ca/Neural%20nets/Conference%20guides/Sponsors%20website/Call%20for%20Sponsors.html +--------+ Authors' Guide - indexing by Scopus/SCI http://www.billhowell.ca/Neural%20nets/Conference%20guides/Author%20guide%20website/IEEE%20Xplore.html Authors' Guide : IEEE CrossCheck http://www.billhowell.ca/Neural%20nets/Conference%20guides/Author%20guide%20website/IEEE%20CrossCheck.html Authors' Guide : IEEE CrossCheck blog http://www.billhowell.ca/Neural%20nets/Conference%20guides/Author%20guide%20website/IEEE%20CrossCheck%20blog.html Authors' Guide : CrossCheck analysis - single-sentence summaries http://www.billhowell.ca/Neural%20nets/Conference%20guides/Author%20guide%20website/IJCNN2019%20CrossCheck.xls Attendee downloads of conference papers http://www.BillHowell.ca/Neural%20nets/Conference%20guides/Author%20guide%20website/Attendee%20downloads%20of%20conference%20papers.html Blog : Attendee downloads of conference papers http://www.billhowell.ca/Neural%20nets/Conference%20guides/Author%20guide%20website/Attendee%20downloads%20of%20conference%20papers%20blog.html Attendee downloads - summary http://www.billhowell.ca/Neural%20nets/Conference%20guides/Author%20guide%20website/Attendee%20downloads%20-%20summary.html Authors Guide : IEEE-CIS ListServe subscriptions http://www.billhowell.ca/Neural%20nets/Conference%20guides/Author%20guide%20website/IEEE%20ListServe%20publicity%20subscriptions.html Authors Guide : WCCI 2020 HELP 
web-page http://www.billhowell.ca/Neural%20nets/Conference%20guides/Author%20guide%20website/HELP.html
Authors Guide : HELP system description http://www.billhowell.ca/Neural%20nets/Conference%20guides/Author%20guide%20website/HELP%20system%20description.html
+--------+
Publications Guide : IEEE CrossCheck, Problematic papers http://www.billhowell.ca/Neural%20nets/Conference%20guides/Author%20guide%20website/190520%20problematic%20papers,%20sorted.txt
Publicity : Add a new email list to an IEEE-CIS ListServer ,
+--------+
Software, special pdf - insert [ISBN, copyright, header, footer] http://www.billhowell.ca/Software%20programming%20&%20code/bin/pdf%20edits/pdf%20insert%20[ISBN,%20copyright]%20by%20author,%20single%20paper.sh
24********24
#] 0Nov2020 INNS virtual workshop on explainable AI
For Vladimir Cherkassky : Rather than how do we achieve explainable AI systems, what are your thoughts on how we can get Computational Intelligence systems to explain us? This is a much more important problem that is already being tackled in [marketing, politics, social media, etc], but I don't think we would want the machines to do that honestly. We only want proofs that confirm our beliefs.
For Lee Giles : in your presentation you may have mentioned transfer learning >? (too late, garbled question, session was ended...)
********
0Nov2020 INNS virtual workshop on explainable AI
Passcode: 205505
Asim Roy : 862 people registered for the workshop
07:00 100 people had joined
https://us02web.zoom.us/w/84654522678?tk=cCkTAX5fb8RmcDsYK40gpCp6zVzNiQN9QiE0N7nCF6I.AG.uItKqSNOVmCL9_PuvGAhnHH37pKMbJfFvj3UVCFoQ9MHmso0ZGrQ684qs_pV6luFMyspXguyari_GrLlNHeVsm0f0M9-t_ld.RQm6lc1YvbbWoqFHbeWgHg.3WRFelnWPmw3jv8T&pwd=SXV2WU5OVHNqUGF3a0ovYkdjTHdIQT09&uuid=WN_tCQOcbNiRDq7fepEmcYpEg
WORKSHOP SPEAKERS
Stephen Grossberg, Wang Professor of Cognitive and Neural Systems, Boston University, http://sites.bu.edu/steveg/
Juergen Schmidhuber, Scientific Director of IDSIA, http://people.idsia.ch/~juergen/
Jeff Krichmar, Professor of Cognitive Sciences, University of California, Irvine, http://www.socsci.uci.edu/~jkrichma
Vladimir Cherkassky, Professor of ECE, Univ. of Minnesota, https://ece.umn.edu/directory/cherkassky-vladimir/
Lee Giles, Professor of Information Sciences, Pennsylvania State University, https://clgiles.ist.psu.edu/
MODERATORS:
Asim Roy, Professor, Arizona State University (https://lifeboat.com/ex/bios.asim.roy)
Daniel Levine, Professor, University of Texas at Arlington (https://www.uta.edu/psychology/people/daniel-levine.php)
FORMAT: Each speaker - 35 minute presentation + 10 mins Q&A
REGISTRATION - https://www.eventbrite.hk/e/explainable-ai-xai-virtual-workshop-registration-122938932657
With regards,
Irwin King
INNS President, FIEEE, FHKIE, DMACM, BoG APNNS & INNS
Chair, Dept.
of Computer Science & Engineering o +(852) 3943-8398
The Chinese University of Hong Kong f +(852) 2603-5024
Shatin, N.T., Hong Kong http://www.cse.cuhk.edu.hk/irwin.king
Asim Roy
VP, Industry Relations, INNS
Chair, Committee for Virtual Technical Events, INNS
Professor, Arizona State University https://lifeboat.com/ex/bios.asim.roy
********
01Sep2020 https://ieeexplore.ieee.org/document/8892612/authors#authors
IEEE Transactions on Neural Networks and Learning Systems ( Volume: 31, Issue: 7, July 2020 ), Page(s): 2409 - 2429, Date of Publication: 06 November 2019
Donna Xu ; Yaxin Shi ; Ivor W. Tsang ; Yew-Soon Ong ; Chen G
Survey on Multi-Output Learning
>> looks good, should read
Xu etal Survey on Multi-Output Learning, IEEE-TNNLS, July 2020
********
28Jul2020 https://cec2021.mini.pw.edu.pl/en
Jacek Mańdziuk. General Co-Chair. IEEE-CEC2021 Krakow. Warsaw UofT. Poland <>
Stanislaw Kazmierczak. Publicity Co-Chair mass emails. IEEE-CEC2021 Krakow. Warsaw UofT. Poland <>
NCAA - don't review paper : While the paper looks interesting and is well suited to my current area of focus, I am limiting my peer reviews to once every 3-4 months to focus on my own project priorities and other responsibilities.
***********
#] 30Apr2020 Hava Siegelmann - RNN symbolic processing
João Pedro Neto1, Hava T. Siegelmann2, and J. Félix Costa3, Symbolic Processing in Neural Networks, 2003, Journal of The Brazilian Computer Society - JBCS
https://www.academia.edu/15199726/Symbolic_Processing_in_Neural_Networks?email_work_card=abstract-read-more
jpn@di.fc.ul.pt, iehava@ie.technion.ac.il, and fgc@math.ist.utl.pt
1 Faculdade de Ciências, Dept. Informática, Bloco C5, Piso 1, 1700 Lisboa - PORTUGAL
2 Faculty of Industrial Engineering and Management, TECHNION CITY, HAIFA 32 000 - ISRAEL
3 Instituto Superior Técnico, Dept. Matemática, Av.
Rovisco Pais, 1049-001 Lisboa – PORTUGAL /media/bill/SWAPPER/Neural Nets/References/Siegelmann, Neto, Costa 2003 Symbolic_Processing_in_Neural_Networks.pdf *********** #] 25Apr2020 Asim Roy - rebuttal of Grandmother cell commentary https://www.frontiersin.org/articles/10.3389/fnins.2019.01121/full Front. Neurosci., 24 October 2019 | https://doi.org/10.3389/fnins.2019.01121 The Value of Failure in Science: The Story of Grandmother Cells in Neuroscience Ann-Sophie Barwich* Department of History and Philosophy of Science and Medicine, Cognitive Science Program, Indiana University Bloomington, Bloomington, IN, United States From this perspective, focusing on the notion of failure first sounds like a challenge to pluralism. Scientific pluralism is the philosophical view that science is most progressive when it maintains and works with various, sometimes even conflicting models and methods (Kellert et al., 2006). Speaking of failure seems to imply the opposite: that we drop concepts that are considered false or are in dire conflict with other models. However, this opposition is misleading. Pluralism and failure meet at the dilemma of choice. A general objection directed at scientific pluralism is its abundance of options, which opens up concerns of relativism, as Chang (2012, 261) observed: “The fear of relativism, and its conflation with pluralism, will not go away easily. The objection comes back, in a different guise: ‘If you go with pluralism, how do you choose what to believe?’ Well, how do you choose in any case?” (emphasis in original) Pluralism does not preclude choice but suggests that choices are not exclusive, whether such decisions concern the selection of questions, methods, models, or hypotheses. Failure analysis aids in identifying and making such choices. ... In sum, the reasoning embodied by grandmother cells sidelined fruitful lines of inquiry. 
While picked up by contemporary research, these lines were already present in the 1970s (e.g., in Neisser, 1976). Such “unconceived alternatives” resonate with a phenomenon that the philosopher Stanford (2006) found in the history of science: namely, that many notable scientific ideas could have “made it” much earlier because their delayed success was not due to an absence of data, but poverty in conceiving the salience in evidence of alternatives. From this perspective, failure analysis acts as a conceptual tool of model pluralism without relativism: by making epistemic choices explicit and to question the plausibility of our more widespread, although implicit theoretical foundations. -------- Forwarded Message -------- Subject: RE: Connectionists: Grandmother cells a failed concept? Or is Barlow right that grandmother cells exist and “can now be recorded from and studied reliably?" Date: Sat, 25 Apr 2020 22:00:57 -0600 From: Bill Howell. Hussar. Alberta. Canada To: Asim Roy. IJCNN Conference Chair. WCCI2020 Glasgow. Arizona StateU. USA Asim Roy - I finally spent some time going through Ann-Sophie Barwich's article and your response. Thank-you for posting it on connectionists. Why on earth would she have used grandmother cells as an example of failure in the first place? It almost seems to me that she has meta-science concepts to "prove", and by picking a topic that most scientists wouldn't be familiar with she was free to wave her arms around and make proclamations. I don't feel there is much [substance, consistency] in her analysis, it just starts with the assumption that the grandmother cell concept is flawed, then she [waves her arms, flounders around] showing us ?what?. It is far too sloppy for my taste. There are unlimited failures in [climate, astronomy, physics, geology, history, etc, etc] that she MIGHT have been able to handle. 
I've long thought that there are "philosophical problems in science", but the main feeling that I draw from Barwich's paper (as well as from past issues and one particular discussion with philosophers) is that philosophers are ill-suited as a group (with some exceptions) to addressing those problems. I see far more [powerful, relevant] philosophical statements by scientists and even non-scientists, than by philosophers. After watching some degree of "trench warfare" (here I am exaggerating) since 2013 on the grandmother cell concept, it really raises the question to me as to why it is so categorically opposed by many. It seems that many cannot abide the very existence of a concept that they do not prefer, without anything close to a hard case for rejection. I see that in other areas of science as well, to the point that it may be the norm rather than the exception. Strange.
Just tonight I downloaded a "four valued logic" software that I promised someone I would look at. I need to try to see how that fits as a tool in the spectrum [boolean, multi-valued, neutrosophic, fuzzy]. For now, I can relate to boolean and fuzzy (although fuzzy is not a focus of mine), and I sense that there may be great advantages for particular problems for the intermediate logics. But it's not clear that I will actually use those.
Mr. Bill Howell 1-587-707-2027 Bill@BillHowell.ca www.BillHowell.ca
P.O. Box 299, Hussar, Alberta, T0J1S0
member - International Neural Network Society (INNS), IEEE Computational Intelligence Society (IEEE-CIS), WCCI2020 Glasgow, Publicity Chair mass emails, http://wcci2020.org/
Retired: Science Research Manager (SE-REM-01) at Natural Resources Canada, CanmetMINING, Ottawa
***********
#] 01Jan2020 WCCI2020 (IJCNN) review topics :
Main : Aj. Neuroevolution and development
Also :
6b. Neural prosthesis
3a. Dynamical models of spiking neurons
1h. Spiking neural networks
9g. Computations in tissues and cells
extras not included : S48.
Evolutionary Computation-based NNs for AI and Industrial Apps
1i. Reservoir networks ([echo, liquid]-state)
8i. Approximate dynamic programming, adaptive critics, Markov decision processes
S39. Challenges in Reservoir Computing
***********
19Dec2019 INNS Elections - my choices
President : Chrisina Jayne
Directors : Richard Duro, DeLiang Wang, Khan Iftekharuddin, Seiichi Ozawa, Jaouad Boumhidi, Qinglai Wei
**************
#] 04Nov2019 IEEE-CIS AdCom election
I voted for :
Special Election for AdCom member - To fill a vacancy for the Term ending 31 December 2019 : Timothy Havens
Special Election for AdCom member - To fill a vacancy for the Remaining Term 1 January 2020 - 31 December 2022 : Robi Polikar
Society Administrative Committee - Term 1 January 2020 - 31 December 2022 : Peter Corcoran, Keeley Crockett, Gary Fogel, Haibo He, Yaochu Jin
Confirmation#: 5740191134237186 Date: November 04, 2019 Time: 10:42 PM (GMT-05:00) Eastern Time (US & Canada)
**************
#] 02Nov2019 NEUNET-D-19-00287R1 p Sahoo,Narayanan - Differential-game for Resource Aware Optimal Control
NEUNET-D-19-00287R1 p Sahoo,Narayanan - Differential-game for Resource Aware Approximate Optimal Control of Large-scale Nonlinear Systems with Multiple Players.pdf
**************
#] 29Oct2019 INNS Ada Lovelace Award submission
Bill Howell has been a member of the International Neural Network Society since approximately 1988 (member # 174), and has assisted with INNS-related conferences since 2005. A list of conference responsibilities is given below. He was elected to the INNS Board of Governors for 2014-2015, during which time he served as Secretary. Bill has also been a member of IEEE-CIS since ~2003. Neural Networks have long been a primary "hobby interest" for Bill, and IJCNN attendance has long been one of two principal types of personal vacations (the other being visiting family).
As such, his involvement in IJCNN Organizing Committees and in doing peer reviews over the years was a way of giving back to the community, and a great way to get to know awesome scientists as friends. Neural Networks were never a part of his job responsibilities, and were not part of the research groups that he managed or for which he worked as a [Secretary, administrator] at the technical or management committee level at the CANMET Mining research laboratories of the Department of Natural Resources in Canada. Nor did Neural Networks come up in policy-science committees in which he participated. Retired since late 2012, Bill worked in inorganic [chemical plant operations, industrial R&D, market research], and spent the last half of his career at the CANMET Mining Labs of the Department of Natural Resources of Canada.
Bill's current research project is to do step-by-step verifications of classical physics ("natural philosophy") concepts that could potentially replace major portions of modern physics (or maybe not, who knows?). In a year or two, once the current stage of that project is complete, Bill plans to return to his primary interest in neural networks, "Mindcode", a loose collection of ideas for a "DNA" basis for spiking neural networks, and the related issues of [architecture, data, functions, processes, operating systems, consciousness, behaviours, personalities, creativity]. Recent personal R&D projects include history (with his father, who is a history buff), climate science, and following specialized areas of plasma physics, astronomy, and geology.
Besides INNS, Bill's other volunteer commitments include the local volunteer fire department in the small village of Hussar, Alberta (we are called "the basement savers", as it takes 20-30 minutes to get to many of the farmsteads), charity work with Lions Club International (including work at the Lions eyeglass center, which ships recycled and quality-tested eyeglasses to foreign nations), and help with local festivals and ceremonies.
Conference involvement :
2005 IJCNN Montreal : Technical Co-Chair with Dan Levine, under General Chair Danil Prokhorov
2007 IJCNN Orlando : Publicity Chair under General Chair Jennie Si
2009 IJCNN Atlanta : Publicity Chair under General Chair Robert Kozma
2013 IJCNN Dallas : Publicity Chair under General Co-Chairs Daniel Levine and Plamen Angelov
2015 INNS BigData : Publicity Co-Chair with Simone Scardapane and Julio
2015 IJCNN Killarney : Publicity Co-Chair with Simone Scardapane and Julio
2016 INNS BigData & DeepLearning : Publicity Co-Chair with Simone Scardapane and Julio
2017 IJCNN Anchorage : Publications Chair, assisted with Publicity mass emails, under General Chair Yoonsuck Choe
2018 WCCI Rio de Janeiro : Publicity Co-Chair for mass emails, under General Co-Chairs Marley Vellasco & Pablo Estevez
2019 IJCNN Budapest : Sponsors & Exhibits Chair, assisted with [Publications, Publicity mass emails]
2020 WCCI Glasgow : Publicity Co-Chair for mass emails
Publicity mass emails :
- from 2007-2019, Bill ran an email server through his ISP provider
- starting in ~2009, he started evolving his own quick programs for maintaining mass email lists
- for WCCI2020 he set up IEEE ListServers for handling [lists, maintenance, sendouts]. This will also make it far easier for [security, hand-off to other Publicity Co-Chairs, data security year-to-year].
[Author, Publications] Guide websites : This was built for IJCNN2019, primarily for his own understanding of the details of the IEEE Publications process.
The Author Guide was also a necessary system for him to better understand author needs. It also may have been useful for perhaps 100-200 young authors and professionals who were not familiar with paper submissions via Tomasz Cholewo's paper system, the IEEE [CrossCheck, copyright, PDF eXpress, Xplore] systems, or conference paper [formatting, corrections], and some other author actions-responsibilities.
IEEE CrossCheck analysis : This was a trial project for IJCNN2019, for which quantitative criteria were put in place for paper acceptance based on CrossCheck (iThenticate-based) text [self, external] similarities with published literature. Self-similarity was the main challenge and there is still a need for quick tools to allow authors who do not have direct access to iThenticate to pre-check their papers before submission to make sure that they meet the criteria.
Publications in the Neural Networks area : Only one conference (discussion) paper : William Neil Howell 2006 "Genetic specification of recurrent neural networks: Initial thoughts", Proceedings of WCCI 2006, World Congress on Computational Intelligence, Vancouver, paper #2074, pp 9370-9379, 16-21 July 2006
Peer reviews :
- Journal papers : ~50 peer reviews, primarily for the Neural Networks journal, but also IEEE Transactions on Neural Networks and Learning Systems, and Neural Computing and Applications
- Conference papers : >150 peer reviews, primarily for IJCNN and WCCI, but also INNS BigData and Deep Learning, and for a few conferences of INISTA, ISNN, ICICIP, ICIST, SSCI, and CISDA.
**************
#] 19Aug2019 Suykens papers on NN modules, architectures
Singaravel S., Suykens J.A.K., Geyer P., "Deep-learning neural-network architectures and methods: using component-based models in building-design energy prediction", Advanced Engineering Informatics, vol. 38, Oct. 2018, pp. 81-90, Lirias number: 1693175.
Karevan Z., Suykens J. A.
K., "Spatio-temporal Stacked LSTM for Temperature Prediction in Weather Forecasting", Internal Report 18-136, ESAT-STADIUS, KU Leuven (Leuven, Belgium), 2018, Lirias number: x.
**************
#] 01May2019 http://archive.ics.uci.edu/ml
**************
#] 13Mar2019 ICIC2019 Call for Papers-Update 5 - html mistakes http://ic-ic.tongji.edu.cn/2019/Call%20for%20Papers.htm
**************
#] 29Aug2018 email to Sarah Howell
Catherine, Sarah - For no particular reason, tired after working to 03:10 in the morning last night as a volunteer for the Lions casino, I checked up on the hippocampal prosthesis. Turns out that earlier this year, a news article about www.kernel.com popped up :
https://www.wired.com/story/hippocampal-neural-prosthetic/
Although human trials were carried out, I have not yet looked at the scientific papers. "... The team worked with 22 patients awaiting surgery for epilepsy. ..." (good choice given ethics and concern about ruining a normal person's memories). In any case the results were quite impressive on the surface.
https://logancollinsblog.com/2018/01/02/global-highlights-neuroengineering-towards-whole-brain-emulation-and-mind-uploading/
Strangely, I would not have been able to predict the "telepathic rats", which was done in 2013. Was this substantial? What does it mean? Are rats firmly ahead of humans now?
Ted Berger, the long-term key scientist, broke off with the company, which started with Ted as Chief Scientist, [apparently, perhaps] out of concern with the demanded pace of advancement?
http://neurotechreports.com/pages/publishersletterFeb17.html
My own encounters with Ted :
IJCNN2004 Budapest - Ted gave a presentation
IJCNN2005 Montreal - I had invited him to present at this conference, which he did. Strangely, enthusiasm for his work wasn't great at the conference.
He was disappointed in me, as he thought that a conference paper would automatically lead to a journal paper, and I can't remember if he was part of that conference's Best Papers Special Issue. I did [get,?buy?] his book "Replacement Parts for the Brain" from him, which he didn't want to carry home, and read the book. 22Feb2016 Giacomo Boracchi sent me a review invitation - I was really surprised to see it turned out to be a paper by Ted Berger! I was HUGELY enthusiastic, like a cheerleader, and gave it highest ratings. I really liked the way their mathematical approaches had advanced. But I guess the other reviewers panned it. I suppose it doesn't surprise you that your Dad's opinion often differs radically from the opinions of leading experts. https://spectrum.ieee.org/the-human-os/biomedical/devices/starkeys-ai-transforms-hearing-aid-into-smart-wearables Your grandfather's hearing is getting worse, so he shouts as he talks in A&W in the morning. He never uses his hearing aids, because of well-known problems with them. DeLiang Wang, whom I've known well since 2005 from the INNS Board of Directors (I sat for two years), and who is Co-Editor-in-Chief of the INNS Neural Networks journal, has progressed in his work with Deep Learning neural networks and hearing aids. He collaborates with Starkey, a leading manufacturer, but they haven't produced a commercial model. I've seen this before - I don't think they can sell hearing aids that require a wire that descends to a sub-clothing torso electric unit with the power to do that kind of processing. Strange to me that people don't want function if it interferes with their looks?!?!
*******************************
#] 20Nov2018 LinkedIn
Subba Reddy Oota 7:19 AM Hi Bill Hope u r doing good I am happy to share this news that my two NIPS workshop papers got accepted this year https://openreview.net/forum?id=HJfYl4s0tX openreview.net A Deep Autoencoder for Near-Perfect fMRI Encoding openreview.net May I know any PhD recommendations from you..
My Reply : Thanks for the reminder, Subba - I'm just trying to get back to my projects after two months of scrambling on IJCNN2019 organisation (I'm way behind on that, too) and other things. I promise to send out feelers over the next two weeks regarding PhD opportunities at main USA universities. Congratulations on the Workshop paper acceptances for NIPS! Fascinating...
***********************
Good papers! N-0028.pdf, N-0017.pdf (quaternion! IJCNN2015 as well)
**********************
#] 19Jun2019 for Sarah Howell :
Kenneth O. Stanley 13Jul2018 "Neuroevolution: A different kind of deep learning. The quest to evolve neural networks through evolutionary algorithms." https://www.oreilly.com/ideas/neuroevolution-a-different-kind-of-deep-learning Stanley 13Jul2018 Neuroevolution: A different kind of deep learning /media/bill/HOWELL_BASE/Neural Nets/References/Robinson, Barron 07Apr2017 Epigenetics and the evolution of instincts.pdf
**********************
#] 26Mar2018 WCCI2018 travel budget
Registration 1,000 C$ (700 $US/ 0.7 $US/$C) due 01Apr2018
Hotel 1,400 = 201 $US/day * 7 days Windsor Barra Hotel (conference hotel)
Airfare travelocity 20 to 30 hours leave [Fri 06Jul, Sat 14Jul] - C$ 1,642 to 1,931 [Thu 05Jul, Sat 14Jul] - same price, [Thu 05Jul, Sun 15Jul] - C$ 1,475 (lower by 167 $, and >200 $ lower compared to separate flight-hotel bookings) I need to find "close by" hotels
Meals 1,050 = 150 $/day * 7 days
Taxis, Bus 150 $C ???
YYC parking 150 $C
Tutorials none
Total 5,200 C$ = 1,000 + 2,844 + 1,050 + 150 + 150
3 days conf only (plus 2 travel days) 4,000 C$ = 1,000 registrn + 1,642 airfare + 3*(200 hotel + 150 meals) + 150 taxis + 150 YYC prkg
*****************
#] 20Mar2018 search "Ted Berger and hippocampal prosthesis company"
https://spectrum.ieee.org/the-human-os/biomedical/bionics/new-startup-aims-to-commercialize-a-brain-prosthetic-to-improve-memory 16 Aug 2016 | 18:00 GMT New Startup Aims to Commercialize a Brain Prosthetic to Improve Memory Kernel wants to build a neural implant based on neuroscientist Ted Berger's memory research By Eliza Strickland telepathic rats
*****************
#] 06Mar2018 DeepLearn 2018 - 2nd INTERNATIONAL SUMMER SCHOOL ON DEEP LEARNING
July 23-27, 2018, Genova, Italy Organized by: University of Genova, IRDTA – Brussels/London http://grammars.grlmc.com/DeepLearn2018/
Jose C. Principe (University of Florida), [introductory/advanced] Cognitive Architectures for Object Recognition in Video
Johan Suykens (KU Leuven), [introductory/intermediate] Deep Learning and Kernel Machines
*****************
#] 26Feb2018 search "Neural networks and performance measures" RMSE, NMI, Purity
https://www.frontiersin.org/articles/10.3389/fncom.2014.00043/full Sirko Straube and Mario M. Krell, Robotics Group, University of Bremen, Bremen, Germany 10Apr2014 How to evaluate an agent's behavior to infrequent events?—Reliable performance estimation insensitive to class distribution Front. Comput. Neurosci., 10 April 2014 | https://doi.org/10.3389/fncom.2014.00043 For classification - imbalance, confusion matrices ([positive, negative] versus [positive, negative]) 5. Conclusions: Metrics Insensitive to Imbalanced Classes
https://papers.nips.cc/paper/548-benchmarking-feed-forward-neural-networks-models-and-measures.pdf Leonard G.C. Hamey, Computing Discipline, Macquarie University NSW 2109 AUSTRALIA ?date?
Benchmarking Feed-Forward Neural Networks: Models and Measures Too esoteric! Mainly for long epoch times...
http://eng.auburn.edu/sites/personal/aesmith/files/publications/journal/PerformanceMeasures.pdf J. M. Twomey and A. E. Smith, Department of Industrial Engineering, Uof Pittsburgh, Performance Measures, Consistency, and Power for Artificial Neural Network Models, Mathl. Comput. Modelling Vol. 21, No. 1/2, pp. 243-258, 1995 Too old RMSE...
**************
#] 20Feb2018 INNS President election
Hava Siegelmann, then Leonid Perlovsky, not Irwin King
*************************
#] 12Feb2018 NIPS 2017 papers - analysis
https://papers.nips.cc/book/advances-in-neural-information-processing-systems-30-2017 Synced, 2017-12-20 to extract papers & author emails, see "/media/bill/SWAPPER/bin/conf papers & author emails.sh"
+-----+
https://syncedreview.com/2017/12/20/a-statistical-tour-of-nips-2017/ Awesome summary!!
Demonstration - 20, Invited talk - 9, Oral 176 40, Poster 2,428 679, Spotlight 387 112, Symposium - 4, Tutorial - 9, Workshop - 52
Acceptance by subject (approximate from graph) submit accept : Algorithms 950 200, Deep Learning 650 140, Applications 610 100, Prob Methods 230 70, Optimization 220 65, Theory 210 65, NeuroCogSci 170 30, RL&Planning 100 20, Data etc 10 5, Other 5 -, carry 3,200 210, Total 3,155 695 eyeballed, Total 3,240 679 official, Acceptance rate 21%
Publication Leaderboard (approximate from graph) Google 60 Carnegie MellonU 48 MIT 45 Microsoft 40 StanfordU 38 UC Berkeley 35 DeepMind 31 Uof Oxford 22 UIll, Urbana-Champaign 20 GeorgiaTech 18 Princeton 17 ETH Zurich 17 IBM 16 INRIA 15 Harvard 15 Cornell 15 Duke 14 ColumbiaU 14 UofCambridge 14 EPFL 14 Uof Michigan 13 Uof Toronto 12 USC 12 TsinghuaU 12 Facebook 11 Riken 11 Uof Washington 10 UCLA 10 UofTexas Austin 10 NYU 10 UCollege London 10 UdeMontreal 9 Tencent AI Lab 9 OpenAI 8 Adobe 8 UCSanDiego 8 Uof Tokyo 7 Uof Pittsburgh 7 Uof Minnesota 7 Uof CalDavis 7 Technion 6 UofPennsylvania 6 NanjingU 6 JohnsHopkinsU 6 Uof
Wisconsin-Madison 6 Australian NatU 5 TelAvivU 5 OhioStateU 5 NatU Singapore 5 carry-over 7(21)0 Total listed 741 eyeballed & hand calc 749 eyeballed & spreadsheet Total papers 679 official
Participation : Get to know the trend 76.90%, Networking 15.38, Job opportunities 7.69, Others 0.03
Sponsorships : 2016 840 k$, 2017 1,760, 84 institutions, 51 from US
https://www.change.org/p/take-nips-2017-out-of-the-us-to-safeguard-academic-freedom
*********************************
#] 02Feb2018 Yoh-Han Pao, Yoshiyasu Takefuji: Functional-Link Net Computing
14. Yoh-Han Pao, Yoshiyasu Takefuji: Functional-Link Net Computing: Theory, System Architecture, and Functionalities. IEEE Computer 25(5): 76-79 (1992)
***************************
#] 01Feb2018 Regularisation versus ordered derivatives
http://cs231n.github.io/neural-networks-2/ very good overall discussion about setting up NNs and improving results; of course, no mention of ordered derivatives
********************************
#] 18Jan2018 Kenneth Stanley "Neuroevolution: A different kind of deep learning"
I posted to Facebook : https://www.oreilly.com/ideas/neuroevolution-a-different-kind-of-deep-learning Neuroevolution: A different kind of deep learning Is this the new BIG THING in AI (actually CI - Computational Intelligence), well beyond Deep Learning neural nets, and much more profound? I think Simone Scardapane posted this, important for me as the area targets much of my thinking for my "MindCode" project that I have not worked on since the mid-to-late 1990s. Kenneth O. Stanley's article is extremely well done, and mentions many great researchers like Dario Floreano, Andrea Soltoggio, Xin Yao, Risto Miikkulainen, and David Fogel (Blondie24!!). 18Jan2018 NEURO-EVOLUTION - this is MindCode concepts!! from 20 years ago https://www.oreilly.com/ideas/neuroevolution-a-different-kind-of-deep-learning Neuroevolution: A different kind of deep learning The quest to evolve neural networks through evolutionary algorithms. By Kenneth O.
Stanley, July 13, 2017 When I first waded into AI research in the late 1990s, the idea that brains could be evolved inside computers resonated with my sense of adventure. At that time, it was an unusual, even obscure field, but I felt a deep curiosity and affinity. The result has been 20 years of my life thinking about this subject, and a slew of algorithms developed with outstanding colleagues over the years, such as NEAT, HyperNEAT, and novelty search. In this article, I hope to convey some of the excitement of neuroevolution as well as provide insight into its issues, but without the opaque technical jargon of scientific articles. I have also taken, in part, an autobiographical perspective, reflecting my own deep involvement within the field. I hope my story provides a window for a wider audience into the quest to evolve brains within computers. My co-author Joel Lehman and I wrote the book, Why Greatness Cannot Be Planned: The Myth of the Objective. In other words, as we crack the puzzle of neuroevolution, we are learning not just about computer algorithms, but about how the world works in deep and fundamental ways.
Mentions : Dario Floreano - plastic neural networks, influenced by his early works Andrea Soltoggio - later ideas on neuromodulation, which allows some neurons to modulate the plasticity of others Xin Yao, Risto Miikkulainen, David Fogel's Blondie24!! This is dealing with MindCode stuff!!
Key concepts : indirect coding novelty search - sometimes better-faster than selecting best candidates quality diversification : “quality diversity” and sometimes “illumination algorithms.” This new class of algorithms, generally derived from novelty search, aims not to find a single optimal solution but rather to illuminate a broad cross-section of all the high-quality variations of what is possible for a task, like all the gaits that can be effective for a quadruped robot.
One such algorithm, called MAP-Elites (invented by Jean-Baptiste Mouret and Jeff Clune), landed on the cover of Nature recently (in an article by Antoine Cully, Jeff Clune, Danesh Tarapore, and Jean-Baptiste Mouret) for the discovery of just such a large collection of robot gaits, which can be selectively called into action in the event the robot experiences damage. Open-endedness Another interesting topic (and a favorite of mine) well suited to neuroevolution is open-endedness, or the idea of evolving increasingly complex and interesting behaviors without end. Many regard evolution on Earth as open-ended, and the prospect of a similar phenomenon occurring on a computer offers its own unique inspiration. One of the great challenges for neuroevolution is to provoke a succession of increasingly complex brains to evolve through a genuinely open-ended process. A vigorous and growing research community is pushing the boundaries of open-ended algorithms, as described here. My feeling is that open-endedness should be regarded as one of the great challenges of computer science, right alongside AI. Players For example, Google Brain (an AI lab within Google) has published large-scale experiments encompassing hundreds of GPUs on attempts to evolve the architecture of deep networks. The idea is that neuroevolution might be able to evolve the best structure for a network intended for training with stochastic gradient descent. In fact, the idea of architecture search through neuroevolution is attracting a number of major players in 2016 and 2017, including (in addition to Google) Sentient Technologies, MIT Media Lab, Johns Hopkins, Carnegie Mellon, and the list keeps growing. (See here and here for examples of initial work from this area.) http://eplex.cs.ucf.edu/neat_software/ Getting involved If you’re interested in evolving neural networks yourself, the good news is that it’s relatively easy to get started with neuroevolution.
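Getting started really can be small. As a toy illustration of the novelty-search idea described above (select for behaviors far from what has been seen, not for progress toward an objective), here is a minimal sketch; all function names, the behavior descriptor, and the selection scheme are simplified inventions for illustration, not NEAT or SharpNEAT:

```python
import random

def behavior_distance(b1, b2):
    # Euclidean distance between two behavior descriptors
    # (e.g., a robot's final (x, y) position)
    return sum((x - y) ** 2 for x, y in zip(b1, b2)) ** 0.5

def novelty(behavior, archive, k=3):
    # Novelty score = mean distance to the k nearest behaviors seen so far
    if not archive:
        return float("inf")
    dists = sorted(behavior_distance(behavior, a) for a in archive)
    return sum(dists[:k]) / min(k, len(dists))

def novelty_search(evaluate, mutate, seed_genome, generations=50, pop_size=20):
    # evaluate(genome) -> behavior descriptor; mutate(genome) -> child genome.
    # Selection rewards novel behavior rather than fitness on an objective.
    population = [mutate(seed_genome) for _ in range(pop_size)]
    archive = []
    for _ in range(generations):
        scored = [(novelty(evaluate(g), archive), g) for g in population]
        scored.sort(key=lambda t: t[0], reverse=True)
        survivors = [g for _, g in scored[: pop_size // 2]]
        archive.extend(evaluate(g) for g in survivors)
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return archive
```

The archive that accumulates is itself the interesting output, much as the quality-diversity methods above return a broad collection of solutions rather than a single winner.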
Plenty of software is available (see here), and for many people, the basic concept of breeding is intuitive enough to grasp the main ideas without advanced expertise. In fact, neuroevolution has the distinction of many hobbyists running successful experiments from their home computers, as you can see if you search for “neuroevolution” or “NEAT neural” on YouTube. As another example, one of the most popular and elegant software packages for NEAT, called SharpNEAT, was written by Colin Green, an independent software engineer with no official academic affiliation or training in the field.
*******************
#] 15Oct2017 IEEE-CIS election, I chose :
Society Administrative Committee Term 1 January 2018 – 31 December 2020 Christian Wagner Gary G. Yen Haibo He Derong Liu Sanaz Mostaghim Full list of candidates : Society Administrative Committee Term 1 January 2018 – 31 December 2020 (Please check the box next to your selection. You may vote for up to 5 candidates.) [Unchecked] Ali A. Minai details [Checked] Christian Wagner details [Checked] Gary G. Yen details [Unchecked] Chin-Teng (CT) Lin details [Unchecked] Jie Lu details [Unchecked] Dongrui Wu details [Unchecked] Plamen Angelov details [Unchecked] Ronald R.
Yager details [Checked] Haibo He details [Checked] Derong Liu details [Unchecked] Ponnuthurai Nagaratnam Suganthan details [Unchecked] Janusz Kacprzyk details [Checked] Sanaz Mostaghim details [Unchecked] Timothy Havens details [Unchecked] Write-In [Unchecked] Abstain Confirmation#: 1658317102149887 Date: October 15, 2017 Time: 10:01 PM (GMT-05:00) Eastern Time (US & Canada) ******************** #] 13Sep2017 IEEE Elections my selections IEEE President-Elect, 2018 Y Vincenzo Piuri (Nominated by IEEE Board of Directors) N Jacek M. Zurada (Nominated by IEEE Board of Directors) Best Jose M. F. Moura (Nominated by Petition) . IEEE Division Delegate-Elect/Director-Elect, 2018 (IEEE-CIS in this) Division X Societies: IEEE Biometrics Council; IEEE Computational Intelligence Society; IEEE Control Systems Society; IEEE Engineering in Medicine and Biology Society; IEEE Photonics Society; IEEE Robotics and Automation Society; IEEE Sensors Council; IEEE Systems, Man, and Cybernetics Society; IEEE Systems Council YN John R. Vig (Nominated by IEEE Division X) N Ljiljana Trajkovic (Nominated by IEEE Division X) Y Okyay Kaynak (Nominated by Petition) . IEEE Region Delegate-Elect/Director-Elect, 2018-2019 Region 7 (Canada) Y Jason Jianjun Gu (Nominated by IEEE Region 7) N Adam Skorek (Nominated by IEEE Region 7) . IEEE Standards Association President-Elect, 2018 Y Dennis B. Brophy (Nominated by IEEE Standards Association) N Robert S. Fish (Nominated by IEEE Standards Association) . IEEE Standards Association Board of Governors Member-at-Large, 2018-2019 (Selections were given, no vote) Y Walter Weigel (Nominated by IEEE Standards Association) Y Masayuki Ariyoshi (Nominated by IEEE Standards Association) N Robby Robson (Nominated by IEEE Standards Association) YN Stephen D. Dukes (Nominated by IEEE Standards Association) . IEEE Technical Activities Vice President-Elect, 2018 Y K.J. Ray Liu (Nominated by IEEE Technical Activities) N Douglas N. 
Zuckerman (Nominated by IEEE Technical Activities) . IEEE-USA President-Elect, 2018 N Guruprasad “Guru” Madhavan (Nominated by IEEE-USA) Y Thomas M. Coughlin (Nominated by IEEE-USA)
************
#] 25Aug2017 NN stability references?
Vu N. Phat (a), Le V. Hien (b) 07Jan2009 An application of Razumikhin theorem to exponential stability for linear non-autonomous systems with time-varying delay. Applied Mathematics Letters, v22, i9, Sep2009, Pages 1412-1417 http://www.sciencedirect.com/science/article/pii/S0893965909001372 (a) Institute of Mathematics, Hanoi, Viet Nam (b) Department of Mathematics, Hanoi National University of Education, Viet Nam Abstract - In this work, in the light of the Razumikhin stability theorem combined with the Newton–Leibniz formula, a new delay-dependent exponential stability condition is first derived for linear non-autonomous time delay systems without using model transformation and bounding techniques on the derivative of the time-varying delay function. The condition is presented in terms of the solution of Riccati differential equations. Phat, Hien 07Jan2009 An application of Razumikhin theorem to exponential stability for linear non-autonomous systems with time-varying delay Rob H. Gielen (a), Mircea Lazar (a), Sasa V. Rakovic (b) 01Apr2013 Necessary and Sufficient Razumikhin-Type Conditions for Stability of Delay Difference Equations. IEEE Transactions on Automatic Control, Volume: 58, Issue: 10, Oct. 2013, pp2637-2642, DOI: 10.1109/TAC.2013.2255951 http://ieeexplore.ieee.org/document/6491447/ (a) Dept. of Electr. Eng., Eindhoven Univ. of Technol., Eindhoven, Netherlands (b) St. Edmund Hall of Oxford Univ., Oxford, UK Gielen, Lazar, Rakovic 01Apr2013 Necessary and Sufficient Razumikhin-Type Conditions for Stability of Delay Difference Equations Abstract: This technical note considers stability analysis of time-delay systems described by delay difference equations (DDEs).
All existing analysis methods for DDEs that rely on the Razumikhin approach provide sufficient, but not necessary conditions for asymptotic stability. Nevertheless, Lyapunov-Razumikhin functions are of interest because they induce invariant sets in the underlying state space of the dynamics. Therefore, we propose a relaxation of the Razumikhin conditions and prove that the relaxed conditions are necessary and sufficient for asymptotic stability of DDEs. For linear DDEs, it is shown that the developed conditions can be verified by solving a linear matrix inequality. Moreover, it is indicated that the proposed relaxation of Lyapunov-Razumikhin functions has an important implication for the construction of invariant sets for linear DDEs. George Seifert 18Jun1973 Liapunov-Razumikhin Conditions for Asymptotic Stability in Functional Differential Equations of Volterra Type. Journal of Differential Equations, 16, 289-297 (1974) http://www.sciencedirect.com/science/article/pii/0022039674900163 Iowa State University, Ames, Iowa, USA Seifert 18Jun1973 Liapunov-Razumikhin Conditions for Asymptotic Stability in Functional Differential Equations of Volterra Type D. Baleanu, A. Ranjbar N., S.J. Sadati R., H. Delavari, T. Abdeljawad (Maraaba), V. Gejji 27Jan2010 "Lyapunov-Krasovskii Stability Theorem For Fractional Systems With Delay" http://www.ifin.ro/rjp/2011_56_5-6/0636_0643.pdf https://www.researchgate.net/publication/259441979_Lyapunov-Krasovskii_Stability_Theorem_for_Fractional_Systems_with_Delay V.L. Kharitonov (a), A.P. Zhabko (b) 24Jul2002 Lyapunov–Krasovskii approach to the robust stability analysis of time-delay systems, Automatica 39 (2003) 15–20, www.elsevier.com/locate/automatica, http://www.apmath.spbu.ru/ru/staff/kharitonov/publ/publ2.pdf (a) CINVESTAV-IPN, Automatic Control Department, A.P.
14-740, Mexico, D.F. 07300, Mexico (b) Applied Mathematics and Control Processes Department, St.-Petersburg State University, 198904, St.-Petersburg, Russia Kharitonov, Zhabko 24Jul2002 Lyapunov–Krasovskii approach to the robust stability analysis of time-delay systems
***********
#] 29Jun2017 Preparation of mass email instructions, example spreadsheet
emto Abir Hussain. General Chair. 2017 Conf on Development of E-Systems Engineering in Edinburgh. John Moores U. Liverpool. UK
*******************
#] 07May2017 Print out papers in two sessions that I chair
Theory4 : 043 Popa - Octonion-Valued Bidirectional Associative Memories 058 Villasenor, Arana-Daniel, Alanis, Lopez-Franco - Hyperellipsoidal Neuron 453 Arce, Zamora, Sossa - Dendrite Ellipsoidal Neuron 814 Osakabe, Sato, Akima, Kinjo, Sakuraba - Neuro-inspired Quantum Associative Memory Using Adiabatic Hamiltonian Evolution 320 Liu, Sun, Hu, Gao, Ju, Yin - Matrix Variate RBM Model with Gaussian Distributions
Deep6 : 628 Zhou, Bertram Shi - Action Unit Selective Feature Maps in Deep Networks for Facial Expression Recognition 660 Eisenbach etal - How to Get Pavement Distress Detection Ready for Deep Learning, A Systematic Approach 723 Monteiro, Granada, Barros, Meneguzzi - Deep Neural Networks for Kitchen Activity Recognition 491 Liu, Gao, Bao, Tang, Wu - Deep Convolutional Neural Networks for Pedestrian Detection with Skip Pooling
********************
Deep Learning successes :
Search Engine - Google search revolution Image & Video - Driverless Cars, Google Images, Google speech recognition DeLiang Wang & Starkey - Hearing Aids http://spectrum.ieee.org/tech-talk/computing/software/google-translate-gets-a-deep-learning-upgrade Google's neural machine translation 3Oct2016 Language - understanding text Conversation partners (Lisa in old days, & people purportedly talked with it (I don't believe it until I see it))? WOW!! was I right!!
http://spectrum.ieee.org/tech-talk/computing/software/deep-learning-startup-maluubas-ai-wants-to-talk-to-you Canadian startup Maluuba 01Dec2016 - deep conversation partner "... actor Joaquin Phoenix’s character and the AI named Samantha voiced by actress Scarlett Johansson in the 2013 film “Her.” Phoenix’s character eventually forms a romantic relationship with his AI companion as they share meaningful conversations that include both moments of laughter and sorrow. ..." Customer opportunities, problems, new trends, threats Games & Competition/Co-operation - Google Alpha Go, business strategies Human Resources - job candidate selection (Isabelle Guyon video job interviews), promotions/firing, reorganisation http://spectrum.ieee.org/automaton/robotics/artificial-intelligence/deep-learning-ai-listens-to-machines-for-signs-of-trouble/?utm_source=RoboticsNews&utm_medium=Newsletter&utm_campaign=RN01172017 Machine condition trends from acoustic/vibration - 3DSignals, a startup based in Kefar Sava, Israel 27Dec2016 - relies on the artificial intelligence technique known as deep learning to understand the noise patterns of troubled machines and predict problems in advance ******************** #] 01Dec2016 TB incidence in China http://bmcmedinformdecismak.biomedcentral.com/articles/10.1186/1472-6947-13-56 Cao etal 02May2013 A hybrid seasonal prediction model for tuberculosis incidence in China ******************** #] 01Dec2016 wireless optogenetic tools - AWESOME for neuroscience & neural networks!!! 
http://spectrum.ieee.org/biomedical/devices/neuroscientists-wirelessly-control-the-brain-of-a-scampering-lab-mouse?bt_alias=eyJ1c2VySWQiOiAiOTU5NzQ0YzUtZTY4MS00OTcyLTkzZWUtMTMyMjAxNWU5NjYyIn0%3D Neuroscientists Wirelessly Control the Brain of a Scampering Lab Mouse With wireless optogenetic tools, neuroscientists steer mice around their cages
******************
#] 20Oct2016 INNS Board votes :
Yoonsuck Choe, definitely (very good, proactive, leader) Zheng-Guang Hou, China Marley Vellasco, latin/female Seiichi OZAWA, Japan, active Amir Hussain, Arabic, prolific, active but not so energetic Barbara Hammer, Germany/female Couldn't fit in : Jonathan H. Chan, Thailand
******************
#] 08Aug2016 IJCNN mass email notes
BillHowell.ca https://secure.lexicom.ca/login.php billhowell LB8%H5+#aCU+ ijcnn.cust.lexi.net http://customer.lexi.net/ billhowell a743bmtx When adding part 2 to empty list : 7: liangeng1976@163.com (OK) 8: liangf@illinois.edu (OK) 9: liangge@buffalo.edu (OK) 10: lianghongjing99@163.com (ERR) 11: liangjing@pmail.ntu.edu.sg (ERR) 12: liangjj8@163.com (ERR) 13: lianglc@ece.umn.edu (ERR) 14: liangping@pmail.ntu.edu.sg (ERR) 15: liangqingwei@sina.com (ERR) part 1 : 9999 on list, server accepted 9980 I didn't capture list addition results. part 2 : 9999 on list, server accepted 9420 /media/bill/USB DISK/a_INNS Lexicom email server/sendout list and response-list processing/160808 IJCNN a mass email list part 2 confirmation.txt part 3 : 577 on list, server accepted ????
***********
#] 08Aug2016 LibreCalc template error
/home/bill/.config/libreoffice/4/user/template/Howell spreadsheet.ots does not exist.
I copied /home/bill/Forms/0_form spreadsheet.ods to /home/bill/.config/libreoffice/4/user/template
*********************
#] 28Jul2016 Technn papers
A Memory Efficient DNA Sequence Alignment Technique Using Pointing Matrix, anonymous author The Architecture of the Multi-Agent Resource Conversion Processes Extended with Agent Coalitions, anonymous author Predicting Public Housing Prices Using Delayed Neural Networks, anonymous author
***********************
#] 15Jun15 DeepLearn a mass email list - final.txt
18Jun2015 IEEE-CIS History Committee http://cis.ieee.org/history.html https://history.ieee-cis.sightworks.net/ But this is not trusted? May have been hijacked?
*******************
#] 18Jun2015 Women in NN
Bernadette Bouchon-Meunier, Vice President for Conferences (2014-2015) LIP6, University of Pierre & Marie Curie, France Bernadette.Bouchon-Meunier .a_t. lip6.fr www: http://webia.lip6.fr/~bouchon
*******************
#] 01Mar2015? Questions for Dieterich :
1) (Deep Blue versus Kasparov in chess) and Blondie24 in contrast to (Watson versus Jennings in Jeopardy) sense of "cheating" - [public relations, image] challenge human perceptions of machine performance - safest to "hide science under the hood" in a largely technophobic society? 2) [contrast, combination] * [Bayes theorem, Confabulation of Robert Hecht-Nielsen] Confabulation : the impression that I have (without the experience of having applied it myself) : (a) Bayes - maximize expected value of outcome (find the "most truthful outcome") (b) Confabulation : i. maximize "truthfulness of inputs" ii. under simple conditions, reduces to Aristotelian logic Has IBM considered confabulation - if so : (a) do you see the distinction between [Bayes theorem, Confabulation of Robert Hecht-Nielsen] as being legitimate? (b) If you see confabulation theory as being legitimate : i.
In practice, has Bayes' theorem been stretched beyond its proper bounds, yielding pragmatic successes but leaving scientists with a misleading conceptual understanding? ii. Have you tried to apply confabulation? More questions for Schmidhuber 15562 p Zhu, Miao, Qing, Huang - Hierarchical Extreme Learning Machine for Unsupervised Representation Learning
*************************
#] 15Feb2015 Schmidhuber interview comment that I posted
"... Most current commercial interest is in plain pattern recognition, while this theory is about the next step, namely, making patterns (related to one of my previous answers). Which experiments should a robot’s RL controller, C, conduct to generate data that quickly improves its adaptive, predictive world model, M, which in turn can help to plan ahead? ..." Does this response lead into the questions of at least a simple form of machine consciousness, and beyond that self definition? For example, if the system goes beyond simply presenting interesting patterns (or conclusions) as part of a specifically assigned or designed task, can it lead into the next step of [prioritizing "hits", notifying "pertinent" people, recommending actions, helping to form teams for discussion]? In an ill-defined, open system like social media, this might involve having the system generalize its initial "marching orders" and perhaps defining new roles and targets? False positives could be a big problem, but could Deep Learning itself help to reduce "false positive advice" at a more abstract level? Measuring what is there is one thing; publishing a note and seeking reactions and actions is a process where some primitive level of consciousness may be necessary? I'm thinking of John Taylor's model of concept - sort of going beyond consciousness to a point where a system begins to understand the implications of its actions and the effect on the external environment.
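The [Bayes, Confabulation] contrast in the questions for Dieterich above can be made concrete with a toy numeric sketch. This is only one reading of Hecht-Nielsen's idea, with invented probabilities; it is not his actual confabulation architecture. Bayes weighs a prior on the conclusion, while a cogency-style score asks only which conclusion makes the input symbols most "truthful":

```python
# Toy numeric contrast between a Bayesian pick and a cogency-style
# "confabulation" pick. All probabilities are invented for illustration.

p_prior = {"rain": 0.1, "sprinkler": 0.9}            # p(conclusion)
p_cond = {                                           # p(symbol | conclusion)
    "rain":      {"wet_grass": 0.90, "clouds": 0.80},
    "sprinkler": {"wet_grass": 0.95, "clouds": 0.20},
}

def likelihood(c, evidence):
    # product over input symbols of p(symbol | conclusion c)
    p = 1.0
    for s in evidence:
        p *= p_cond[c][s]
    return p

def bayes_pick(evidence):
    # maximize p(c) * prod p(s|c): the usual posterior argmax over conclusions
    return max(p_cond, key=lambda c: p_prior[c] * likelihood(c, evidence))

def confabulation_pick(evidence):
    # maximize only prod p(s|c): the conclusion that makes the inputs most
    # "truthful", ignoring the prior on the conclusion itself
    return max(p_cond, key=lambda c: likelihood(c, evidence))
```

With these invented numbers and evidence ["wet_grass", "clouds"], the two rules disagree: the strong prior on "sprinkler" wins for Bayes, while "rain" best explains both inputs and wins on cogency, which is roughly the sense in which confabulation "maximizes the truthfulness of inputs".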
*************************
#] 09Feb2015 NEUNET-D-14-00410R1 r Capeccia, Kasabova etal - NeuCube SNN, EEG Data, Opiate Treatment.odt
Confusion tables - chi-squared p-value see /home/bill/R statistical program/0_Notes - R statistical program.txt
************************
08Feb2015 Oops - "Dead" emails (including remove my emails) were still being included in BigData send list!!
*************************
#] 30Jan2015 Canada's anti-spam law - I think I'm OK
http://fightspam.gc.ca/eic/site/030.nsf/eng/home Is conference publicity classified as a "Commercial operation?" Implied consent http://www.crtc.gc.ca/eng/com500/infograph3.htm The key for me may be : ...Implied consent ...Recipient’s e-mail address was conspicuously published or sent to you (we obtained emails from conference lists and postings on the Internet) ...The address was disclosed without any restrictions and your message relates to the recipient’s functions or activities in a business or official capacity.
*************************
#] 13Jan2015 Filip Mischief-wit, Howell IJCNN Facebook reply :
Filip Mischief-wit - Your question is good, as the details may vary by conference. For IJCNN, if I remember correctly from my past experience, the Program Chairs & Technical Program Co-Chairs wait until the final (extended) deadline before going through submissions and dividing them up to assign to "Program Committee" PC members http://www.ijcnn.org/program-committee (terminology varies depending on the conference). The PC members then assign their papers to reviewers. That process can take a week or so (people also have day jobs!), and the reviewers have 3 or 4 weeks to do their jobs. Some conferences actually use a system for which reviewers "bid" on papers to review from all those submitted. I haven't looked for any analysis of the pros and cons of various approaches. Maybe someone who reads this can tell us!
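The chi-squared p-value on a confusion table (as in the 09Feb2015 NeuCube note above) does not need R or a stats package for the 2x2 case, since with one degree of freedom the chi-squared survival function reduces to erfc. A minimal sketch; the table counts are invented:

```python
import math

def chi2_2x2(table):
    # table = [[a, b], [c, d]]: a 2x2 confusion/contingency table.
    # Pearson chi-squared statistic (no continuity correction) and its p-value.
    (a, b), (c, d) = table
    n = a + b + c + d
    denom = (a + b) * (c + d) * (a + c) * (b + d)
    chi2 = n * (a * d - b * c) ** 2 / denom
    # For 1 degree of freedom, P(X >= chi2) = erfc(sqrt(chi2 / 2))
    p_value = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p_value

# e.g., agreement between two classifiers, invented counts
chi2, p = chi2_2x2([[30, 10], [10, 30]])
```

A small p-value here says the row and column labels are not independent; for larger tables or Yates-corrected statistics a proper library (e.g., R's chisq.test) is still the right tool.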
******************* #] 11Jan2015 Qingshan Liu - refuse review I am sorry, but I am flooded with current deadlines and new responsibilities coming up in mid-to-late January, plus I am still working for the next two weeks on another Neural Networks journal peer review. Considering paper reviews and organisation activities related to the IJCNN & INNS-BigData conferences, I probably won't be able to review more NN journal papers until mid-April. ************************* /home/bill/a_INNS Lexicom email server/NIPS/2007 papers.txt /home/bill/a_INNS Lexicom email server/NIPS/2008 papers.txt /home/bill/a_INNS Lexicom email server/NIPS/2009 papers.txt /home/bill/a_INNS Lexicom email server/NIPS/2010 papers.txt /home/bill/a_INNS Lexicom email server/NIPS/2011 papers.txt /home/bill/a_INNS Lexicom email server/NIPS/2012 papers.txt /home/bill/a_INNS Lexicom email server/NIPS/2013 papers.txt /home/bill/a_INNS Lexicom email server/NIPS/NIPS authors, historical.ods **************** #] 19Dec2014 Thomson-Reuters. Conference Proceedings Citation Index 17Dec2014 comment on http://thomsonreuters.com/conference-proceedings-citation-index/ I could not download the T-R "Conference Proceedings Citation Index", as it would always freeze my computer (Linux Mint Debian operating system, FireFox browser). I would like to know if the "International Joint Conference on Neural Networks" (IJCNN) next summer (2015) will be indexed. As per comment below, this is important to scientists : "... Marek Jaszuk : I'm curious, whether the conference proceedings will be submitted to the Thomson Reuters Web of Science Conference Proceedings Citation Index? This is of key importance for me, because this is the only citation index accepted in my country. ..." **************** #] 01Dec2014??? 
http://innsbigdata.org/paper-submission/
https://easychair.org/conferences/?conf=innsbigdata2015
Conference content will be submitted for inclusion in Procedia Computer Science, and indexed by Scopus.
12Dec2014 NEUNET-D-14-00485 p Chen, Chen - Global asymptotical omega-periodicity of a fractional-order non-autonomous NNs.pdf
12Dec2014 Facebook - IJCNN registrations :
The registration numbers will be posted to www.ijcnn.org soon, but to give you a pretty good idea, historical ballpark figures would put the cost at
Advance : Members (INNS or IEEE-CIS) 550$US, Non-members 700$, Students 175$
Later registrations : Members 650$US, Non-members 800$, Students 275$
The deadline for advance registrations is not yet set, but it is VERY important to register early, given that Killarney is a favorite vacation spot that will rapidly take up any available rooms, which are no longer "protected" for the conference after ?approximately mid-May?
**************
#] ?date? Google ngrams: (influenza / sunspots),(cholera / sunspots), etc...
Google ngrams: (influenza / sunspots),(cholera / sunspots),(malaria / sunspots),(bubonic plague / sunspots),(forest fires / sunspots)
Pnina Geraldine Abir-Am 20Apr2010 "Gender and Technoscience: A Historical Perspective" J. Technol. Manag. Innov. 2010, Volume 5, Issue 1
http://pgabiram.scientificlegacies.org/doc/Pnina.Abir-Am.Gender.Technoscience.JOTMI.2010.pdf
Abir-Am 20Apr2010 Gender and Technoscience - A Historical Perspective.pdf
*********************
#] 24Oct2014 IEEE Spectrum interview of Michael Jordan re: BigData etc
http://spectrum.ieee.org/robotics/artificial-intelligence/machinelearning-maestro-michael-jordan-on-the-delusions-of-big-data-and-other-huge-engineering-efforts/?utm_source=techalert&utm_medium=email&
My response :
"... it seems like every 20 years there is a new wave that involves them [neural nets] ..."
Strange - I had been thinking of the same thing a few months ago, after looking up pandemic cycles (flu, malaria, cholera, bubonic plague, but not smallpox) in relation to the ebola outbreaks (for which we'll have to wait 100 years and see), which is a "passive interest" after seeing a remarkable analysis in the early 2000's.
Perhaps NN "outbreaks" are more of a generational thing, somehow related to having to wait for "the old dogs to die" (not so funny any more, but ringing more true, now that I am an old dog).
Assuming many generations of revolutions before we get anywhere near a "more solid" understanding of the brain, even 200 years to human-like intelligence seems optimistic, barring some form of machine hyper-evolution pushing the humans along. Robert Hecht-Nielsen had suggested a similar number some decades ago, and at WCCI 2014 Don Wunsch suggested something like "... if you expect to see human-like intelligence in our lifetime ...", then you have to assume that human lifetimes will have to be dramatically extended.
*********************
#] 15Oct2014 INNS mass email list - Removing bad email addresses from all emails in a Thunderbird folder
- copy the text file containing the emails with bad addresses to "/home/bill/2015 IJCNN Publicity/mass emails/yymmdd Bad addresses.txt"; this serves as a permanent record
- make a copy of that file in "/home/bill/2015 IJCNN Publicity/mass emails/" and rename it as "email - list of recipients.in"
- move "/home/bill/2015 IJCNN Publicity/mass emails/email - list of recipients.in" to "/home/bill/Qnial/MY_NDFS" (copy over any existing file)
- modify "" by adding : f IS loaddefs '/home/bill/Qnial/MY_NDFS/email - remove addresses from a list of recipients.ndf'
- start up QNial & Loaddefs "Profile - Qnial
enter : lq (for loaddefs "quips)
***********
#] 30Sep2014 IJCNN - "Failte Ireland" Tourism Board
read through ibnsconnect.org
****************
ICICIP 22 - extras (after porting notes from Toshiba)
***********************
#] 04Jun2014 IEEE-SSCI 2014 paper reviews due 25Aug2014
Main : Adaptive programming based on Approximate Dynamic Programming
1. Evolvable systems
2. Control & Optimization via Approximate Dynamic Programming
3. Brain-computer interfaces
03Jun2014 Category Theory and control
Jan H. van Schuppen yymmdd "Control and Algebra - An Introduction" http://www3.nd.edu/~mtns/papers/16047_1.pdf
van Schuppen yymmdd Control and Algebra - An Introduction.pdf
David I. Spivak 130918 "Category theory for scientists (Old version)" http://arxiv.org/abs/1302.6946v3
Spivak 130918 Category theory for scientists (Old version).pdf
30May2014 ICICIP 22 z_ref Qin, Xu, Shi - Convergence analysis for second-order interval Cohen–Grossberg neural networks.pdf
******************************
#] 28Apr2014 B. Widrow et al., "The No-Prop Algorithm: A New Learning Algorithm for Multilayer Neural Networks," Neural Networks, vol. 37, 2013, pp. 182-188.
Erik Cambria, Guang-Bin Huang 13mmdd "Extreme Learning Machines" IEEE Intelligent Systems, pp30-59
Includes :
1.
Liyanaarachchi Lekamalage Chamara Kasun, Hongming Zhou, Guang-Bin Huang, Chi Man Vong "Representational Learning with ELMs for Big Data"
2. Jiarun Lin, Jianping Yin, Zhiping Cai, Qiang Liu, Kuan Li, Victor C.M. Leung "A Secure and Practical Mechanism for Outsourcing ELMs in Cloud Computing"
3. Liang Feng, Yew-Soon Ong, Meng-Hiot Lim "ELM-Guided Memetic Computation for Vehicle Routing"
4. Anton Akusok, Amaury Lendasse, Francesco Corona, Rui Nian, Yoan Miche "ELMVIS: A Nonlinear Visualization Technique Using Random Permutations and ELMs"
5. Paolo Gastaldo, Rodolfo Zunino, Erik Cambria, Sergio Decherchi "Combining ELMs with Random Projections"
6. Xuefeng Yang, Kezhi Mao "Reduced ELMs for Causal Relation Extraction from Unstructured Text"
7. Beom-Seok Oh, Jehyoung Jeon, Kar-Ann Toh, Andrew Beng Jin Teoh, Jaihie Kim "A System for Signature Verification Based on Horizontal and Vertical Components in Hand Gestures"
8. Hanchao Yu, Yiqiang Chen, Junfa Liu, Guang-Bin Huang "An Adaptive and Iterative Online Sequential ELM-Based Multi-Degree-of-Freedom Gesture Recognition System"
9. (didn't finish listing...)
Guang-Bin Huang 140403 "An Insight into Extreme Learning Machines: Random Neurons, Random Features and Kernels" Cognitive Computation DOI 10.1007/s12559-014-9255-2
******************
#] 28Apr2014 Schmidhuber's Overview of Deep Learning
"... Sometimes we also speak of the depth of an architecture: SL FNNs with fixed topology imply a problem-independent maximal problem depth, typically the number of non-input layers. Similar for certain SL RNNs (Jaeger, 2001; Maass et al., 2002; Jaeger, 2004; Schrauwen et al., 2007) with fixed weights for all connections except those to output units - their maximal problem depth is 1, because only the final links in the corresponding CAPs are modifiable. In general, however, RNNs may solve problems of potentially unlimited depth. ..."
>> Howell : Is this correct? Does credit assignment REQUIRE weight changes?
Schmidhuber specifies that "modifiable weights" are the key point. End of Section 3 :
"... It is possible to model and replace such unmodifiable environmental PCCs through a part of the NN that has already learned to predict (through some of its units) input events from former input events and actions (Sec. 6.1). Its weights are frozen, but can help to assign credit to other, still modifiable weights used to compute actions (Sec. 6.1). This approach may lead to very deep CAPs though. ..."
>> Howell : This provides an escape, leading to DNA-specified, evolved RNNs (MindCode)
**************************
www.elsevier.com/artworkinstructions
http://www.elsevier.com/author-schemas/artwork-and-media-instructions
http://cdn.elsevier.com/assets/pdf_file/0010/109963/Artwork.pdf
***************************
#] WCCI 2014 Beijing Registration
registration code dBTIvJ 625 $US
WCCI 2014 Tutorials http://www.ieee-wcci2014.org/accepted-tutorials.htm
NOTE: INNS Board of Governors meeting pre-empts tutorial attendance?
Selections :
08:00 T6A1 Simon M. Lucas and Clare Bates Congdon, Computational Intelligence and Games
10:30 T3A2 Nikola Kasabov and Nathan Scott, Spiking Neural Networks for Machine Learning and Predictive Data Modeling: Methods, Systems, Applications
14:00 T3P1 Jennie Si, Computational Intelligence for Decoding Brain's Motor Cortical Functions
16:20 T2P2 Francesco Carlo Morabito, Computational Intelligence Approaches to Identification and Early Diagnosis of Memory Diseases
****************************
#] 07Apr2014 http://www.ieee-wcci2014.org/Tutorials32/WCCITut-1P1-Raiko.pdf
Deep Learning Tutorial - Tapani Raiko, Aalto U Finland
I'm not registered but could do these papers...
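As a side note on the ELM special issue in the 28Apr2014 entry above: the core ELM recipe is a hidden layer whose weights are random and never trained, plus a linear readout fitted by least squares. A minimal stdlib-only sketch follows; the toy XOR data, seed, and function names are illustrative, not taken from any of the listed papers:

```python
import math
import random

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting (A is n x n)."""
    n = len(A)
    M = [row[:] + [rhs] for row, rhs in zip(A, b)]   # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def elm_fit(X, y, seed=1):
    """Extreme Learning Machine reduced to its core: a random, fixed tanh hidden
    layer; only the linear readout is fitted (here by an exact solve, using one
    hidden unit per training sample so the hidden matrix H is square)."""
    rng = random.Random(seed)
    d, h = len(X[0]), len(X)
    W = [[rng.uniform(-1, 1) for _ in range(d)] for _ in range(h)]  # never trained
    b = [rng.uniform(-1, 1) for _ in range(h)]                      # never trained
    def feat(x):
        return [math.tanh(sum(W[k][j] * x[j] for j in range(d)) + b[k])
                for k in range(h)]
    H = [feat(x) for x in X]
    beta = solve(H, y)          # readout weights: the only "learning" step
    return lambda x: sum(bk * fk for bk, fk in zip(beta, feat(x)))

# toy data: XOR with +/-1 regression targets (illustrative only)
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [-1.0, 1.0, 1.0, -1.0]
model = elm_fit(X, y)
```

The actual ELM literature uses many more hidden units than samples and a Moore-Penrose pseudo-inverse (minimum-norm least squares) for the readout; the square solve here is just the smallest runnable version of the same idea.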
*****
#] INNS-BOG
https://doodle.com/polls/notifications?pollId=zdxztzmgc3b6wqyr&participantId=1972507232&participantKey=8dxs4377
*****
#] 04Mar2014 Chinese Visa Application form number 0001611922
Please be advised that this web site can hold your saved but not submitted application form for 30 days only.
Entry 2014-07-04, Exit 2014-07-12
Do NOT submit application before 04Apr2014
Calgary.ChineseConsulate.org (NO www. !!)
http://www.visaforchina.org/ 09:00-12:00 only, 403-264-3322 - apply for Chinese visa
Ottawa embassy - 613-789-9608
Consulate General of the P. R. China in Calgary
1011-6th Ave., S.W., Suite 100, Calgary, AB T2P 0W1
Tel: (403) 264-3322 Ext 212 Fax: (403) 264-6656
Office hours: 9:00am - 12:00pm, Monday to Friday
***************
#] The 2014 International Conference on Neural Networks - Fuzzy Systems
Venice, Italy, March 15-17, 2014, http://tinyurl.com/nn-fs2014
General Chairs (Editors of Proceedings)
-------------------
Prof. Yingxu Wang, PhD, P.Eng, F.WIF, F.ICIC, SM.IEEE, SM.ACM
Schulich School of Engineering, University of Calgary, Calgary, AB, Canada T2N 1N4
http://scholar.google.ca/citations?user=gRVQjskAAAAJ&hl=en
Keynote Lecture 1: "Latest Advances in Neuroinformatics and Fuzzy Systems"
+---+
Yingxu Wang, PhD, Prof., PEng, FWIF, FICIC, SMIEEE, SMACM
President, International Institute of Cognitive Informatics and Cognitive Computing (ICIC)
Director, Laboratory for Cognitive Informatics and Cognitive Computing
Dept. of Electrical and Computer Engineering
Schulich School of Engineering, University of Calgary
2500 University Drive NW, Calgary, Alberta, Canada T2N 1N4
*********************
#] NEUNET-D-13-00401 p He etal - Anti-Windup for time-varying delayed CNNs subject to Input Saturation
http://web.mit.edu/braatzgroup/33_A_tutorial_on_linear_and_bilinear_matrix_inequalities.pdf
#] Theorem provers :
1. http://en.wikipedia.org/wiki/LCF_theorem_prover
2. Successors include HOL (Higher Order Logic) - http://en.wikipedia.org/wiki/HOL_(proof_assistant)
3. and Isabelle - http://en.wikipedia.org/wiki/Isabelle_(proof_assistant)
The Isabelle theorem prover is an interactive theorem prover, successor of the Higher Order Logic (HOL) theorem prover. It is an LCF-style theorem prover (written in Standard ML), so it is based on a small logical core guaranteeing logical correctness. http://isabelle.in.tum.de/
OCaml - seems like a QNial successor
LMI software :
1. Gahinet and Nemirovskii wrote a software package called LMI-Lab [59], which evolved into Matlab's LMI Control Toolbox [60].
2. Vandenberghe and Boyd produced the code SP [23], which is an implementation of Nesterov and Todd's primal-dual potential reduction method for semidefinite programming (this is an interior point algorithm). SP can be called from within Matlab [94].
3. Boyd and Wu extended the usefulness of the SP program by writing SDPSOL [25,26], which is a parser/solver that calls SP. The advantages of SDPSOL are that the problem can be specified in a high-level language, and SDPSOL can run without Matlab. SDPSOL can, in addition to linear objective functions, handle trace and convex determinant objective functions.
4. LMITOOL is another software package for solving LMI problems that uses the SP solver for its computations [50]. LMITOOL interfaces with Matlab, and there is an associated graphical user interface known as TKLMITOOL [49].
The Induced-Norm Control Toolbox [17] is a Matlab toolbox for robust and optimal control based on LMITOOL.
5. The solvers that call SP are the easiest to use and can handle bigger problems than the other software. As of the publication of this tutorial, none of the above LMI solvers exploit matrix sparsity to a high degree.
********************************************
#] INNS mass email list server submissions :
ijcnn@ijcnn.cust.lexi.net
IJCNN (paper deadline 15Jan)_09Sep*__06Oct*__03Nov*__17Nov*_01Dec*_05Jan*_12Jan*_____________27JanNIPS+___02Feb*
The IJCNN schedule takes priority, and emphasizes emails in November and December, close to the paper submission deadline. The BigData schedule gives three quick mass emails up front, two weeks apart, but stretches out in the middle, trying to avoid too much overlap near the IJCNN paper submission deadline.
A request for [new items, emphasis] from the Chairs should be sent out a week or so before each mass emailing!
IJCNN 150127 results
Bad address 815 -> reduced to 525 when "cleaned"
Moved 11
Out of office 31
Remove my email 11
150202-2 IJCNN
********************************************
IJCNN2015 Killarney Ireland, 12-17Jul2015 www.ijcnn.org
Publicity Co-Chairs :
Bill Howell - Mass emails, Advertisements in journals/mags, Calendars in journals/mags/societySites, Social Media (Facebook, LinkedIn)
Giacomo Boracchi - ENNS and Europe, newsgroups, Special Session theme-seeking
Yun Raymond Fu - IEEE-CIS newsletter
26Jan2015 5658 emails -> 5621 on email server
30Jan2015 1821 emails -> 1820 on email server
150130 BigData Publicity Reminders - it may help to keep repeating some important principles :
1. Credits on publicity messages :
- INNS AND IEEE-CIS should always be mentioned as co-sponsors
- Other Sponsors - as we get academic, industry, and government sponsors, where possible these should be mentioned (logos, name).
There may be two or three classes of sponsors, depending on their levels of contribution (not always the case)
- Plenary speakers - are usually given high profile with at least some of the publicity (especially the flyers)
2. Review of publicity material :
- normally material should at least go to De-Shuang Huang and Yoonsuck Choe for [additions, deletions, corrections] before sending it out.
- add Chairs responsible for a theme (eg Special Sessions, Tutorials, Workshops, Competitions)
Alberta time UTC-7+1(DST)
********************************************
#] BigData (paper deadline 22Mar, ext 19Apr) 15Oct*_27Oct*_24Nov*_14Dec*_13Jan*_19JanNIPS*_21JanNIPS+*_30JanJMLR+*_12Feb*_23Feb*_13Mar*_26Mar*_?date?-general announcement
* - denotes mass emailings that have already been sent.
The maximum number of mass emailings is eight per conference, not including special emails to new lists [NIPS, ICML, JMLR]. Note that this schedule is very fluid and flexible!! It can easily be changed.
ijcnn@ijcnn.cust.lexi.net QKC2.[4B7!X
*************************************************
INNS BigData2015 San Francisco, 09-11Aug2015 http://innsbigdata.org
Publicity Co-Chairs :
Bill Howell - advisory and assistance
Simone Scardapane - Social Media, Advertisements in journals/mags
Teng Teck Hou - Newsgroups, mailing lists, GoogleGroups, monthly updates from IJCNN OrgCom
José Antonio Iglesias - mass emails, calendars of conferences in journals & mags, postings on society websites
Publicity Reminders - it may help to keep repeating some important principles :
1. Credits on publicity messages :
- INNS should always be mentioned as a Sponsor
- Other Sponsors - as we get academic, industry, and government sponsors, where possible these should be mentioned (logos, name). There may be two or three classes of sponsors, depending on their levels of contribution (not always the case)
- Plenary speakers - are usually given high profile with at least some of the publicity (especially the flyers)
2.
Review of publicity material :
- normally material should at least go to De-Shuang Huang and Yoonsuck Choe for [additions, deletions, corrections] before sending it out.
- add Chairs responsible for a theme (eg Special Sessions, Tutorials, Workshops, Competitions)
3. Important common content :
- relevant deadlines (eg Paper submission, Special Sessions, Tutorials, Workshops, Competitions, Camera Ready copies)
- an "unsubscribe" instruction for mass emails and social media emailings, so that people can let us know they no longer want to receive these
4. Keep the message simple and clean
- More is sometimes less, and less is sometimes better. It is pointless to put everything in emails and postings, as details are available on the website, and excess clutter can simply discourage readers from reading anything in the message. Remember that while you may tend to read in great detail on a theme that you are passionate about, others are often drowning in information and won't get any more than a half-line or two into your message. So you need to catch their attention quickly, and get your basic message across before they quit your message.
Mr. Bill Howell
1-587-707-2027
Bill@BillHowell.ca www.BillHowell.ca
P.O. Box 299, Hussar, Alberta, T0J1S0
Society & Conference organisation responsibilities :
INNS BOG member 2014-2016, past Secretary 2014 www.inns.org
IJCNN 2005 Montreal Technical Co-Chair
IJCNN 2007 Orlando Publicity Co-Chair
IJCNN 2009 Atlanta Publicity Co-Chair
IJCNN 2013 Dallas Publicity Co-Chair
IJCNN 2015 Killarney IR Publicity Co-Chair, Program Committee member
INNS-BigData 2015 San Francisco Publicity Co-Chair
IJCNN 2017 Anchorage Publications Chair, Publicity mass emails
WCCI 2018 Rio de Janeiro Publicity Co-Chair
Review committees (assign&review ?????)
: ICICIP 2015 Wuhan China, INISTA 2015 Madrid, INNS-BigData 2015 San Francisco, ISNN 2016 St Petersburg, TENCON 2016 Singapore, SSCI 2014 Orlando, WIRN 2015 Rome (sub-reviewer)
Reviewer (no assignment of reviews to others) : Neural Networks journal, IJCNN : 2016 Vancouver, 2015 Killarney Ireland
Last career position : Retired: Science Research Manager (SE-REM-01) of Natural Resources Canada, CanmetMINING, Ottawa
Old Activities :
Review committee (assign&review) : ICICIP2015 Wuhan China
Reviewer : IJCNN2015 Killarney, WCCI2014 Beijing, INISTA2015 Madrid, WIRN2015 Rome, SSCI2014 Orlando
INNS BOG member 2014-2016, past Secretary 2014 www.inns.org
INNS 2016 & 2017 Awards committee work (not a member of the committee)
Retired: Science Research Manager (SE-REM-01) of Natural Resources Canada, CanmetMINING, Ottawa
Review committee (assign&review) : ISNN2016 St Petersburg, ICICIP2015 Wuhan China
Reviewer : Neural Networks journal, IJCNN2016 Vancouver, INNS-BigData2015 San Francisco, INISTA2015 Madrid, WIRN2015 Rome (sub-reviewer), SSCI2014 Orlando, TENCON2016 Singapore
<<<<<<<<<<<<<<<<<<<<<<<<<<<<
#] Easy Chair conf reviews BillHowell bill@billhowell.ca cXxJYMTVwp
https://www.easychair.org/conferences/
IEEE-CIS member# 41565049 Region: 7
Intl Neural Network Society, member 174
Facebook XGaiHV4bPVPJ
LinkedIn \G&.8"$H%jcu
Google+&groups bill@billowell.ca ^gN:BNy[7x4v
NatPost hi|}3>wF
NakedNews MindCode VCkAXkQu
NPA CP!m_h4=E4*|
APEGA member# 26251
http://www.ieee.org/conferences_events/conferences/organizers/conf_app.html?appName=Publication
Livestream 8fs5ftPBvMgt
eGroupWare "li(((A_TcOb
WordPress - 16Feb2015 I haven't yet created a website (an account)
198.161.91.130 IJCNN2017 logon
04Mar2014 USE NEPOMUK (based on STRIGI) for full-indexed file search - from Dolphin!!
Google Groups
Alberta time [UTC-7+1(DST)]
**********************************
enddoc