/home/bill/web/Bill Howells book [note, review]s/0_Howell [note, review]s of other peoples work.HtmWeb.html:31: find "$d_web"'Bill Howells book [note, review]s/' -type f -name "*.link.txt" | sort | tr \\n \\0 | xargs -0 -IFILE grep '"$' "FILE" | sed 's|.*"\$\(.*\)|"$\1|' | tr "'" '"' | sed 's|"\$\(.*\)""\(.*\)/\(.*\)"|\
  • \3</a>|' > "$pLnkRaw"
/home/bill/web/Bill Howells videos/170930 Past and Future Worlds - a STEM for kids/Past & future worlds.HtmWeb.html:146:The QNial programming language was used to [direct, sequence, conduct] the video production, together with a LibreOffice Calc spreadsheet that acts as a great front-end for preparing code specific to the video sequencing. Both can be found in the Programming code directory listing, and will be handy for anyone interested in the details of how I produced the video. I like to describe the QNial programming language of Queen's University, Kingston, Ontario, Canada as "... the beautiful child of a marriage between LISP and APL ...". It is not commonly used today, and it is an interpreted language, but whenever I get frustrated with the other languages I also use, its conceptual power always brings me back home to it. Bug hunting can be problematic if you don't build in bug traps and [structured, object-oriented] capabilities, but for much of what I do I keep those chains to a minimum so I can use the full power of the language.
/home/bill/web/Bill Howells videos/170930 Past and Future Worlds - a STEM for kids/Past & future worlds.HtmWeb.html:47:At present, the full video (540 Mbytes) plays too slowly (dragging, deep voices, slow video), and it is too cumbersome to go from one time to another. So until I convert to a different video [codec, container] format (perhaps the H.264 codec & .MKV container?) or find a video viewer better suited to large files, the videos for each scene are posted instead (see the listing below), giving better throughput and ease of going from one scene to another by loading them separately. Microsoft Windows (and hopefully Macintosh?) users can view these by downloading the VLC media player. "... VLC is a free and open source cross-platform multimedia player and framework that plays most multimedia files, and various streaming protocols. ..." At present, the full video cannot be moved forward and back within the video, something I will fix when I get the time, as the ability to go back over material and skip sections is particularly important with this video. In the meantime, the separate "Scenes" listed below can be navigated forward and back.
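The codec/container switch speculated on above could be done with ffmpeg along these lines. This is a sketch, assuming ffmpeg built with libx264; the filenames, -crf quality, and audio bitrate are placeholders of mine, not the author's settings.

```shell
# Re-encode the large .ogv master to H.264 video + AAC audio in an MKV
# container, which seeks well in most players (VLC included).
# -crf 23 and -preset medium are common ffmpeg defaults, not values from the text.
ffmpeg -i "Past and future worlds.ogv" \
       -c:v libx264 -crf 23 -preset medium \
       -c:a aac -b:a 160k \
       "Past and future worlds.mkv"
```

A lower -crf raises quality (and file size); scene-by-scene files could be batch-converted the same way in a for loop.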
  • /home/bill/web/Bill Howells videos/220331 Hydrogen future Alberta/Howell - hydrogen future Alberta.HtmWeb.html:71: Summary - my commentary as part of Perry Kincaid's webinar, 31Mar2022.
  • /home/bill/web/Bill Howells videos/220331 Hydrogen future Alberta/Howell - hydrogen future Alberta.HtmWeb.html:72: Key files - to [view, hear] my commentary.
  • /home/bill/web/Bill Howells videos/220331 Hydrogen future Alberta/Howell - hydrogen future Alberta.HtmWeb.html:73: References - unfortunately, the list is very incomplete, but it does provide some links.
    /home/bill/web/Bill Howells videos/220331 Hydrogen future Alberta/Howell - hydrogen future Alberta.HtmWeb.html:82:Perry Kincaid, founder of KEI Networks, organised a PUBLIC webinar, "Alberta is high on hydrogen : Introducing hydrogen to Alberta's energy mix", with commentaries about how and why, Thursday, March 31st, 4:00pm MST.
  • /home/bill/web/Bill Howells videos/220331 Hydrogen future Alberta/Howell - hydrogen future Alberta.HtmWeb.html:98: Slide show - open source presentation file format .odp. Microsoft PowerPoint will probably complain, but should be able to load it.
  • /home/bill/web/Bill Howells videos/220331 Hydrogen future Alberta/Howell - hydrogen future Alberta.HtmWeb.html:99: Voice narrative - in mp3 audio file format.
  • /home/bill/web/Bill Howells videos/220331 Hydrogen future Alberta/Howell - hydrogen future Alberta.HtmWeb.html:100: Adobe pdf - file format.
  • /home/bill/web/Bill Howells videos/220331 Hydrogen future Alberta/Howell - hydrogen future Alberta.HtmWeb.html:101: Voice script - text file with the script for the voice commentary. Also included are notes for some of the slides that were not commented on (marked by "(X)").
    /home/bill/web/Bill Howells videos/220331 Hydrogen future Alberta/Howell - hydrogen future Alberta.HtmWeb.html:231:Click to view most files related to this presentation.
    /home/bill/web/Bill Howells videos/250927 Ocean Ru tale/A tale of Ocean Ru script.HtmWeb.html:165: webPage description: Milky Way current sheet progression indicators; plus: Birkeland currents, NOT dark matter? right-click image for mpeg video in separate [tab, window]
    /home/bill/web/Bill Howells videos/250927 Ocean Ru tale/A tale of Ocean Ru script.HtmWeb.html:214:The best description that I've seen of this hypothesis is an Expanding Earth video by the famous Batman comic-book artist Neal Adams, which is part of the video link. Use the VLC media player for the .ogv video format, if necessary. A more complete [Science, Technology, Engineering, Math] (STEM) series of video scenes for school-children gives a larger perspective on this hypothesis.
In the images above, the diameter of the Earth is not to scale, but it supposedly doubles (8 times the volume) since 200 MyBP. Seafloor ages are critical, but you must watch the Neal Adams video to see that.
    /home/bill/web/Bill Howells videos/250927 Ocean Ru tale/A tale of Ocean Ru script.HtmWeb.html:217:Geologist James Maxlow wrote a great many detailed geological papers to establish scientific validity on the subject, as per his pdf "Beyond Plate Tectonics". Other scientists have also written about it, and some professional geologists claim success in applying it.
    /home/bill/web/Bill Howells videos/250927 Ocean Ru tale/A tale of Ocean Ru script.HtmWeb.html:641: Peratt: polar aurora-to-filaments similar to Stonehenge. Other petroglyphs and their plasma equivalents.
    /home/bill/web/Bill Howells videos/Birkeland rotation in galaxy - not dark matter/Birkeland rotation in galaxy - not dark matter.HtmWeb.html:43:Ben Davidson of Suspicious Observers posted 3 brilliant videos on nearby stellar flaring, as further support for a potential "micro-flare" or other solar disruption to explain the 12,000-year [mythological observations, paleontology, geology, planetary] quasi-periodicity of disruptive events on Earth, which by appearances may be "imminent". I like Ben's <=50 to >=200 year uncertainty - and even though that is still a bit of a guess, he is meticulous in pointing out the uncertainties.
  • /home/bill/web/Bill Howells videos/Birkeland rotation in galaxy - not dark matter/Birkeland rotation in galaxy - not dark matter.HtmWeb.html:45: 24Dec2019 DISASTER CYCLE | Signs in the Sky Now
  • /home/bill/web/Bill Howells videos/Birkeland rotation in galaxy - not dark matter/Birkeland rotation in galaxy - not dark matter.HtmWeb.html:46: 26Dec2019 Galactic Sheet Impact | Timing the Arrival
  • /home/bill/web/Bill Howells videos/Birkeland rotation in galaxy - not dark matter/Birkeland rotation in galaxy - not dark matter.HtmWeb.html:47: 27Dec2019 Nearby Superflares | What Do They Mean
    /home/bill/web/Bill Howells videos/Birkeland rotation in galaxy - not dark matter/Birkeland rotation in galaxy - not dark matter.HtmWeb.html:56:If we take an "Electric Universe" perspective, in particular Wal Thornhill's Birkeland current concepts for large-scale astronomy, and Don Scott's very interesting "solar transistor" model together with his 2015 Birkeland current model (also his 2018 elaboration), then perhaps shifts in the galactic currents could be expected to "reincarnate-light up" or "dim-extinguish" stars to various degrees as the currents shift and move. Many stars (I can't remember all of the categories - perhaps brown dwarfs, giant planets close to being stars, etc) are not visible by light emission, but perhaps they are easily re-activated when currents change. Perhaps in extreme cases this might lead to "swapping the central star role" between a large planet and its star in the local z-pinch? In other words, the motions of the "lit-up regions" may relate more to drifts of galactic currents than to the motions of the stars themselves? In that manner, the "galactic spirals" could move independently of the stars.

    /home/bill/web/Bill Howells videos/Birkeland rotation in galaxy - not dark matter/Birkeland rotation in galaxy - not dark matter.HtmWeb.html:64:Note that Donald Scott's own analysis of "stellar velocity profiles" provides yet another explanation of what is observed. So my speculations here are just one of many that have been proposed.

    /home/bill/web/Bill Howells videos/Howell - videos.HtmWeb.html:36:ALL videos are provided in the ogv file format, which is of higher quality and is easier and more natural for me in a Linux environment. Microsoft Windows (and hopefully Macintosh?) users can view them by downloading the VLC media player. "... VLC is a free and open source cross-platform multimedia player and framework that plays most multimedia files, and various streaming protocols. ..."
  • /home/bill/web/Bill Howells videos/Howell - videos.HtmWeb.html:43: Ben Davidson of Suspicious Observers posted 3 brilliant videos on nearby stellar flaring, as further support for a potential "micro-flare" or other solar disruption to explain the 12,000-year [mythological observations, paleontology, geology, planetary] quasi-periodicity of disruptive events on Earth, which by appearances may be "imminent". But can stellar [apparent birth, brightening, dimming, apparent death] also provide further potential evidence? Naturally we view stars' life-paths as "unidirectional", but is this the full picture, or can all these processes recur, as is the case for their [sunspots, micro-to-super novas, etc]? What has long fascinated me is the statement that the spirals of the galaxies move more rapidly than the stars in the galaxy, and how that might relate to the [Newtonian, General Relativity] problem at large scales.
  • /home/bill/web/Bill Howells videos/Howell - videos.HtmWeb.html:58: Toolsets can be browsed via the Past and Future Worlds directory. Perhaps these may be of [interest, help] to others putting together a film from Linux-based free software.
  • /home/bill/web/Bill Howells videos/Howell - videos.HtmWeb.html:67: Toolsets can be browsed via the Big Data, Deep Learning, and Safety directory. Perhaps these may be of [interest, help] to others putting together a film from Linux-based free software.
  • /home/bill/web/Bill Howells videos/Howell - videos.HtmWeb.html:78: Toolsets can be browsed via the Icebreaker unchained directory. Perhaps these may be of [interest, help] to others putting together a film from Linux-based free software.
  • /home/bill/web/bin/blog/blog comment analysis.HtmWeb.html:110: The video text is SUPPOSED to be a literal transcription of the video commentary. The vast bulk of the transcription is done by Google's "AI machines", as is sometimes the case for youTube videos. However, the machine output has some [error, omission]s, and it lacks even minimal formatting.
    /home/bill/web/bin/blog/blog comment analysis.HtmWeb.html:274:Stephen Grossberg's book "Conscious Mind, Resonant Brain" is the best reference that I know of that really brings out a solid basis for [cooperative-competitive, top down - bottom up, resonant resolving of a likely reality from very messy basic perceptions, Adaptive Resonance Theory (ART), laminar computing, consciousness, etc]. This is a whole new world for me.
    /home/bill/web/bin/blog/blog comment analysis.HtmWeb.html:305:Amateurs permit one to most easily avoid the turbid thinking of essentially all scientists, and especially of the most [successful, awarded, famous] scientists.
  • Sites that profile scientific fraud: the new one that had the Ben Davidon book, the Journal of Irreproducible Results, etc.
  • Jacobovici's "Exodus proofs" video provides the only detailed blog analysis of scale that I have done (as of 18Oct2024). While I found the video to be [excellent, stunning, rich with details new to me], in the end it has been the blog analysis of 22,000+ comments by youTube viewers that provides a [deep, rich, fascinating] opportunity to look into some aspects of [what, how, why] people [think, believe]. That in turn affords me a strong basis for questioning my own [think, believe]s. Not apparent from this analysis are the even deeper subjects listed in the section ?? below:
  • Howell - TradingView PineScript [description, problem, debug].html
  • Howell - TradingView PineScript of priceTimeFractals.HtmWeb.html
  • 0_PineScript notes.txt - details of software [code, bug, blogSolutions]
  • 0_PineScript errors.txt - [error, solution]s that keep coming back
  • Howell - References related to Puetz [H]UWS.html
  • Kivanc Ozbilgics Turtle Trade PineScript - documention.txt
  • Kivanc Ozbilgics Turtle Trade PineScript, plus 8-year detrended SP500.txt
  • RicardoSantos, Function Polynomial Regression.txt
  • sickojacko maximum [,relative] drawdown calculating functions.txt
  • TradingView auto fib extension.txt

Download symbol data (like [TVC:[USOIL, GOLD], NASDAQ:NDX]) from [TradingView, yahoo finance, etc]. My own data for SPX is in my LibreCalc spreadsheet SP500 1872-2020 TradingView, 1928-2020 yahoo finance.ods. Actually, it's in several different spreadsheets, hence the possibility of glitches as [update, change]s are made...
Users can simply follow standard TradingView guide instructions to install the Pine Script program that superimposes fractal [time, price] grids on their charts. I don't recommend that you do this UNLESS you [are, want to be] familiar with PineScript programming. The reason I say that is that, for every market symbol being tracked, you must provide a formula for the semi-log price [trend, relative standard deviation]. Preferably get as long a series as you can, eg download from TradingView. If you don't have 20+ years of data (eg the young crypto market), it may be a waste of your time. Here are the statements that you need to adapt to your symbol's data:
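The actual Pine Script statements are not reproduced in this extract. As an illustration of the semi-log trend fit that those statements encode, here is a minimal Python sketch; the data is synthetic and the function name `semilog_detrend` is my own, not from the original scripts:

```python
# Sketch: fit a semi-log (exponential) trend to a price series, then detrend it
# as the ratio of price to trend - the same quantity plotted in the
# "detrended SPX" charts. Synthetic data only; real data would come from a
# TradingView or yahoo finance download.
import numpy as np

def semilog_detrend(years, prices):
    """Least-squares fit of log10(price) = a + b*year.
    Returns (a, b, ratio) where ratio = price / trend."""
    years = np.asarray(years, dtype=float)
    prices = np.asarray(prices, dtype=float)
    b, a = np.polyfit(years, np.log10(prices), 1)   # slope, intercept
    trend = 10.0 ** (a + b * years)
    return a, b, prices / trend

# Clean exponential series: 2% per year growth in log10 terms.
years = np.arange(1990.0, 2020.0)
prices = 100.0 * 10.0 ** (0.02 * (years - 1990.0))
a, b, ratio = semilog_detrend(years, prices)
print(round(b, 4))                 # recovers the 0.02 slope
print(bool(np.allclose(ratio, 1.0)))  # clean data detrends to a flat ratio
```

For real SPX data the ratio oscillates around 1.0, and it is that oscillation that the fractal [time, price] grids are overlaid on.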
For details, see Howell - TradingView PineScript [description, problem, debug].html.
Perhaps more important are the lessons that can be learned from my own failures, and some of the techniques I've used to help debug my Pine Script code. General comments are provided on my webPage TradingView PineScript [description, problem, debug].
This section also appears in my webPage for users, and also applies to programmers. Users only have to set up the basic chart and symbols in TradingView based on my chart PuetzUWS [time, price] multiFractal mirrors, SPX 1872-2020. To do so you must be a TradingView subscriber. After that, copy over my PineScript coding, which you can find on my TradingView page - click on "SCRIPTS", and select my script "PuetzUWS [time, price] multifractal of detrended SPX 1871-2020". Further setup details are given below.
Another ah-hah! Bill Lucas relates these best to fundamental theoretical physics, even though he doesn't go far into fractals. But of course there are many other applications of fractal mathematics going far beyond the golden spiral and Fibonacci numbers, which I used in this example on my home page (see 1872-2020 SP500 index, ratio of opening price to semi-log detrended price), and webPage. (ongoing work maybe)

12 Sacred Geometry Symbols & What They Mean:
  • Vesica Piscis is related to Christianity as the "fish symbol".
NOTE: The model here is DEFINITELY NOT suitable for application to [trade, invest]ing! It's far too [early, incomplete, immature, erroneous]! See Steven Puetz's http://www.uct-news.com/ for the investment side of the UWS.
  • I typically use a LibreOffice Calc spreadsheet to [collect, rearrange, simple transform] data. For this project : 1_Fischer 1200-2020.ods
  • This is susceptible to serious bias in selecting the [start, end] dates for each segment. See the spreadsheet 1_Fischer 1200-2020.ods.
    The year ~1926 was taken as the [start, end] point for my 1872-2020 detrend StockMkt Indices 1871-2022 PuetzUWS2011 [start, end] point, so I use it here as well. (23Feb2023 - original text said 1940, perhaps it is still like that?)
  • This is easy with the spreadsheet - one column of regression results per segment. I use 10 year intervals per segment, but you only really need the [start, end] dates [-,+] 20 years. The extra 20 years extends the segments at both ends for visual clarity. For an example, see the spreadsheet 1_Fischer 1200-2020.ods, sheet "Fig 0.01 SLRsegments".
    Save the "SLRsegments" to a data file that can be used by GNUplot. Example : Fig 0.01 line segments for GNUplots.dat. Notice that column titles can use free-format text, except for the comma, which separates columns.
  • Save data of 1_Fischer 1200-2020.ods to a data file, example Fig 0.01 linear regression raw data.dat
  • For each curve, Fischer linear regressions.ndf (23Feb2023 no longer exists?) - a special operator (procedure) is created to select a segment's dataFile, handle [data, results] file [input, output], and call fit_linearRegress.ndf
  • text data file : Fig 0.01 Price of consumables in England 1201-1993.dat
  • gnuplot script : Fig 0.01 Price of consumables in England 1201-1993.plt
  • graph output : Fig 0.01 Price of consumables in England 1201-1993.png
  • Fig 0.01 Price of consumables in England 1201-1993 detrended.plt - This covers the medieval to modern era, and is used to collect curves for different data. The restricted time-frame provides a more accurate view of that period.
  • 1850 BC to 2020 AD prices detrended.plt - Obviously this covers a variety of [regions, time-frames]. What I really need is data going back 7,500 years (~3 cycles of the 2,400 year Hallstatt cycle), corresponding to a 2006 project on the rise and fall of civilisations _Civilisations and the sun, and if I find [time to do it, data] this would be nice.
  • https://www.digitizeit.xyz/
  • https://www.gimp.org
  • http://www.gnuplot.info/
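The segment-fitting steps above can be sketched in Python. This is a hypothetical stand-in for the spreadsheet plus fit_linearRegress.ndf workflow, not the author's code; the segment dates and price trend below are made up purely for illustration.

```python
import numpy as np

def fit_segments(years, log_prices, segments, extend=20):
    """For each [start, end] segment, fit a straight line (semi-log
    regression) to log price vs year, then extend the fitted line by
    `extend` years on both sides for visual clarity."""
    fitted = []
    for start, end in segments:
        mask = (years >= start) & (years <= end)
        slope, intercept = np.polyfit(years[mask], log_prices[mask], 1)
        x = np.arange(start - extend, end + extend + 1)
        fitted.append((x, slope * x + intercept))
    return fitted

# Illustrative data only: a noise-free log-linear price trend, 1201-1500
years = np.arange(1201, 1501)
log_prices = 0.002 * (years - 1201)
(x, y), = fit_segments(years, log_prices, [(1250, 1350)])
# x runs 1230..1370 : the 1250-1350 segment extended [-,+] 20 years
```

The resulting (x, y) columns could then be written to a comma-separated file for gnuplot, e.g. with np.savetxt(..., delimiter=","), matching the comma-as-column-separator convention noted above.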
    While most results are provided in sections above, links to data [spreadsheets, text files] and software [???, source code] are listed below along with brief comments. A full listing of files (including other SP500 web-pages) can be seen via
    this Directory's listing. Hopefully this will help those who want to do something different, as the programs etc may help with [learning, debugging].
  • TradingView data text file and spreadsheet - I had to upgrade my TradingView subscription to Pro+ to download the data for years prior to 1928, as I couldn't find another source. Note that the S&P500 index started in 1926, so I assume that proxy [data, index memberships] were used for prior years. I used the spreadsheet to [gather, view, process] data, and copied the resulting tables to text files for use by gnuplot (see "Software" below).
  • Yahoo finance data (23Feb2023 the text file has been lost, but the data is in the linked spreadsheet with TradingView data). I was happy to have another "somewhat independent" data source, even if they are both from the same S&P or other source. This really helps as a check on my data treatment (see the section above "Comparison of [TradingView, Yahoo finance] data").
  • TradingView Pine language - You are probably wondering why I didn't provide a Pine script, which would make this much more useful to the TradingView community. Laziness is the rule - especially as I am hoping that a Pine Scripter (maybe you?) might do this.
  • gnuplot - I've used the unofficial extension .plt to designate gnuplot scripts for each of the graphs. You can see these files in the market data subdirectories (eg 200913 for 13Sep2020, 220802 for 02Aug2022).
  • gimp (GNU image manipulation program) is what I used for the SP500 time-section transparencies. For more details, see the section above "Play with the [time, mind]-bending perspective yourself".
  • gnuplot.sh is the tiny bash script used to select gnuplot scripts. My other bash scripts can be found here.
  • QNial programming language - Queen's University Nested Interactive Array Language (Q'Nial) is my top preferred programming language for modestly complex to insane programming challenges, along with at least 3 other people in the world. Bash scripts make a great companion to QNial. semi-log formula.ndf is the tiny "program" used to set up the semi-log line fits. More generally : here are many of my QNial programs. Subdirectories provide programs for various projects etc.
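For intuition on what the semi-log detrending above does, here is a minimal Python sketch (not the author's QNial/gnuplot code; the data is made up): fit a straight line to log10(price) vs year over the full span, then take the ratio of actual price to the fitted exponential trend.

```python
import numpy as np

def semilog_detrend_ratio(years, prices):
    """Fit a straight line to log10(price) vs year over the whole span,
    then return price / 10**(fitted line) - the 'ratio of opening price
    to semi-log detrended price'."""
    slope, intercept = np.polyfit(years, np.log10(prices), 1)
    trend = 10.0 ** (slope * years + intercept)
    return prices / trend

# Illustrative data only: pure exponential growth detrends to a flat ratio of 1
years = np.arange(1872, 2021, dtype=float)
prices = 10.0 ** (0.01 * (years - 1872))
ratio = semilog_detrend_ratio(years, prices)
```

Real index data would of course produce a ratio that swings above and below 1, which is exactly the feature the graphs on this page exploit.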
  • Key [results, comments]
  • Play with the [time, mind]-bending perspective yourself
  • Ratio of actual to semi-log detrended data : [advantages, disadvantages]
  • Future potential work
  • Comparison of [TradingView, Yahoo finance] data
  • [data, software] cart [description, links]
    Wow! Even knowing that the [eyes, mind] often see patterns that aren't really there (as per random noise), one can basically re-create the essence of the 1872-1960 timeframe simply by copying ONLY 4 chunks of the 1950-2020 time-frame!! Of course, there is nothing new about this - TradingView members are often comparing current market squiggles to the past, over different timescales. I would actually be surprised if the above graph hasn't already been done hundreds of times before. [System_T, Amjad Farooq, TexasWestCapital, perhaps Harry Dent] and others are examples of recent pattern-matching for the SP500, with their comparisons of the 2020 crash to other time periods. But my overlays in the graph above did not involve [re-scaling, rotating] or other transformation of the [time segments, transparencies], so that is [noteworthy, simple, pure]. Scale is important, even if only to confirm the applicability of multi-scalar processes.
    While you probably don't have the gimp (GNU image manipulation program) installed on your computer (yet), it is available for free (I think on MS Windows as well, not just Linux?). With gimp, you will be able to work with my .xcf format file SP500 time-section transparencies. If you are new to gimp, be prepared to lose a lot of hair and gain a lot of wrinkles - it's not the easiest learning curve, but it is powerful (and cheap!).
  • 7,500 years of history - This is the same challenge that I had with a [lunatic, scattered, naive] model of history by my father and me, where it was necessary to cut ?150? years out of a 7,500 year time series to "kind of make it fit". Steven Yaskall recognized us as the "two fools who rushed in" in his book "Grand Phases on the Sun". We were justifiably proud of that.
  • Smooth sinusoidal curves and regular periodicities - It seems that mathematicians and scientists still [think, apply] models assuming ideal waveforms, even when [their tools, reality] do not. Stephen Puetz's "Universal Waves Series" (UWS) is the most [powerful, fantastic] meta-level model for [natural, human] cycles that I have ever seen, by far. It even has an awesome probabilistic-ranked list of expected timings of events at different timescales. However, perhaps more remains to be done on subtle shifts in real time series? I don't know - I'm just guessing.

    see Howell - SP500 PE Shiller ratios versus 10 year Treasury bond yields, with earnings growth & discount factors.ods
  • time-varying [SP500_growFuture, etc] - there is little chance of growth rates lasting more than a year or two, especially when > 20%. Frankly, they are constantly changing year-to-year in a big way. The time series approach mentioned below is a simple basis for anticipating this in a statistical manner as a start. Other approaches get more into predictions based on some concept or another.
  • SP500 index, variable [dividends, internal investment & stock buybacks, earnings] - I won't be looking at these any time soon ....
  • Elliot Wave Theory, notably Robert Prechter (including Socionomics). Among many, many fun topics, the arguments presented about how the Fed FOLLOWS interest rates, only giving the impression of leading, are especially relevant to this web-page.
  • Harry S. Dent Jr - demographics, with astounding successes in the past (at least twice on a decade-or-longer-out basis, perhaps a bit muffled over the last decade).
  • Stephen Puetz - Universal Wave Series - stunning results across a huge swath of subject areas!! Reminds me of the system of 20+ Mayan calendars.
  • Brian Frank of Frank funds - "Slaughterhouse-Five (Hundred), Passive Investing and its Effects on the U.S. Stock Market" - Index fund [distortion, eventual destabilization] of the markets. This was a recent fascinating read for me. (MarketWatch 10Apr2020)
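For intuition on why the [earnings growth, discount factor] assumptions dominate any fair-value PE, here is a textbook Gordon-growth sketch (my own illustration, not the model in the linked spreadsheet): small changes in the growth rate, when it sits near the discount rate, swing the fair PE wildly.

```python
def gordon_fair_pe(payout_ratio, discount_rate, growth_rate):
    """Textbook Gordon-growth fair P/E: P/E = payout / (r - g).
    Requires discount_rate > growth_rate; the closer g gets to r,
    the more fragile the answer becomes."""
    if discount_rate <= growth_rate:
        raise ValueError("discount rate must exceed growth rate")
    return payout_ratio / (discount_rate - growth_rate)

# Doubling assumed growth from 2% to 4% (r = 6%) doubles the fair PE
pe_low_g = gordon_fair_pe(0.5, 0.06, 0.02)   # 0.5 / 0.04 = 12.5
pe_high_g = gordon_fair_pe(0.5, 0.06, 0.04)  # 0.5 / 0.02 = 25.0
```

This is why sustained > 20% growth assumptions, as noted above, are not to be trusted for more than a year or two.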
  • multpl.com
  • Qiang Zhang 30Jan2021 Price Earning Ratio model - This is similar to, but better than, my own model below. His github has several other interesting investment-related postings, including Black-Scholes derivative pricing.

    "Mega-Life, Mega-Death, and the invisible hand of the Sun: Towards a quasi-predictive model for the rise and fall of civilisations"
    Click to see a full-sized image of the chart in your browser. (~3.5 feet squared on my kitchen wall. My printed-out version includes hand-annotated comparisons to the Mayan calendar and other references.)
    I will change this every six months or year, just to profile my different projects past and ongoing. See also past home page highlights, Howell's blog, my assorted blogs.

    04Jul202 Edo Kaal periodic table of the elements


    Icebreaker Unchained : we should have lost WWII

    I have not yet made a webPage for this project (so many years after it was shelved in Aug2015!), but [documentation, information, unfinished scripts] are provided in the Stalin supported Hitler (video production) directory and Icebreaker directory (which should be combined into one). Two very simple animations took sooooo loooong to produce. They total only ~1 minute for both "A year of stunning victories" map scan-zooms (of Poland, the false war, the lowlands, France and Dunkirk). Worse, the unfinished part 1 of 6 videos (~1 hour length) wasn't saved to a complete file, and the software for it needs massive updating. The videos are in ogv format (.ogg) - use the VLC media player to view (some other media programs also work).
    25May2021 Here are two example graphs of TSLA options that I have been working on. I am far from getting into options trading, I just want to learn more about the market. For more details (but no webPage yet), see QNial software coding for options data processing (also "winURL yahoo finance news download.ndf" in the same directory for yahoo finance news downloads), and several graphs of Tesla options.

    1872-2020 SP500 index, ratio of opening price to semi-log detrended price


    David Fischer - The Great pricing Waves 1200-1990 AD




    12Sep2020: 1872-2020 SP500 index, ratio of opening price to semi-log detrended price


    Vimeo Link on the Aureon website : https://aureon.ca/
    Aureon YouTube site : https://www.youtube.com/watch?v=uNk0a5je9G8
  • Kaal Structured Atom Model vs Quantum Mechanics
  • Edo Kaal & SAM team
    from the "The water planet" video, time ???
  • Thunderbolts.info
  • YouTube
  • "full video transcript: Kaal, The Proton-Electron Atom" For permissions, this is perhaps most relevant. It's the first time that I've used Google Transcript, so I am curious to see how this works out over time. It's definitely [easier, better] than the voice dictation systems [IBM Voicetype, Kurzweil Voice, Dragon Dictate] that I sold via my [small, short-lived] "nights-and-weekends" business in the early-to-mid-1990s. But after I've further evolved my bash scripts to better handle output, I will be better able to judge if it's an effective everyday working tool.
    Wallace Thornhill, David Talbot 2002 "Thunderbolts of the Gods" Mikamar Publishing, Portland OR 2007 edition 122pp
    I commented, and added some local Alberta, Canada examples.
    "The Earth in an Electric Solar System" Energy & Environment v20 n1
  • Oliver Manuel's
  • p43 of Kuroda's autobiography (see "My Early Days at the Imperial University of Tokyo")
    "... Four years before the discovery of fission by Hahn and Strassmann, Dr. Ida Noddack published an extremely important paper in Angewandte Chemie, volume 47, page 653, 1934, in which she pointed out the possibility that when the uranium atom is irradiated with slow neutrons, it may break up into more than one fairly large fragments ... If she had done so, she might have been able to discover the process of fission much earlier than Hahn and Strassmann [December 1938] and she could have won the Nobel Prize. ...".
  • WWII Japanese nuclear programs - He also commented about three Japanese WWII nuclear bomb programs. His PhD thesis supervisor worked on one of these, if I remember correctly :
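The bash-script handling of transcript output mentioned above can be sketched roughly as follows. The input format is assumed (alternating "m:ss" timestamp lines and caption lines, as raw YouTube transcript exports often look), and the sample text is made up; this is not the author's actual script:

```shell
# Hypothetical transcript cleanup: drop "m:ss" timestamp lines, then join
# the remaining caption lines into flowing text. Sample input is made up.
raw=$(mktemp)
printf '0:01\nthe proton electron atom\n0:04\nis a structured model\n' > "$raw"
cleaned=$(sed -E '/^[0-9]+:[0-9]{2}$/d' "$raw" | paste -sd' ' -)
echo "$cleaned"
rm -f "$raw"
```

A real script would also need to handle [punctuation, speaker labels, line-wrap] cases, which vary by transcript source.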

    Thunderbolts.info: the Electric Universe

    David Talbot, a mythologist inspired by Immanuel Velikovsky, ... He had even given up trying to tie his mythology concepts to physics (apart from help from ?Gedi Ben Low? on the possibility of planetary alignments in orbit) in ?1991?, throwing away much material that he had gathered over the decades, and after he wrote the book "The Saturn myth". A few years later, Australian amateur physicist Wal Thornhill came up with the idea that plasma physics could substantiate mythology. Given the opening of research labs by Ronald Reagan, he was able to sit down with Anthony Peratt, author of a textbook on plasma physics, who was inspired enough to spend about a decade touring petroglyphs around the world that resembled laboratory plasmas (also Part II Directionality and source).
    Thunderbolts.info was established in ?date?, and has been an ongoing community allowing [amateur, scientist]s to find an audience willing to listen to ideas that have long been heretical to the mainstream. [SAFIRE, Kaal's "Structured Atom Model"] are only two of the examples.
    (taken from eixdelmon.com, which in turn comes from Peratt's paper linked above)
    from : Mark Hulbert 15Jul2024

    WARNING: I am working to recover from "severe" Identity Theft dating back to at least 11Nov2025, and possibly even since a year before that, 30Dec2023, based on my computer system log files. My cellPhone is still corrupted (I won't be getting a new one, given how easily this occurs), my email is still on-and-off compromised, and my computer(s) have been wiped clean and the operating system re-installed a couple of times. Ongoing supplier fraud is occurring daily, and even non-supplier contacts have been compromised. More details will be available at Howell IDtheft, once I am able to upload the webPage and supporting materials.
  • extra-neuron [Turing, von Neumann]-like computations based on the local neural network [structure, connection]s. This was a focus of my previous MindCode and earlier work (e.g. Genetic specification of recurrent neural networks, a draft version of a WCCI2006 conference paper), but it isn't a currently active part of my work, as a priority for me is to search for a [Lamarckian, Mendelian] hereditary basis for neural networks, tied into cellular processes. This has long been a focus of Juyang Weng.
  • intra-neuron [Turing, von Neumann]-like computations based on the "focus" neuron's [DNA, RNA, methylation, sequence processing mechanisms]. This is a separate subject addressed by my MindCode 2023 concept.

    A mid-term objective is to tie caller-IDs to the work of Stephen Grossberg as described in my webPage Overview - Stephen Grossberg's 2021 "Conscious Mind, Resonant Brain". Gail Carpenter worked with his concepts from the Spiking Neural Network perspective. Theresa Ludemuir (???), Jose Principe (Reproducing Kernel Hilbert Spaces), and others have also done interesting work with SNNs, but not tied to Grossberg's framework.
    For now, I can't find my earlier musings (see very incomplete Fractal notes). As I remember, the plan was to build fractal dendrites (as the main inter-neuron synaptic information transfer) for callerID-SNNs. Axons as well, but perhaps more specialised for [power transmission, or something].
    10Nov2023 Maybe I can use a prime number basis for [time, synapse] fractals, as a contrast to Stephen Puetz's "Universal Wave Series" amazing "factor-of-three" series, combined with his half series. For example, with roughly a factor-of-three [1, 3, 7, 23, ...], or maybe factor-of-two or just all primes.
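One possible reading of a "roughly factor-of-three prime series" can be sketched as below: seed with 1, then take the prime nearest each successive power of 3. This is a hypothetical rule, not necessarily the one intended above — at the fourth step it picks 29 rather than 23, so the [1, 3, 7, 23, ...] example must follow a slightly different scheme:

```shell
# Prime ladder near powers of 3 (hypothetical rule, for illustration only).
ladder=$(awk '
function isp(n,  i) { if (n < 2) return 0
    for (i = 2; i * i <= n; i++) if (n % i == 0) return 0
    return 1 }
function near(t,  d, c) {           # closest prime to t; lower value on ties
    for (d = 0; ; d++) {
        c = t - d; if (c >= 2 && isp(c)) return c
        c = t + d; if (isp(c)) return c } }
BEGIN { printf "1"; x = 1
    for (k = 1; k < 5; k++) { x *= 3; printf " %d", near(x) }
    print "" }')
echo "$ladder"
```

A factor-of-two variant would just change `x *= 3` to `x *= 2`; "all primes" is of course simpler still.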
  • help identify program coding, as distinct from, or hybridized with, protein coding within [DNA, mRNA]. While this is mostly an issue for my MindCode project, callerID-SNNs fit nicely into, and may pragmatically help, that context.
  • Genetic

  • Genetic

  • Junk

    Only a very small number of theories of consciousness are listed on this webPage, compared to the vast number of [paper, book]s on the subject coming out all of the time. "Popular theories", as listed on Wikipedia, are shown, assuming that this will be important for non-experts. But the only ones that really count for this webSite are the "Priority model of consciousness".
    Readers will have completely different [interest, priority]s than I, so they would normally have a different "Priority model of consciousness", and rankings of the consciousness theories. To understand my selections and rankings, see Introduction to this webSite.

  • this webSite's Questions: Grossberg's c-ART, Transformer NNs, and consciousness?

    I like the description in Wikipedia (Wiki2023):
    The following additional definitions are also quoted from (Wiki2023) :
    Grossberg's concepts are NOT normally listed in [compilations, reviews] of consciousness, which is a [puzzle, failure] that I address separately.
    16Jul2023 I am currently lacking a coherent overall webPage for Grossberg's Consciousness. In the meantime, refer to the very detailed listing of consciousness and other themes as a starting point to peruse for Grossberg's ideas. That webPage is a compilation of themes extracted from files listing [chapter, section, figure, table, comment]s.
    The following listing is taken from What is consciousness: from historical to Grossberg, and repeats some of the points in this section above :

    conscious ART (cART), etc
  • A surprisingly small number of neural architectures can simulate [extensive, diverse] [neuro, psycho]logical data at BOTH the [sub, ]conscious levels, and for [perception, action] of [sight, auditory, touch, language, cognition, emotion, etc]. This is similar to what we see in physics.
  • [extensive, diverse] ex-bio applications have been successfully [developed, applied], based on Grossberg et al's computational models.
  • see simple grepStr search results : 'ART|cART|pART|ARTMAP|ARTSTREAM|ARTPHONE|ARTSCAN|dARTSCAN|pARTSCAN|ARTSCENE|ARTSTREAM|ARTWORD|cARTWORD|LAMINART|PARSE|SMART|START|nSTART'

    "... Consciousness, at its simplest, is sentience and awareness of internal and external existence.[1] However, its nature has led to millennia of analyses, explanations and debates by philosophers, theologians, linguists, and scientists. Opinions differ about what exactly needs to be studied or even considered consciousness. ..." (Wiki2023)
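The grepStr search above can be sketched as a recursive extended-regex grep (illustrative paths and a made-up test file, not the author's actual command). Note that plain alternation is redundant here — 'ART' already matches inside 'ARTMAP' and the rest — so word boundaries are needed if the intent is to match whole model names:

```shell
# Sketch of the grepStr search, run against a made-up test directory.
# \b word boundaries keep 'ART' from swallowing the longer model names.
dir=$(mktemp -d)
printf 'uses ARTMAP and SMART models\nnothing relevant here\n' > "$dir/page.html"
hits=$(grep -rEl '\b(ART|cART|ARTMAP|ARTSCAN|LAMINART|SMART|START|nSTART)\b' "$dir")
echo "$hits"
```

`-l` lists each matching file once, which is usually what a webPage-wide survey wants; drop it to see the matching lines themselves.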
    Byoung-Kyong Min 2010 "A Thalamic reticular networking model of consciousness" (Wiki2023)

    Wikipedia: Models of consciousness, retrieved Apr2023 (Wiki2023)
    "... The Neural correlates of consciousness (NCC) formalism is used as a major step towards explaining consciousness. The NCC are defined to constitute the minimal set of neuronal events and mechanisms sufficient for a specific conscious percept, and consequently sufficient for consciousness. In this formalism, consciousness is viewed as a state-dependent property of some undefined complex, adaptive, and highly interconnected biological system.[3][4][5] ..." (Wiki2023, full article: Wiki2023 - Neural_correlates_of_consciousness, also cited by Grossberg 2021)
    "... Another idea that has drawn attention for several decades is that consciousness is associated with high-frequency (gamma band) oscillations in brain activity. This idea arose from proposals in the 1980s, by Christof von der Malsburg and Wolf Singer, that gamma oscillations could solve the so-called binding problem, by linking information represented in different parts of the brain into a unified experience.[80] Rodolfo Llinás, for example, proposed that consciousness results from recurrent thalamo-cortical resonance where the specific thalamocortical systems (content) and the non-specific (centromedial thalamus) thalamocortical systems (context) interact in the gamma band frequency via synchronous oscillations.[81] ..." (Wiki2023 - Consciousness#Neural_correlates)
    Howell 19Jul2023 Note that Grossberg's ART predictions are supported by experiments by a number of researchers including Wolf Singer (see Quoted text from (Grossberg 2021)).
    "... Integrated Information Theory (IIT) offers an explanation for the nature and source of consciousness. Initially proposed by Giulio Tononi in 2004, it claims that consciousness is identical to a certain kind of information, the realization of which requires physical, not merely functional, integration, and which can be measured mathematically according to the phi metric. ..." (UTM - Integrated information theory)
    "... Integrated information theory (IIT) attempts to provide a framework capable of explaining why some physical systems (such as human brains) are conscious,[1] why they feel the particular way they do in particular states (e.g. why our visual field appears extended when we gaze out at the night sky),[2] and what it would take for other physical systems to be conscious (Are other animals conscious? Might the whole Universe be?).[3] ... In IIT, a system's consciousness (what it is like subjectively) is conjectured to be identical to its causal properties (what it is like objectively). Therefore it should be possible to account for the conscious experience of a physical system by unfolding its complete causal powers (see Central identity).[4] ... Specifically, IIT moves from phenomenology to mechanism by attempting to identify the essential properties of conscious experience (dubbed "axioms") and, from there, the essential properties of conscious physical systems (dubbed "postulates"). ..." (Wiki2023 - Integrated information theory)
    Wikipedia lists numerous criticisms of IIT, but I have not yet quoted from that, other than to mention the authors :
    Wikipedia: Models of consciousness
    "... Sociology of human consciousness uses the theories and methodology of sociology to explain human consciousness. The theory and its models emphasize the importance of language, collective representations, self-conceptions, and self-reflectivity. It argues that the shape and feel of human consciousness is heavily social. ..." (Wiki2023, full webPage Wiki2023)
    "... Daniel Dennett proposed a physicalist, information processing based multiple drafts model of consciousness described more fully in his 1991 book, Consciousness Explained. ..." (Wiki2023, full webPage Wiki2023)
    "... Functionalism is a view in the theory of the mind. It states that mental states (beliefs, desires, being in pain, etc.) are constituted solely by their functional role – that is, they have causal relations to other mental states, numerous sensory inputs, and behavioral outputs. ..." (Wiki2023, full webPage Wiki2023)

    "... Electromagnetic theories of consciousness propose that consciousness can be understood as an electromagnetic phenomenon that occurs when a brain produces an electromagnetic field with specific characteristics.[7][8] Some electromagnetic theories are also quantum mind theories of consciousness.[9] ..." (Wiki2023)

    "... "No serious researcher I know believes in an electromagnetic theory of consciousness,"[16] Bernard Baars wrote in an e-mail.[better source needed] Baars is a neurobiologist and co-editor of Consciousness and Cognition, another scientific journal in the field. "It's not really worth talking about scientifically,"[16] he was quoted as saying. ..." (Wiki2023)
    "... Stuart Hameroff separately worked in cancer research and anesthesia, which gave him an interest in brain processes. Hameroff read Penrose's book and suggested to him that microtubules within neurons were suitable candidate sites for quantum processing, and ultimately for consciousness.[30][31] Throughout the 1990s, the two collaborated on the Orch OR theory, which Penrose published in Shadows of the Mind (1994).[19] ..." (Wiki2023)
    rationalwiki.org presents a hard-nosed critique of various "quantum consciousness" theories, from which the following quote is taken :
    "... Large Language Models (LLMs) have been transformative. They are pre-trained foundational models that are self-supervised and can be adapted with fine-tuning to a wide range of natural language tasks, each of which previously would have required a separate network model. This is one step closer to the extraordinary versatility of human language. GPT-3 and more recently LaMDA can carry on dialogs with humans on many topics after minimal priming with a few examples. However, there has been a wide range of reactions and debate on whether these LLMs understand what they are saying or exhibit signs of intelligence. This high variance is exhibited in three interviews with LLMs reaching wildly different conclusions. A new possibility was uncovered that could explain this divergence. What appears to be intelligence in LLMs may in fact be a mirror that reflects the intelligence of the interviewer, a remarkable twist that could be considered a Reverse Turing Test. If so, then by studying interviews we may be learning more about the intelligence and beliefs of the interviewer than the intelligence of the LLMs. As LLMs become more capable they may transform the way we interact with machines and how they interact with each other. Increasingly, LLMs are being coupled with sensorimotor devices. LLMs can talk the talk, but can they walk the walk? A road map for achieving artificial general autonomy is outlined with seven major improvements inspired by brain systems and how LLMs could in turn be used to uncover new insights into brain function. ..." (Sejnowski 2022)
    Sejnowski's idea is very interesting, judging by the reactions of many [science, computer, philosophy, engineering, policy, public] commentators, for whom this is a very emotionally-laden subject that seems to drive [fear, suppression]. The case of Blake Lemoine is a good example. How far can LLMs go in assessing human intelligence, given their huge "codified databases"? Would they be able to go beyond our traditional measures of intelligence in both [depth, diversity]?
    A few common definitions of consciousness are provided on my webPage [definitions, models] of [consciousness, sentience]. However, for reasons given on that webPage, only Stephen Grossberg's concepts provide a workable basis that is tied to [].
    A few models of consciousness are summarized on my webPage A quick comparison of Consciousness Theories. Only a few concepts are listed, almost randomly selected except for [Grossberg, Taylor]'s, as there are a huge [number, diversity] of concepts.

    Stephen Grossberg may have the ONLY definition of consciousness that is directly tied to quantitative models for lower-level [neuron, general neurology, psychology] data. Foundational models, similar in nature to the small number of general theories in physics that describe a vast range of phenomena, were derived over a period of ?4-5? decades BEFORE they were found to apply to consciousness. That paralleled their use in very widespread applications in [science, engineering, etc]. As such, this is the only solidly-based EMERGENT theory of consciousness that I know of. Grossberg's book provides a wonderful description :
  • John Taylor's concepts - The only other concept of consciousness that I felt even somewhat comfortable with was the late John Taylor's. It seemed to me that it emerged from the "Approximate Dynamic Programming" theories of Paul Werbos, which were inspired by Sigmund Freud's theories (which I didn't actually like in general, but had to admit their widespread adoption at one time, and their inspirational use), with a tremendous base of [theoretical, practical] applications to system [identification ????]. While I do provide a very brief summary on a separate webPage, it is not my current focus.
  • references - Grossberg and
  • data from [neuroscience, psychology] : quick list, more details
  • success in real world advanced [science, engineering] applications (non-[bio, psycho]logical)

    Howell 30Dec2011, page 39 "Part VI - Far beyond current toolsets"
    see Grossberg 2021: the biological need for machine consciousness

    22Jun2022 We’re All Different and That’s Okay


    11Jun2022 What is LaMDA and What Does it Want?


    14Aug2022 What is sentience and why does it matter?

    More detail following from Sejnowski's thinking is on the webPage For whom the bell tolls. The following comment comes from that webPage.

    11Jun2022 Is LaMDA Sentient? — an Interview

  • Historical thinking about consciousness.
  • Historical thinking about quantum [neurophysiology, consciousness]
  • Pribram 1993 quantum fields and consciousness proceedings provides references back to 1960, and Jibu, Yasue comment that:
  • Howells questions about 1993 conference proceedings
  • WRONG!! It may help the reader to re-visit comments about the historical thinking about consciousness, which is not limited to quantum consciousness. This complements items below.
    Early era of [General Relativity, Quantum Mechanics]: I would be greatly surprised if there wasn't some thinking about quantum consciousness at least as far back as the "modern inception" of quantum mechanics by Max Planck in 1901. Schrödinger seems to have gone at least partially in that direction by 1944 (see Historical thinking about quantum [neurophysiology, consciousness]). But as with the ancient Greeks, I would be surprised if others in the quantum mechanics community weren't thinking of mind in addition to matter in the early 1900s. Even so, given the glaring lack of documentation, this would not be a solid assumption to make.
  • from the section Grossberg's c-ART, Transformer NNs, and consciousness?:
  • As per the second question from the section Grossberg's c-ART, Transformer NNs, and consciousness?:
    2. How difficult would it be to augment "Transformer Neural Networks" (TrNNs) with Grossberg's [concept, architecture]s, including the emergent systems for consciousness? Perhaps this would combine the scalability of the former with the [robust, extendable] foundations of the latter, which are supported by [broad, diverse, deep] data from [neuroscience, psychology], as well as success in real world advanced [science, engineering] applications?
  • As per the first question from the section Grossberg's c-ART, Transformer NNs, and consciousness?:
    conscious ART (cART), etc
  • A surprisingly small number of neural architectures can simulate [extensive, diverse] [neuro, psycho]logical data at BOTH the [sub, ]conscious levels, and for [perception, action] of [sight, auditory, touch, language, cognition, emotion, etc]. This is similar to what we see in physics.
  • [extensive, diverse] ex-bio applications have been successfully [developed, applied], based on Grossberg etal's computational models.
  • see simple grepStr search results : 'ART|cART|pART|ARTMAP|ARTSTREAM|ARTPHONE|ARTSCAN|dARTSCAN|pARTSCAN|ARTSCENE|ARTSTREAM|ARTWORD|cARTWORD|LAMINART|PARSE|SMART|START|nSTART'
    Grossberg's concepts are NOT normally listed in [compilations, reviews] of consciousness, which is a [puzzle, failure] that I address separately.
    (Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin 2017)
    Byoung-Kyong Min 2010 "A Thalamic reticular networking model of consciousness"
    "... The model suggests consciousness as a "mental state embodied through TRN-modulated synchronization of thalamocortical networks". In this model the thalamic reticular nucleus (TRN) is suggested as ideally suited for controlling the entire cerebral network, and responsible (via GABAergic networking) for synchronization of neural activity. ..." (Wiki2023)
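    A grepStr search like the one listed above can be reproduced with a plain `grep -rEl` over the webSite files. The sketch below is illustrative only: the temporary directory and sample file are assumptions standing in for the actual webPage tree, not the author's script.

    ```shell
    # Extended-regex alternation of Grossberg model names (from the grepStr above).
    pattern='ART|cART|pART|ARTMAP|ARTSTREAM|ARTPHONE|ARTSCAN|dARTSCAN|pARTSCAN|ARTSCENE|ARTWORD|cARTWORD|LAMINART|PARSE|SMART|START|nSTART'

    # Stand-in for the webSite directory: one sample file mentioning LAMINART.
    dir=$(mktemp -d)
    printf 'Grossberg LAMINART model\n' > "$dir/sample.html"

    # -r recurse, -E extended regex, -l list only the matching file names.
    grep -rEl "$pattern" "$dir"
    ```

    In practice the `"$dir"` argument would be the webSite directory (e.g. under `"$d_web"`), and the matching file list could be piped onward, as in the link-extraction pipeline used elsewhere on these pages.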
  • Grossberg 2021 p081c2h0.66 sub-section: How does one evolve a computational brain?
    The above discussion illustrates that no single step of theoretical derivation can derive a whole brain. One needs a method for deriving a brain in stages, or cycles, much as evolution has incrementally discovered ever more complex brains over many thousands of years. The following theoretical method has been successfully applied many times since I first used it in 1957. It embodies a kind of conceptual evolutionary process for deriving a brain.

    Because "brain evolution needs to achieve behavioral success", we need to start with data that embody indices of behavioral success. That is why, as illustrated in
    Figure 2.37 Modelling method and cycle, one starts with Behavioral Data from scores or hundreds of psychological experiments. These data are analyzed as the result of an individual adapting autonomously in real time to a changing world. This is the Art of Modeling. It requires that one be able to infer from static data curves the dynamical processes that control individual behaviors occurring in real time. One of the hardest things that I teach my students to do is "how to think in real time" to be able to carry out this speculative leap.

    Properly carried out, this analysis leads to the discovery of new Design Principles that are embodied by these behavioral processes. The Design Principles highlight the functional meaning of the data, and clarify how individual behaviors occurring in real time give rise to these static data curves.

    These principles are then converted into the simplest Mathematical Model using a method of minimal anatomies, which is a form of Occam's Razor, or principle of parsimony. Such a mathematical model embodies the psychological principles using the simplest possible differential equations. By "simplest" I mean that, if any part of the derived model is removed, then a significant fraction of the targeted data could no longer be explained. One then analyzes the model mathematically and simulates it on the computer, showing along the way how variations on the minimal anatomy can realize the design principles in different individuals or species.

    This analysis has always provided functional explanations and Behavioral Predictions for much larger behavioral data bases than those used to discover the Design Principles. The most remarkable fact is, however, that the behaviorally derived model always looks like part of a brain, thereby explaining a body of challenging Neural Data and making novel Brain Predictions.

    The derivation hereby links mind to brain via psychological organizational principles and their mechanistic realization as a mathematically defined neural network. This startling fact is what I first experienced as a college Freshman taking Introductory Psychology, and it changed my life forever.

    I conclude from having had this experience scores of times since 1957 that brains look the way they do because they embody a natural computational realization for controlling autonomous adaptation in real-time to a changing world. Moreover, the Behavior -> Principles -> Model -> Neural derivation predicts new functional roles for both known and unknown brain mechanisms by linking the brain data to how it helps to ensure behavioral success. As I noted above, the power of this method is illustrated by the fact that scores of these predictions about brain and behavior have been supported by experimental data 5-30 years after they were first published.

    Having made the link from behavior to brain, one can then "burn the candle from both ends" by pressing both top-down from Behavioral Data and bottom-up from Brain Data to clarify what the model can and cannot explain at its current stage of derivation. No model can explain everything. At each stage of development, the model can cope with certain environmental challenges but not others. An important part of the mathematical and computational analysis is to characterize the boundary between the known and unknown; that is, which challenges the model can cope with and which it cannot. The shape of this boundary between the known and unknown helps to direct the theorist's attention to new design principles that have been omitted from previous analysis.

    The next step is to show how these new design principles can be incorporated into the evolved model in a self-consistent way, without undermining its previous mechanisms, thereby leading to a progressively more realistic model, one that can explain and predict ever more behavioral and neural data. In this way, the model undergoes a type of evolutionary development, as it becomes able to cope behaviorally with environmental constraints of ever increasing subtlety and complexity. The Method of Minimal Anatomies may hereby be viewed as a way to functionally understand how increasingly demanding combinations of environmental pressures were incorporated into brains during the evolutionary process.

    If such an Embedding Principle cannot be carried out - that is, if the model cannot be unlumped or refined in a self-consistent way - then the previous model was, put simply, wrong, and one needs to figure out which parts must be discarded. Such a model is, as it were, an evolutionary dead end. Fortunately, this has not happened to me since I began my work in 1957 because the theoretical method is so conservative. No theoretical addition is made unless it is supported by multiple experiments that cannot be explained in its absence. Where multiple mechanistic instantiations of some Design Principles were possible, they were all developed in models to better understand their explanatory implications. Not all of these instantiations could survive the pressure of the evolutionary method, but some always could. As a happy result, all earlier models have been capable of incremental refinement and expansion.

    The cycle of model evolution has been carried out many times since 1957, leading today to increasing numbers of models that individually can explain and predict psychological, neurophysiological, anatomical, biophysical, and even biochemical data. In this specific sense, the classical mind-body problem is being incrementally solved.

    Howell: bold added for emphasis.
    (keys : Principles-Principia, behavior-mind-brain link, brain evolution, cycle of model evolution)
    see also quotes: Charles William Lucas "Universal Force" and others (not retyped yet).
  • image p484fig13.04 The top-down feedback from the orbitofrontal cortex closes a feedback loop that supports a cognitive-emotional resonance. If this resonance can be sustained long enough, it enables us to have feelings at the same time that we experience the categories that caused them.
    || Cognitive-Emotional resonance. Basis of "core consciousness" and "the feeling of what happens". (Damasio 1999) derives heuristic version of CogEM model from his clinical data. Drive-> amygdala-> prefrontal cortex-> sensory cortex, resonance around the latter 3. How is this resonance maintained long enough to become conscious?
  • image p514fig13.44 Analog of the CogEM model in Figure 6.1 of (Damasio 1999).
    || (a) map of object X-> map of proto-self at inaugural instant-> [, map of proto-self modified]-> assembly of second-order map. (b) map of object X enhanced-> second-order map imaged.
  • image p105fig03.23 The pointillist painting A Sunday on la Grande Jatte by Georges Seurat illustrates how we group together both large-scale coherence among the pixels of the painting, as well as forming small groupings around the individual dabs of color.
    ||
  • image p107fig03.25 The Roofs of Collioure by Matisse. See the text for details.
    || p107c1h0.6 "... [Matisse] showed how patches of pure color, when laid down properly on a canvas, could be grouped by the brain into emergent boundaries, without the intervention of visible outlines. ... The trick was that these emergent boundaries, being invisible, or amodal, did not darken the colors in the surface representations. In this sense, Matisse intuitively realized that "all boundaries are invisible" through the masterful way in which he arranged his colors on canvas to generate boundaries that could support compelling surface representations. ..."
  • image p108fig03.27 Matisse's painting Open Window, Collioure 1905 combines continuously colored surfaces with color patches that created surface representations using amodal boundaries, as in Figure 3.26. Both kinds of surfaces cooperate to form the final painterly percept.
    ||
  • image p110fig03.32 Claude Monet's painting of Poppies Near Argenteuil. See the text for details.
    || Claude Monet Poppies Near Argenteuil 1873. p110c2h0.35 "... the red poppies and the green field around them are painted to have almost the same luminescence; that is, they are almost equiluminant. As a result, the boundaries between the red and green regions are weak and positionally unstable, thereby facilitating an occasional impression of the poppies moving in a gentle breeze, especially as one's attention wanders over the scene. ...".
  • image p120fig03.43 Four paintings by Monet of the Rouen cathedral under different lighting conditions (top row) and their monochromatic versions (bottom row). See the text for details.
    || p119c2h0.25 "... Monet uses nearby colors that are nearly equiluminant, and sharp, high-contrast luminance defined edges are sparse. He hereby creates weaker boundary signals within and between the parts of many forms, and stronger boundary signals between the forms. This combination facilitates color spreading within the forms and better separation of brightness and color differences between forms. ... The grayscale versions of these paintings demonstrate the near equiluminance of the brushstrokes within forms, and places in which brightness and color differences significantly influence the groupings that differentiate between forms, including the differentiation between the cathedral and the sky. ..."
  • image p120fig03.44 The Rouen cathedral at sunset generates very different boundary webs than it does in full sunlight, as illustrated by Figure 3.45.
    || Rouen Cathedral at sunset (Monet 1892-1894).
    • Lighting almost equiluminant
    • Most boundaries are thus caused by color differences, not luminance differences
    • Fine architectural details are obscured, leading to...
    • Coarser and more uniform boundary webs, so...
    • Less depth in the painting.
  • image p121fig03.45 The Rouen cathedral in full sunlight.
    || Rouen Cathedral full sunlight (Monet 1892-1894).
    • Lighting is strongly non-uniform across most of the painting
    • Strong boundaries due to both luminance and color differences
    • Fine architectural details are much clearer, leading to...
    • Finer and more non-uniform boundary webs, so...
    • Much more detail and depth
  • image p121fig03.46 The Rouen cathedral in full sunlight contains T-Junctions that are not salient in the painting of it at sunset. These are among the painting's features that give it a much more depthful appearance.
    || Rouen Cathedral full sunlight (Monet 1892-1894).
    • There are also more T-junctions where vertical boundaries occlude horizontal boundaries, or conversely...
    • Leading to more depth.
    p119c2h1.0 "... Such T-junction boundary occlusions ... can generate percepts of depth in the absence of any other visual clues. ...".
  • image p171fig04.49 An example of DaVinci stereopsis in which the left eye sees more of the wall between A and C than the right eye does. The region between B and C is seen only by the left eye because the nearer wall between C and D occludes it from the right eye view.
  • image p377fig11.11 DaVinci stereopsis phenomena occur when only one eye can receive visual inputs from part of a 3D scene due to occlusion by a nearer surface.
    || How does monocular information contribute to depth perception? DaVinci stereopsis (Gillam etal 1999). Only by utilizing monocular information can the visual system create the correct depth percept. [left, right] eye view
  • image p379fig11.13 How the 3D LAMINART model explains DaVinci stereopsis. All the stages of boundary and surface formation are color coded to clarify their explanation. Although each mechanism is very simple, when all of them act together, the correct depthful surface representation is generated. See the text for details.
    || DaVinci stereopsis (Nakayama, Shimojo 1990). An emergent property of the previous simple mechanisms working together. [V1 binocular boundaries -> V2 initial boundaries -> V2 final boundaries -> V4 surface] pair [Binocular match: boundaries of thick bar -> Add monocular boundaries along lines-of-sight -> Line-of-sight inhibition kills weaker vertical boundaries -> 3D surface percept not just a disparity match!] pair [Binocular match: right edge of thin and thick bars -> Strongest boundaries: binocular and monocular boundaries add -> Vertical boundaries from monocular left edge of thin bar survive -> Filling-in contained by connected boundaries]. cart [very near, near, fixation plane, far, very far]
  • image p380fig11.14 The model explanation of DaVinci stereopsis when the input stimuli have opposite contrast polarities.
    || Polarity-reversed Da Vinci stereopsis (Nakayama, Shimojo 1990). Same explanation! (... as Figure 11.13 ...) [V1 binocular boundaries -> V2 initial boundaries -> V2 final boundaries -> V4 surface] cart [very near, near, fixation plane, far, very far]
  • image p381fig11.15 The same model mechanisms explain the surface percept that is generated by the variant of DaVinci stereopsis that Gillam, Blackburn, and Nakayama studied in 1999.
    || DaVinci stereopsis (Gillam, Blackburn, Nakayama 1999). same model mechanisms. [V1 binocular boundaries -> V2 initial boundaries -> V2 final boundaries -> V4 surface] cart [very near, near, fixation plane, far, very far]
  • image p382fig11.16 The version of DaVinci stereopsis wherein three narrow rectangles are binocularly matched with one thick rectangle can also be explained in a similar way.
    || DaVinci stereopsis of [3 narrow, one thick] rectangles (Gillam, Blackburn, Nakayama 1999). [V1 binocular boundaries -> V2 initial boundaries -> V2 final boundaries -> V4 surface] cart [very near, near, fixation plane, far, very far]
  • p618 Chapter 17 A universal development code - Mental measurements embody universal laws of cell biology and physics
  • image p073fig02.19 Computing with cells: infinity does not exist in biology!
    || Computing in a bounded activity domain, Gedanken experiment (Grossberg 1970). Vm sub-areas [xm, B - xm], I(all m), m = [1, i, B].
    B = excitable sites
    xi(t) = excited sites (activity, potential)
    B - xi(t) = unexcited sites
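    The bounded-activity idea of this Gedanken experiment is usually written as a shunting equation, dx/dt = -A*x + (B - x)*I, in which the input I can excite only the B - x unexcited sites, so activity x stays in [0, B] however large I grows. A minimal numerical sketch (the parameter values are illustrative choices, not from the book):

    ```python
    # Shunting (bounded-activity) cell: input I excites only the B - x unexcited
    # sites, so x can never exceed the number of excitable sites B.
    # dx/dt = -A*x + (B - x)*I  (A = passive decay rate; values are illustrative)
    A, B = 1.0, 1.0
    dt = 0.001

    def steady_state(I, steps=20000):
        x = 0.0                                # all sites initially unexcited
        for _ in range(steps):
            x += dt * (-A * x + (B - x) * I)   # Euler integration
        return x

    # Analytically, x settles at B*I/(A + I) < B: bounded even as I -> infinity.
    x_small = steady_state(1.0)    # ~0.5
    x_big = steady_state(100.0)    # ~0.99, still below B
    ```

    However strong the input, the activity saturates below B: "infinity does not exist in biology".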
  • image p082fig02.37 My models begin with behavioral data, since brains are designed to achieve behavioral success. The text explains how models evolve in stages, through a process of successive refinements, or unlumpings. These unlumpings together carry out a kind of conceptual evolution, leading to models that can explain and predict ever larger psychological and neurobiological databases.
    || Modelling method and cycle.
    Behavioral data -(art of modeling)-> Design principles <- Neural data <-(brain predictions)- Mathematical model and analysis -(behavioral predictions)-> Behavioral data
    Operationalizes "proper level of abstraction"
    Operationalizes that you cannot "derive a brain" in one step.
  • image p501fig13.26 A simple differential equation describes the processes of transmitter accumulation and release that do their best, at a finite rate, to carry out unbiased transduction.
    || Transmitter accumulation and release. Transmitter y cannot be restored at an infinite rate: T = S*y, y ~= B. Differential equation: d[dt: y] = A*(B - y) - S*y = accumulate - release. Transmitter y tries to recover to ensure unbiased transduction. What if it falls behind? Evolution has exploited the good properties that happen then.
  • Grossberg 2021 p081c2h0.66 sub-section: How does one evolve a computational brain?
    The above discussion illustrates that no single step of theoretical derivation can derive a whole brain. One needs a method for deriving a brain in stages, or cycles, much as evolution has incrementally discovered ever more complex brains over many thousands of years. The following theoretical method has been successfully applied many times since I first used it in 1957. It embodies a kind of conceptual evolutionary process for deriving a brain.

    Because "brain evolution needs to achieve behavioural success", we need to start with data that embody indices of behavioral success. That is why, as illustrated in Figure 2.37 Modelling method and cycle, one starts with Behavioral Data from scores or hundreds of psychological experiments. These data are analyzed as the result of an individual adapting autonomously in real time to a changing world. This is the Art of Modeling. It requires that one be able to infer from static data curves the dynamical processes that control individual behaviors occurring in real time. One of the hardest things that I teach my students is "how to think in real time" in order to carry out this speculative leap.

    Properly carried out, this analysis leads to the discovery of new Design Principles that are embodied by these behavioral processes. The Design Principles highlight the functional meaning of the data, and clarify how individual behaviors occurring in real time give rise to these static data curves.

    These principles are then converted into the simplest Mathematical Model using a method of minimal anatomies, which is a form of Occam's Razor, or principle of parsimony. Such a mathematical model embodies the psychological principles using the simplest possible differential equations. By "simplest" I mean that, if any part of the derived model is removed, then a significant fraction of the targeted data could no longer be explained. One then analyzes the model mathematically and simulates it on the computer, showing along the way how variations on the minimal anatomy can realize the design principles in different individuals or species.

    This analysis has always provided functional explanations and Behavioral Predictions for much larger behavioral data bases than those used to discover the Design Principles. The most remarkable fact is, however, that the behaviorally derived model always looks like part of a brain, thereby explaining a body of challenging Neural Data and making novel Brain Predictions.

    The derivation hereby links mind to brain via psychological organizational principles and their mechanistic realization as a mathematically defined neural network. This startling fact is what I first experienced as a college Freshman taking Introductory Psychology, and it changed my life forever.

    I conclude from having had this experience scores of times since 1957 that brains look the way they do because they embody a natural computational realization for controlling autonomous adaptation in real-time to a changing world. Moreover, the Behavior -> Principles -> Model -> Neural derivation predicts new functional roles for both known and unknown brain mechanisms by linking the brain data to how it helps to ensure behavioral success. As I noted above, the power of this method is illustrated by the fact that scores of these predictions about brain and behavior have been supported by experimental data 5-30 years after they were first published.

    Having made the link from behavior to brain, one can then "burn the candle from both ends" by pressing both top-down from Behavioral Data and bottom-up from Brain Data to clarify what the model can and cannot explain at its current stage of derivation. No model can explain everything. At each stage of development, the model can cope with certain environmental challenges but not others. An important part of the mathematical and computational analysis is to characterize the boundary between the known and unknown; that is, which challenges the model can cope with and which it cannot. The shape of this boundary between the known and unknown helps to direct the theorist's attention to new design principles that have been omitted from previous analysis.

    The next step is to show how these new design principles can be incorporated into the evolved model in a self-consistent way, without undermining its previous mechanisms, thereby leading to a progressively more realistic model, one that can explain and predict ever more behavioral and neural data. In this way, the model undergoes a type of evolutionary development, as it becomes able to cope behaviorally with environmental constraints of ever increasing subtlety and complexity. The Method of Minimal Anatomies may hereby be viewed as a way to functionally understand how increasingly demanding combinations of environmental pressures were incorporated into brains during the evolutionary process.

    If such an Embedding Principle cannot be carried out - that is, if the model cannot be unlumped or refined in a self-consistent way - then the previous model was, put simply, wrong, and one needs to figure out which parts must be discarded. Such a model is, as it were, an evolutionary dead end. Fortunately, this has not happened to me since I began my work in 1957 because the theoretical method is so conservative. No theoretical addition is made unless it is supported by multiple experiments that cannot be explained in its absence. Where multiple mechanistic instantiations of some Design Principles were possible, they were all developed in models to better understand their explanatory implications. Not all of these instantiations could survive the pressure of the evolutionary method, but some always could. As a happy result, all earlier models have been capable of incremental refinement and expansion.

    The cycle of model evolution has been carried out many times since 1957, leading today to increasing numbers of models that individually can explain and predict psychological, neurophysiological, anatomical, biophysical, and even biochemical data. In this specific sense, the classical mind-body problem is being incrementally solved.

    Howell: bold added for emphasis.
    (keys : Principles-Principia, behavior-mind-brain link, brain evolution, cycle of model evolution)
    see also quotes: Charles William Lucas "Universal Force" and others (not retyped yet).
  • image p248fig05.49 This circuit of the LAMINART model helps to explain properties of Up and Down states during slow wave sleep, and how disturbances in ACh dynamics can disrupt them.
    ||
  • image p278fig06.28 If the feature-category resonances cannot form, say due to a lesion in IT, then a surface-shroud resonance can still support conscious seeing of an attended object, and looking at or reaching for it, even if the individual doing so knows nothing about the object, as occurs during visual agnosia. The surface-shroud resonance supports both spatial attention and releases commands that embody the intention to move towards the attended object.
    || What kinds of resonances support knowing vs seeing? visual agnosia: reaching without knowing Patient DF (Goodale etal 1991). Attention and intention both parietal cortical functions (Andersen, Essick, Siegel 1985; Gnadt, Andersen 1988; Snyder, Batista, Andersen 1997, 1998)
  • image p557fig15.26 Brain regions and processes that contribute to autistic behavioral symptoms when they become imbalanced in prescribed ways.
    || Basal Ganglia prolonged gate opening <-> { Amygdala emotionally depressed-> [hippocampus- hyperspecific learning; Cerebellum- adaptive timing fails; hypofrontal blocking fails, no Theory of Mind]-> Neocortex; Neocortex- rewards not received-> Amygdala}.
  • image p189fig05.04 The hippocampus is one of several brain regions that are important in learning and remembering about objects and events that we experience throughout life. The book will describe several hippocampal processes that contribute to this achievement in different ways.
    || hypothalamic nuclei, amygdala, hippocampus, cingulate gyrus, corpus callosum, thalamus
  • image p228fig05.37 A macrocircuit of the neurotrophic Spectrally Timed ART, or nSTART, model. I developed nSTART with my PhD student Daniel Franklin. It proposes how adaptively timed learning in the hippocampus, bolstered by Brain Derived Neurotrophic Factor, or BDNF, helps to ensure normal memory consolidation.
    || habituative gates, CS, US, Thalamus (sensory cortex, category learning, conditioned reinforcer learning, adaptively timed learning and BDNF), Amygdala (incentive motivation learning), Hippocampus (BDNF), Prefrontal Cortex (attention), Pontine nuclei, Cerebellum (adaptively timed motor learning)
  • image p233fig05.42 Mismatch-induced beta oscillations have been reported in at least three parts of the brain: V1, V4, and hippocampus. Although there may be other reasons for beta oscillations in the brain, those that are caused by a mismatch should be studied in concert with the gamma oscillations that occur during a good enough match. See the text for details.
    || Is there evidence for the [gamma, beta] prediction? Yes, in at least three parts of the brain, (Buffalo EA, Fries P, Landman R, Buschman TJ, Desimone R 2011, PNAS 108, 11262-11267) Does this difference in average oscillation frequencies in the superficial and deep layers reflect layer 4 reset? Superficial recording γ (gamma), Deep recording β (beta) (Berke etal 2008, hippocampus; Buschman and Miller 2009, FEF)
  • image p520fig14.02 Macrocircuit of the main brain regions, and connections between them, that are modelled in the unified predictive Adaptive Resonance Theory (pART) of cognitive-emotional and working memory dynamics. Abbreviations in red denote brain regions used in cognitive-emotional dynamics. Those in green denote brain regions used in working memory dynamics. Black abbreviations denote brain regions that carry out visual perception, learning and recognition of visual object categories, and motion perception, spatial representation and target tracking. Arrows denote non-excitatory synapses. Hemidiscs denote adaptive excitatory synapses. Many adaptive synapses are bidirectional, thereby supporting synchronous resonant dynamics among multiple cortical regions. The output signals from the basal ganglia that regulate reinforcement learning and gating of multiple cortical areas are not shown. Also not shown are output signals from cortical areas to motor responses. V1: striate, or primary, visual cortex; V2 and V4: areas of prestriate visual cortex; MT: Middle Temporal cortex; MST: Medial Superior Temporal area; ITp: posterior InferoTemporal cortex; ITa: anterior InferoTemporal cortex; PPC: Posterior Parietal Cortex; LIP: Lateral InterParietal area; VPA: Ventral PreArcuate gyrus; FEF: Frontal Eye Fields; PHC: ParaHippocampal Cortex; DLPFC: DorsoLateral PreFrontal Cortex; HIPPO: hippocampus; LH: Lateral Hypothalamus; BG: Basal Ganglia; AMYG: AMYGdala; OFC: OrbitoFrontal Cortex; PRC: PeriRhinal Cortex; VPS: Ventral bank of the Principal Sulcus; VLPFC: VentroLateral PreFrontal Cortex. See the text for further details.
    ||
  • image p532fig14.08 Macrocircuit of the ARTSCENE Search neural model for learning to search for desired objects by using the sequences of already experienced objects and their locations to predict what and where the desired object is. V1 = First visual area or primary visual cortex; V2 = Second visual area; V4 = Fourth visual area; PPC = Posterior Parietal Cortex; ITp = posterior InferoTemporal cortex; ITa = anterior InferoTemporal cortex; MTL = Medial Temporal Lobe; PHC = ParaHippoCampal cortex; PRC = PeriRhinal Cortex; PFC = PreFrontal Cortex; DLPFC = DorsoLateral PreFrontal Cortex; VPFC = Ventral PFC; SC = Superior Colliculus.
    ||
  • image p541fig15.02 The neurotrophic Spectrally Timed Adaptive Resonance Theory, or nSTART, model of (Franklin, Grossberg 2017) includes hippocampus to enable adaptively timed learning that can bridge a trace conditioning gap, or other temporal gap between CS and US.
    || Hippocampus can sustain a Cognitive-Emotional resonance that can support "the feeling of what happens" and knowing what event caused that feeling. [CS, US] -> Sensory Cortex (SC) <- motivational attention <-> category learning -> Prefrontal Cortex (PFC). SC conditioned reinforcement learning-> Amygdala (cannot bridge the temporal gap) incentive motivational learning-> PFC. SC adaptively timed learning and BDNF-> Hippocampus (can bridge the temporal gap) BDNF-> PFC. PFC adaptively timed motor learning-> cerebellum.
  • image p543fig15.06 The circuit between dentate granule cells and CA1 hippocampal pyramid cells seems to compute spectrally timed responses. See the text for details.
    || Hippocampal interpretation. 1. Dentate granule cells (Berger, Berry, Thompson 1986): "increasing firing...in the CS period...the latency...was constant". 2. Pyramidal cells: "Temporal model" Dentate granule cells-> CA3 pyramids. 3. Convergence (Squire etal 1989): 1e6 granule cells, 1.6e5 CA3 pyramids. 80-to-1 (ri).
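    The "spectrum" idea behind this circuit can be caricatured in a few lines: a population of cells preferring different delays, each weighted once (Hebbian-style) by its response at the inter-stimulus interval (ISI), produces a summed population response that peaks near the ISI. The Gaussian tuning curves and every parameter below are my own simplifications, not the model's actual dentate-CA3 dynamics:

    ```python
    import math

    ISI = 0.4                                # interval to be learned (illustrative)
    taus = [0.1 * k for k in range(1, 11)]   # a "spectrum" of preferred delays
    sigma = 0.05                             # width of each stylized tuning curve

    def g(t, tau):
        """Stylized response of a cell whose activity peaks at delay tau."""
        return math.exp(-((t - tau) ** 2) / (2 * sigma ** 2))

    # One-shot Hebbian-style weights: each cell weighted by its response at the ISI.
    w = [g(ISI, tau) for tau in taus]

    # Adaptively timed population response R(t) = sum_i w_i * g_i(t).
    grid = [0.01 * k for k in range(1, 101)]
    R = [sum(wi * g(t, tau) for wi, tau in zip(w, taus)) for t in grid]
    t_peak = grid[R.index(max(R))]
    # t_peak lands near the ISI: the population has "learned" the delay.
    ```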
  • image p549fig15.19 How the adaptively timed hippocampal spectrum T inhibits (red arrow) the orienting system A as motivated attention in orbitofrontal cortex Si(2) peaks at the ISI.
    || Conditioning, Attention, and Timing circuit. Hippocampus spectrum-> Amygdala orienting system-> neocortex motivational attention. Adaptive timing inhibits orienting system and maintains adaptively timed Motivated Attention on the CS.
  • image p573fig16.01 The experimental chamber (A) and neurophysiological recordings from a rat hippocampus (B) that led to the discovery of place cells. See the text for details.
    ||
  • image p575fig16.03 As a rat navigates in its experimental chamber (black curves), neurophysiological recordings disclose the firing patterns (in red) of (a) a hippocampal place cell and (b) an entorhinal grid cell.
    ||
  • image p578fig16.04 Cross-sections of the hippocampal regions and the inputs to them. See the text for details.
    || EC-> CA1-> CA3-> DG. Layers [V/VI, III, II].
  • image p583fig16.10 The GRIDSmap model is embedded into a more complete representation of the processing stages from receipt of angular head velocity and linear velocity signals to this learning of place cells.
    || GRIDSmap. Pre-wired 2D stripe cells, learns 2D grid cells. vestibular cells [angular head velocity-> head direction cells, linear velocity]-> stripe cells- small scale 1D periodic spatial code (ECIII)-> SOM grid cells entorhinal cortex- small scale 2D periodic spatial scale-> SOM place cells hippocampal cortex- large scale 2D spatial code (dentate/CA3). Unified hierarchy of SOMs.
  • image p600fig16.36 The entorhinal-hippocampal system has properties of an ART spatial category learning system, with hippocampal place cells as the spatial categories. See the text for details.
    || Entorhinal-hippocampal interactions as an ART system. Hippocampal place cells as spatial categories. Angular head velocity-> head direction cells-> stripe cells- small scale 1D periodic code (ECIII) SOM-> grid cells- small scale 2D periodic code (ECII) SOM-> place cells- larger scale spatial map (DG/CA3)-> place cells (CA1)-> conjunctive-coding cells (EC V/VI)-> top-down feedback back to stripe cells- small scale 1D periodic code (ECIII). stripe cells- small scale 1D periodic code (ECIII)-> place cells (CA1).
  • image p602fig16.37 Data showing the effect of hippocampal inactivation by muscimol on grid cell firing before, during, and six hours after the muscimol, reading from left to right.
    || Hippocampal inactivation disrupts grid cells (Bonnevie etal 2013). muscimol inactivation. spikes on trajectory: [before, after min [6-20, 20-40, 40-60, 6h]]. rate map (Hz) [18.6, 11.4, 9.5, 6.7, 10.8]. spatial autocorrelogram g=[1.12, 0.05, -0.34, 0.09, 1.27].
  • image p603fig16.38 Role of hippocampal feedback in maintaining grid fields. (a) Data showing the effect of hippocampal inactivation before and during muscimol inhibition of hippocampal cells, as in Figure 16.37. (b) Model simulation with normal grid fields. (c) Model simulation that emulates the effect of hippocampal inhibition on grid fields.
    || (a) Data: hippocampal inactivation [before, after] cart [spikes on trajectory (p: [18.6, 6.7] Hz), spatial autocorrelogram (g= [1.12, 0.09])]. (b) Model: noise-free path integration, [spikes on trajectory (p: 14.56 Hz), rate map, spatial autocorrelogram (g= 1.41), dynamic autocorrelogram (g=0.6)]. (c) Model: noisy path integration + non-specific tonic inhibition, [spikes on trajectory (p: 11.33 Hz), rate map, spatial autocorrelogram (g= 0.05), dynamic autocorrelogram (g=0.047)].
  • image p617fig16.50 The perirhinal and parahippocampal cortices enable adaptively timed reinforcement learning and spatial navigational processes that are modeled by Spectral Spacing models in the What and Where cortical streams, respectively, to be fused in the hippocampus.
    || What and Where inputs to the hippocampus (Diana, Yonelinas, Ranganath 2007). Adaptively timed conditioning and spatial navigation. Hippocampus <-> Entorhinal Cortex <-> [Perirhinal Cortex <-> what, Parahippocampal Cortex <-> where].
  • p190 Howell: [neural microcircuits, modal architectures] used in ART -
    bottom-up filters | top-down expectations | purpose
    instar learning | outstar learning | p200fig05.13 Expectations focus attention: [in, out]star often used to learn the adaptive weights. top-down expectations to select, amplify, and synchronize expected patterns of critical features, while suppressing unexpected features
    LGN Lateral Geniculate Nucleus | V1 cortical area | p192fig05.06 focus attention on expected critical features
    EC-III entorhinal stripe cells | CA1 hippocampal place cells | p600fig16.36 entorhinal-hippocampal system has properties of an ART spatial category learning system, with hippocampal place cells as the spatial categories
    auditory orienting arousal | auditory category | p215fig05.28 How a mismatch between bottom-up and top-down patterns can trigger activation of the orienting system A and, with it, a burst of nonspecific arousal to the category level. ART Matching Rule: TD mismatch can suppress a part of F1 STM pattern, F2 is reset if degree of match < vigilance
    auditory stream with/without [silence, noise] gaps | perceived sound continues? | p419fig12.17 The auditory continuity illusion: Backwards in time - How does a future sound let past sound continue through noise? Resonance!
    visual perception, learning and recognition of visual object categories | motion perception, spatial representation and target tracking | p520fig14.02 pART predictive Adaptive Resonance Theory. Many adaptive synapses are bidirectional, thereby supporting synchronous resonant dynamics among multiple cortical regions. The output signals from the basal ganglia that regulate reinforcement learning and gating of multiple cortical areas are not shown.
    red - cognitive-emotional dynamics
    green - working memory dynamics
    black - see [bottom-up, top-down] lists
    EEG with mismatch, arousal, and STM reset events | expected [P120, N200, P300] event-related potentials (ERPs) | p211fig05.21 Sequences of P120, N200, and P300 event-related potentials (ERPs) occur during oddball learning EEG experiments under conditions that ART predicted should occur during sequences of mismatch, arousal, and STM reset events, respectively.
    Cognitive | Emotional | p541fig15.02 nSTART neurotrophic Spectrally Timed Adaptive Resonance Theory: Hippocampus can sustain a Cognitive-Emotional resonance that can support "the feeling of what happens" and knowing what event caused that feeling. Hippocampus enables adaptively timed learning that can bridge a trace conditioning gap, or other temporal gap between Conditioned Stimulus (CS) and Unconditioned Stimulus (US).

    background colours in the table signify :
    white | general microcircuit : a possible component of ART architecture
    lime green | sensory perception [attention, expectation, learn]. Table includes [see, hear, !!*must add touch example*!!], no Grossberg [smell, taste] yet?
    light blue | post-perceptual cognition?
    pink | "the feeling of what happens" and knowing what event caused that feeling
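    The instar/outstar pairing in the first table row can be sketched as gated steepest-descent learning: an instar drives a category's incoming weights toward the input pattern whenever that category is active, and an outstar uses the same update with the gate on the sending cell to learn the expected (top-down) pattern. The learning rate and patterns below are arbitrary illustrative choices of mine:

    ```python
    # Gated steepest-descent learning (instar form):
    #   w_ij <- w_ij + lr * y_j * (x_i - w_ij)
    # Category activity y_j gates learning; the outstar rule has the same shape
    # with the gate on the presynaptic (sending) cell instead.
    def gated_step(w, x, y, lr=0.1):
        return [[wij + lr * yj * (xi - wij) for wij, xi in zip(row, x)]
                for row, yj in zip(w, y)]

    x = [1.0, 0.0, 0.5]            # input (critical feature) pattern
    y = [1.0, 0.0]                 # category activities: only category 0 active
    w = [[0.0] * 3, [0.0] * 3]     # one weight row per category
    for _ in range(100):
        w = gated_step(w, x, y)
    # The active category's weights track x; the inactive category's stay put.
    ```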
  • image p419fig12.17 The auditory continuity illusion illustrates the ART Matching Rule at the level of auditory streaming. Its "backwards in time" effect of future context on past conscious perception is a signature of resonance.
    || Auditory continuity illusion. input, percept. Backwards in time - How does a future sound let past sound continue through noise? Resonance! - It takes a while to kick in. After it starts, a future tone can maintain it much more quickly. Why does this not happen if there is no noise? - ART Matching Rule! TD harmonic filter is modulatory without BU input. It cannot create something out of nothing.
  • image p420fig12.18 The ARTSTREAM model explains and simulates the auditory continuity illusion as an example of a spectral-pitch resonance. Interactions of ART Matching Rule and asymmetric competition mechanisms in cortical strip maps explain how the tone selects the consistent frequency from the noise in its own stream while separating the rest of the noise into another stream.
    || ARTSTREAM model (Grossberg 1999; Grossberg, Govindarajan, Wyse, Cohen 2004). SPINET. Frequency and pitch strips. Bottom Up (BU) harmonic sieve. Top Down (TD) harmonic ART matching. Exclusive allocation. Learn pitch categories based on early harmonic processing. A stream is a Spectral-Pitch Resonance!
  • image p425fig12.23 ARTSTREAM simulations of the auditory continuity illusion and other streaming properties (left column, top row). When two tones are separated by silence (Input), a percept of silence also separates them in a spectral-pitch resonance. (left column, bottom row). When two tones are separated by broadband noise, the percept of tone continues through the noise in one stream (stream 1) while the remainder of the noise occurs in a different stream (stream 2). (right column) Some of the other streaming properties that have been simulated by the ARTSTREAM model.
    || Auditory continuity does not occur without noise. Auditory continuity in noise. Other simulated streaming data.
  • p190 Howell: [neural microcircuits, modal architectures] used in ART -
    bottom-up filters | top-down expectations | purpose
    instar learning | outstar learning | p200fig05.13 Expectations focus attention: [in, out]star often used to learn the adaptive weights. top-down expectations to select, amplify, and synchronize expected patterns of critical features, while suppressing unexpected features
    LGN Lateral Geniculate Nucleus | V1 cortical area | p192fig05.06 focus attention on expected critical features
    EC-III entorhinal stripe cells | CA1 hippocampal place cells | p600fig16.36 entorhinal-hippocampal system has properties of an ART spatial category learning system, with hippocampal place cells as the spatial categories
    auditory orienting arousal | auditory category | p215fig05.28 How a mismatch between bottom-up and top-down patterns can trigger activation of the orienting system A and, with it, a burst of nonspecific arousal to the category level. ART Matching Rule: TD mismatch can suppress a part of the F1 STM pattern; F2 is reset if degree of match < vigilance
    auditory stream with/without [silence, noise] gaps | perceived sound continues? | p419fig12.17 The auditory continuity illusion: Backwards in time - How does a future sound let a past sound continue through noise? Resonance!
    visual perception, learning and recognition of visual object categories | motion perception, spatial representation and target tracking | p520fig14.02 pART predictive Adaptive Resonance Theory. Many adaptive synapses are bidirectional, thereby supporting synchronous resonant dynamics among multiple cortical regions. The output signals from the basal ganglia that regulate reinforcement learning and gating of multiple cortical areas are not shown.
    red - cognitive-emotional dynamics
    green - working memory dynamics
    black - see [bottom-up, top-down] lists
    EEG with mismatch, arousal, and STM reset events | expected [P120, N200, P300] event-related potentials (ERPs) | p211fig05.21 Sequences of P120, N200, and P300 event-related potentials (ERPs) occur during oddball learning EEG experiments under conditions that ART predicted should occur during sequences of mismatch, arousal, and STM reset events, respectively.
    Cognitive | Emotional | p541fig15.02 nSTART neurotrophic Spectrally Timed Adaptive Resonance Theory: Hippocampus can sustain a Cognitive-Emotional resonance that can support "the feeling of what happens" and knowing what event caused that feeling. Hippocampus enables adaptively timed learning that can bridge a trace conditioning gap, or other temporal gap between Conditioned Stimulus (CS) and Unconditioned Stimulus (US).

    background colours in the table signify:
    white | general microcircuit: a possible component of ART architecture
    lime green | sensory perception [attention, expectation, learn]. Table includes [see, hear, !!*must add touch example*!!], no Grossberg [smell, taste] yet?
    light blue | post-perceptual cognition?
    pink | "the feeling of what happens" and knowing what event caused that feeling
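The ART Matching Rule noted above (p215fig05.28: F2 is reset if degree of match < vigilance) lends itself to a tiny numerical sketch. A minimal sketch, assuming binary feature vectors and a simple matched-fraction vigilance test; the function name `art_match` and the example vectors are illustrative, not from the book:

```python
import numpy as np

def art_match(bottom_up, top_down, vigilance=0.8):
    """Minimal ART Matching Rule sketch: the top-down expectation masks the
    bottom-up pattern at F1; if the matched fraction of the input falls below
    vigilance, the orienting system resets the active F2 category."""
    matched = np.minimum(bottom_up, top_down)        # F1 keeps only expected features
    degree_of_match = float(matched.sum() / bottom_up.sum())
    reset = degree_of_match < vigilance              # arousal burst -> F2 reset
    return matched, degree_of_match, reset

I = np.array([1, 1, 1, 0, 0], dtype=float)   # bottom-up input
W = np.array([1, 1, 0, 0, 0], dtype=float)   # top-down expectation
_, match, reset = art_match(I, W, vigilance=0.8)
# match = 2/3 < 0.8, so the active category is reset and memory search continues
```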
  • p404 Chapter 12 From seeing and reaching to hearing and speaking - Circular reaction, streaming, working memory, chunking, and number
  • image p030tbl01.02 The What and Where cortical processing streams obey complementary laws. These laws enable the What stream to rapidly and stably learn invariant object categories without experiencing catastrophic forgetting, while the Where stream learns labile spatial and action representations to control actions that are aimed towards these objects.
    ||
    WHAT | WHERE
    spatially-invariant object learning and recognition | spatially-variant reaching and movement
    fast learning without catastrophic forgetting | continually update sensory-motor maps and gains
    IT Inferotemporal Cortex | PPC Posterior Parietal Cortex
    What | Where
    matching | excitatory | inhibitory
    learning | match | mismatch
  • image p032fig01.21 At least three parallel visual cortical streams respond to visual inputs that reach the retina. Two parvocellular streams process visual surfaces (blob stream) and visual boundaries (interblob stream). The magnocellular stream processes visual motion.
    || [Retina, LGNs, V[1,2,3,4], MT] to [What- inferotemporal areas, Where- parietal areas]: visual parallel streams [2x blob, 1x bound]
  • image p039tbl01.03 The link between consciousness and movement
    ||
    VISUAL | seeing, knowing, and reaching
    AUDITORY | hearing, knowing, and speaking
    EMOTIONAL | feeling, knowing, and acting
  • image p092fig03.05 A cross-section of the retinal layer. Note that light stimuli need to go through all retinal layers before they reach the photoreceptor layer at which the light signals are registered.
    || light stimuli ->
    retinal layers | cellular composition
    inner limiting membrane
    retinal nerve fibre | ganglion nerve fibres
    ganglion cell | ganglion
    inner plexiform | amacrine
    inner nuclear | horizontal
    outer plexiform
    outer limiting membrane
    photoreceptor | rod
    photoreceptor | cone
    retinal pigment epithelium
    <- signal transduction. http://brain.oxfordjournals.org/content/early/2011/01/20/brain.awq346
  • image p232fig05.41 (a)-(c). The sequence of interlaminar events that SMART predicts during a mismatch reset. (d) Some of the compatible neurophysiological data.
    || Mismatch causes layer 5 dendritic spikes that trigger reset. (a) Arousal causes increase in nonspecific thalamic nuclei firing rate and layer 5 dendritic and later somatic spikes (Larkum and Zhu 2002, Williams and Stuart 1999) (b) Layer 5 spikes reach layer 4 via layer 6i and inhibitory neurons (Lund and Boothe 1975, Gilbert and Wiesel 1979) (c) habituative neurotransmitters in layer 6i shift the balance of active cells in layer 4 (Grossberg 1972, 1976) (d) Dendritic stimulation fires layer 5 (Larkum and Zhu 2002): stimulation of apical dendrites of nonspecific thalamus
  • image p278fig06.28 If the feature-category resonances cannot form, say due to a lesion in IT, then a surface-shroud resonance can still support conscious seeing of an attended object, and looking at or reaching for it, even if the individual doing so knows nothing about the object, as occurs during visual agnosia. The surface-shroud resonance supports both spatial attention and releases commands that embody the intention to move towards the attended object.
    || What kinds of resonances support knowing vs seeing? visual agnosia: reaching without knowing Patient DF (Goodale etal 1991). Attention and intention both parietal cortical functions (Anderson, Essick, Siegel 1985; Gnadt, Andersen 1988; Snyder, Batista, Andersen 1997, 1998)
  • image p303fig08.20 The G-wave speeds up with the distance between flashes at a fixed delay, and has a consistent motion across multiple spatial scales.
    || G-wave properties (Grossberg 1977). Theorem 2 (Equal half-time property) The time at which the motion signal reaches position w=L/2. Apparent motion speed-up with distance: this half-time is independent of the distance L between the two flashes. Consistent motion across scales: half-time is independent of the scale size K. Method of proof: elementary algebra and calculus (Grossberg, Rudd 1989 appendix)
  • image p304fig08.21 A computer simulation of the equal half-time property whereby the apparent motions within different scales that respond to the same flashes all reach the half-way point in the motion trajectory at the same time.
    || Equal half-time property: how multiple scales cooperate to generate motion percept. Travelling waves from Gaussian filters of different sizes bridge the same distance in comparable time. The time needed to bridge half the distance between flashes is the same.
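The equal half-time property in the two bullets above can be checked numerically. A minimal sketch, assuming illustrative flash kinetics exp(-t) for the waning first Gaussian and 1-exp(-t) for the waxing second one (not the book's exact equations); with these kinetics the half-time comes out near ln 2 regardless of flash separation L or scale K:

```python
import numpy as np

def half_time(L, K, dt=1e-3, t_max=3.0):
    """Time at which the peak of the summed Gaussian activity reaches w = L/2.
    K >= L keeps the summed field unimodal, so the peak travels smoothly."""
    w = np.linspace(-2 * K, L + 2 * K, 2001)
    for t in np.arange(dt, t_max, dt):
        G = np.exp(-t) * np.exp(-w**2 / (2 * K**2)) \
            + (1 - np.exp(-t)) * np.exp(-(w - L)**2 / (2 * K**2))
        if w[np.argmax(G)] >= L / 2:
            return float(t)
    return t_max

# Half-time is ~ln 2 for every (L, K) pair: independent of distance and scale.
times = [half_time(L, K) for L, K in [(1.0, 1.5), (2.0, 3.0), (1.0, 3.0)]]
```

The reason is the symmetry argument in the proof sketch: when the two Gaussian amplitudes are equal, the sum is symmetric about w = L/2, so the peak sits there at a time fixed by the temporal dynamics alone.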
  • image p335fig08.61 Behavioral data (left image) and simulation (right image) about speed in correct and error trials of the RT task. See text for details.
    || Behavioral data: speed, correct and error trials (RT task) (Roitman, Shadlen 2002). More coherence in the motion causes faster reaction time.
  • image p350fig09.22 How the negative Gaussian of an obstacle causes a peak shift to avoid the obstacle without losing sight of how to reach the goal.
    || Steering dynamics: obstacle avoidance. body-centered coordinates [obstacle, goal, heading] -> steering
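The peak-shift idea in p350fig09.22 (a negative Gaussian for the obstacle added to a positive Gaussian for the goal shifts the chosen heading) can be sketched over candidate headings. All gains and widths below are illustrative assumptions:

```python
import numpy as np

# The goal attracts heading via a positive Gaussian; the obstacle repels it via
# a negative Gaussian; the chosen heading is the peak of their sum.
theta = np.linspace(-90.0, 90.0, 1801)        # candidate headings (degrees)
goal, obstacle = 0.0, 10.0                    # obstacle slightly right of goal
field = np.exp(-(theta - goal)**2 / (2 * 20.0**2)) \
        - 0.7 * np.exp(-(theta - obstacle)**2 / (2 * 10.0**2))
heading = float(theta[np.argmax(field)])
# The peak shifts away from the obstacle (heading < goal) without losing the goal.
```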
  • image p351fig09.25 By the time MT+ is reached, directional transient cells and directional filters have begun to extract more global directional information from the image.
    || MT+ computes global motion estimate. Estimate global motion from noisy local motion estimates.
  • image p414fig12.11 Neurophysiological data from cortical areas 4 and 5 (every other column) and simulations thereof (other columns) during a reach.
    || activation vs time. (a) area 4 phasic RT (IFV) (b) area 4 tonic (OPV) (c) area 4 phasic-tonic (OFPV) (d) area 4 phasic MT (DVV) (e) area 5 phasic (DV) (f) area 5 tonic (PPV)
  • image p416fig12.13 The DIRECT model learns, using a circular reaction that is energized by an Endogenous Random Generator, or ERG, to make motor-equivalent volitionally-activated reaches. This circular reaction learns a spatial representation of a target in space. It can hereby make accurate reaches with clamped joints and on its first try using a tool under visual guidance; see Figure 12.16.
    || DIRECT model (Bullock, Grossberg, Guenther 1993). learns by circular reaction. learns spatial representation to mediate between vision and action. motor-equivalent reaching. can reach target with clamped joints. can reach target with a TOOL on the first try under visual guidance. How did tool use arise?!
  • image p416fig12.14 Computer simulations of DIRECT reaches with (b) a tool, (c) a clamped elbow, and (d) with a blindfold, among other constraints.
    || Computer simulations of DIRECT reaches [unconstrained, with TOOL, elbow clamped at 140°, blindfolded]
  • image p417fig12.15 The DIRECT and DIVA models have homologous circuits to learn and control motor-equivalent reaching and speaking, with tool use and coarticulation as resulting properties. See the text for why.
    || From Seeing and Reaching to Hearing and Speaking, Circular reactions (Piaget 1945, 1951, 1952). Homologous circuits for development and learning of motor-equivalent REACHING and SPEAKING. DIRECT TOOL use (Bullock, Grossberg, Guenther 1993), DIVA Coarticulation (Guenther 1995)
  • image p428fig12.25 (left architecture) Auditory-articulatory feedback loop whereby babbled sounds activate learning in an imitative map that is later used to learn to reproduce the sounds of other speakers. An articulatory-to-auditory expectation renders learning possible by making the auditory and motor data dimensionally consistent, as in the motor theory of speech. (right architecture) Parallel streams in the ARTSPEECH model for learning speaker-independent speech and language meaning, including a mechanism for speaker normalization (right cortical stream) and for learning speaker-dependent vocalic qualities (left cortical stream).
    || left: Speaker-dependent vocalic qualities; right: Speaker-independent speech and language meaning
  • image p430fig12.26 The NormNet model shows how speaker normalization can be achieved using specializations of the same mechanisms that create auditory streams. See the text for how.
    || [Anchor vs Stream] log frequency map. -> diagonals-> Speaker-independent acoustic item information-> [BU adaptive filter, TD learned expectation]-> learned item recognition categories
  • image p446fig12.44 (left column, top row) LIST PARSE can model linguistic data from human subjects. In this figure, model parameters are fixed to enable a close fit to data about error-type distributions in immediate free recall experiments, notably transposition errors. (right column, top row) Simulation and data showing bowing of the serial position curve, including an extended primacy gradient. (left column, bottom row) The simulation curve overlays data about list length effects, notably the increasing recall difficulty of longer lists during immediate serial recall (ISR). (right column, bottom row) Simulation (bottom image) and data (top image) of the limited temporal extent for recall.
    || (1. TL) Error-type distributions in immediate serial recall (Henson etal 1996). % occurrence vs serial position. Graph convention: Data- dashed lines; Simulations- solid lines. Six letter visual ISR. Order errors- transpositions of neighboring items are the most common. Model explanation: Noisy activation levels change relative order in primacy gradient. Similar activation of neighboring items most susceptible to noise. Model parameters fitted on these data. (2. TR) Bowing of serial position curve (Cowan etal 1999). % correct vs serial position. Auditory ISR with various list lengths (graphs shifted rightward): For [, sub-]span lists- extended primacy, with one (or two) item recency; Auditory presentation- enhanced performance for last items. LIST PARSE: End effects- first and last items have half as many neighbors; Echoic memory- last presented item retained in separate store. (3. BL) List length effects, circles (Crannell, Parrish 1968), squares (Baddeley, Hitch 1975), solid line- simulation. % list correct vs list length. Variable list length ISR: longer lists are more difficult to recall. LIST PARSE: More items- closer activation levels and lower absolute activity level with enough inputs; Noise is more likely to produce order errors; Activity levels more likely to drop below threshold. (4. BR) Limited temporal extent for recall (Murdock 1961). % recalled vs retention interval (s). ISR task with distractor-filled retention intervals (to prevent rehearsal): Increasing retention interval- decreases probability of recalling list correctly; Load dependence- longer lists more affected by delays; Performance plateau- subjects reach apparent asymptote. LIST PARSE: Increased convergence of activities with time; loss of order information.
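The LIST PARSE account of transposition errors above (noisy activation levels in a primacy gradient, with neighboring items' activities closest together) can be caricatured in a few lines. The `recall` helper and all parameter values are illustrative, not fitted model values:

```python
import random

def recall(n_items=6, decay=0.85, noise=0.05, seed=None):
    """Sketch of recall from a noisy primacy gradient: earlier items are stored
    with higher activity; Gaussian noise perturbs the activities; recall
    repeatedly selects (and then suppresses) the most active remaining item."""
    rng = random.Random(seed)
    acts = {i: decay**i + rng.gauss(0, noise) for i in range(n_items)}
    return sorted(acts, key=acts.get, reverse=True)

# Neighboring items have the closest activity levels, so most order errors
# produced by the noise are transpositions of adjacent items, as in the data.
```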
  • image p231fig05.39 The SMART hypothesis testing and learning cycle predicts that vigilance increases when a mismatch in subcortical regions like the nonspecific thalamus activates the nucleus basalis of Meynert which, in turn, broadcasts a burst of the neurotransmitter acetylcholine, or ACh, to deeper cortical layers. Due to the way in which LAMINART proposes that cortical matching and mismatching occurs, this ACh burst can increase vigilance and thereby trigger a memory search. See the text for details.
    || [BU input, [, non]specific thalamic nucleus, thalamic reticular nucleus, neocortical laminar circuit] cart [Arousal, Reset, Search, Vigilance]
  • image p461fig12.58 The lisTELOS model built upon key processes that were earlier modeled by the TELOS model. See the text for details.
    || TELOS model (Brown, Bullock, Grossberg 1999, 2004). shows [BG nigro-[thalamic, collicular], FEF, ITa, PFC, PNR-THAL, PPC, SEF, SC, V1, V4/ITp, Visual Cortex input] and [GABA].
  • image p523fig14.03 (a) The MOTIVATOR neural model generalizes CogEM by also including the basal ganglia. It can hereby explain and simulate complementary functions of the amygdala and basal ganglia (SNc) during conditioning and learned performance. The basal ganglia generate Now Print signals in response to unexpected rewards. These signals modulate learning of new associations in many brain regions. The amygdala supports motivated attention to trigger actions that are expected to occur in response to conditioned or unconditioned stimuli. Object Categories represent visual or gustatory inputs in anterior inferotemporal (ITA) and rhinal (RHIN) cortices, respectively. Value Categories represent the value of anticipated outcomes on the basis of hunger and satiety inputs, in amygdala (AMYG) and lateral hypothalamus (LH). Object-Value Categories resolve the value of competing perceptual stimuli in medial (MORB) and lateral (ORB) orbitofrontal cortex. The Reward Expectation Filter detects the omission or delivery of rewards using a circuit that spans ventral striatum (VS), ventral pallidum (VP), striosomal delay (SD) cells in the ventral striatum, the pedunculopontine nucleus (PPTN) and midbrain dopaminergic neurons of the substantia nigra pars compacta/ventral tegmental area (SNc/VTA). The circuit that processes CS-related visual information (ITA, AMYG, ORB) operates in parallel with a circuit that processes US-related visual and gustatory information (RHIN, AMYG, MORB). (b) Reciprocal adaptive connections between hypothalamus and amygdala enable amygdala cells to become learned value categories. The bottom region represents hypothalamic cells, which receive converging taste and metabolite inputs whereby they become taste-drive cells. Bottom-up signals from activity patterns across these cells activate competing value categories, or US Value Representations, in the amygdala. A winning value category learns to respond selectively to specific combinations of taste-drive activity patterns and sends adaptive top-down priming signals back to the taste-drive cells that activated it. CS-activated conditioned reinforcer signals are also associatively linked to value categories. Adaptive connections end in (approximately) hemidiscs. See the text for details.
    ||
  • image p524fig14.04 (a) Model basal ganglia circuit for the control of dopaminergic Now Print signals from the substantia nigra pars compacta, or SNc, in response to unexpected rewards. Cortical inputs (Ii), activated by conditioned stimuli, learn to excite the SNc via a multi-stage pathway from the ventral striatum (S) to the ventral pallidum and then on to the PPTN (P) and the SNc (D). The inputs Ii excite the ventral striatum via adaptive weights W_IS, and the ventral striatum excites the SNc with strength W_PD. The striosomes, which contain an adaptive spectral timing mechanism [xij, Gij, Yij, Zij], learn to generate adaptively timed signals that inhibit reward-related activation of the SNc. Primary reward signals (I_R) from the lateral hypothalamus both excite the PPTN directly (with strength W_RP) and act as training signals to the ventral striatum S (with strength W_RS) that train the weights W_IS. Arrowheads denote excitatory pathways, circles denote inhibitory pathways, and hemidiscs denote synapses at which learning occurs. Thick pathways denote dopaminergic signals.
    ||
  • image p559fig15.27 Brain regions and processes that contribute to the release of dopaminergic Now Print signals by the substantia nigra pars compacta, or SNc, in response to unexpected reinforcing events. See the text for details.
    || Model of spectrally timed SNc learning (Brown, Bullock, Grossberg 1999). Delayed inhibitory expectations of reward. Dopamine cells signal an error in reward prediction timing or magnitude. Immediate excitatory predictions of reward. Lateral hypothalamus (Primary Reward Input)-> [(+)ventral striatum <-> ventral pallidum (+)-> PPTN(+)-> SNc]. SNc-> [dopamine signal -> ventral striatum, Striosomal cells]. Conditioned Stimuli (CS)(+)-> [ventral striatum, striosomal cells]. Striosomal cells(-)-> SNc.
  • image p560fig15.29 Excitatory pathways that support activation of the SNc by a US and the conditioning of a CS to the US.
    || Excitatory pathway. Primary reward (apple juice) briefly excites lateral hypothalamus. Hypothalamic-PPTN excitation causes SNc dopamine burst. Hypothalamic activity excites ventral striatum for training. Active CS working memory signals learn to excite ventral striatum. Lateral hypothalamus (Primary Reward Input)-> [(+)ventral striatum <-> ventral pallidum(+)-> PPTN(+)-> SNc]. SNc-> dopamine signal -> ventral striatum. Conditioned Stimuli working memory trace (CS)(+)-> ventral striatum.
  • image p560fig15.30 The inhibitory pathway from striosomal cells to the SNc is able to inhibit the SNc when a reward occurs with expected timing and magnitude.
    || Inhibitory pathway. Learning: CS-striosomal LTP occurs due to a three-way coincidence [An active CS working memory input, a Ca2+ spike, a dopamine burst]; Signaling: The delayed Ca2+ spike facilitates striosomal-SNc inhibition;. Striosomal cells learn to predict both timing and magnitude of reward signal to cancel it: reward expectation;. Conditioned stimuli (CS) LTP-> Striosomal cells <- dopamine | (-)-> SNc->.
  • image p561fig15.32 The SNc can generate both dopamine bursts and dips in response to rewards whose amplitude is unexpectedly large or small.
    || Inhibitory pathway: expectation magnitude. 1. If reward is greater than expected, a dopamine burst causes striosomal expectation to increase. 2. If reward is less than expected, a dopamine dip causes striosomal expectation to decrease. 3. This is a negative feedback control system for learning. Conditioned stimuli (CS)-> Striosomal cells <- dopamine | (-)-> SNc->.
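The negative-feedback loop in point 3 above can be sketched as a one-parameter error-correcting update. The delta-rule form below is a generic stand-in for the model's spectral-timing machinery; learning rate and step count are assumptions:

```python
def train_expectation(reward=1.0, eta=0.3, steps=30):
    """Sketch of the striosomal negative-feedback idea: the dopamine signal is
    the difference between delivered reward and the learned inhibitory
    expectation; bursts (positive error) grow the expectation, dips shrink it,
    so the expectation converges to the reward magnitude and cancels it."""
    expectation = 0.0
    for _ in range(steps):
        dopamine = reward - expectation   # burst if > 0, dip if < 0
        expectation += eta * dopamine     # negative feedback update
    return expectation

# After training, expectation ~= reward, so a fully expected reward evokes
# neither a burst nor a dip.
```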
  • image p569fig15.40 The direct and indirect basal ganglia circuits that control GO and STOP movement signals. See the text for details.
    || [Direct path GO(+), Indirect path STOP(+), dopamine from SNc(+-)]-> striatum. GO-> GPi/SNr-> Thalamus (VA/Vlo) <-> frontal cortex. STOP-> GPe <-> STN-> GPi/SNr. NAc-> GPi/SNr.
  • image p375fig11.06 The contrast constraint on binocular fusion is realized by obligate cells in layer 3B of cortical area V1.
    || Model implements contrast constraint on binocular fusion (cf. "obligate" cells Poggio 1991). An ecological constraint on cortical development. [left, right] eye cart V1-[4 monocular simple, 3B binocular simple, 2/3A complex] cells. Inhibitory cells (red) ensure that fusion occurs when contrasts in left and right eye are approximately equal.
  • image p376fig11.09 The disparity filter in V2 helps to solve the correspondence problem by eliminating spurious contrasts using line-of-sight inhibition.
    || Model V2 disparity filter solves the correspondence problem. An ecological constraint on cortical development. [left, right] eye view: False matches (black) suppressed by line-of-sight inhibition (green lines). "Cells that fire together wire together".
  • image p581fig16.06 The learning of hexagonal grid cell receptive fields as an animal navigates an open field is a natural consequence of simple trigonometric properties of the positions at which the firing of stripe cells that are tuned to different directions will co-occur.
    || The Trigonometry of spatial navigation. Coactivation of stripe cells.
  • image p583fig16.09 The GRIDSmap model used algorithmically defined stripe cells to process realistic rat trajectories. The stripe cell outputs then formed inputs to the adaptive filter of a self-organizing map which learned hexagonal grid cell receptive fields.
    || GRIDSmap. Self-organizing map receives inputs from stripe cells and learns to respond to most frequent co-activation patterns. Stripe cells combine speed and head direction to create a periodic 1D position code. Virtual rat navigated using live rat trajectories from Moser Lab. Speed and head direction drive stripe cells.
  • image p584fig16.11 GRIDSmap simulation of the learning of hexagonal grid fields. See the text for details.
    || Simulation results. Multiple phases per scale. response vs length scale (0.5m+).
  • image p585fig16.13 Hexagonal grid cell receptive fields develop if their stripe cell directional preferences are separated by 7, 10, 15, 20, or random numbers of degrees. The number and directional selectivities of stripe cells can thus be chosen within broad limits without undermining grid cell development.
    ||
  • image p585fig16.14 Superimposing firing of stripe cells whose directional preferences differ by 60 degrees supports learning hexagonal grid cell receptive fields in GRIDSmap.
    || GRIDSmap: from stripe cells to grid cells. Grid-cell Regularity from Integrated Distance through Self-organizing map. Superimposing firing of stripe cells oriented at intervals of 60 degrees. Hexagonal grid!
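The claim that superimposing stripe-cell firing at 60-degree intervals yields a hexagonal (sixfold-symmetric) coactivation pattern follows from simple trigonometry, and the symmetry can be verified numerically. The cosine stripe-field form, spatial frequency k, and sample points below are illustrative choices, not the model's exact tuning curves:

```python
import numpy as np

# Sum of three periodic stripe fields whose preferred directions differ by
# 60 degrees: the summed field is invariant under 60-degree rotation, i.e.
# it has the sixfold symmetry of a hexagonal grid-cell firing field.
k = 2 * np.pi                                   # illustrative spatial frequency
dirs = [np.deg2rad(a) for a in (0, 60, 120)]

def coactivation(x, y):
    """Summed stripe-cell firing at position (x, y)."""
    return sum(np.cos(k * (x * np.cos(d) + y * np.sin(d))) for d in dirs)

# Rotating any point by 60 degrees leaves the summed field unchanged, because
# the rotated direction set {-60, 0, 60} equals {0, 60, 120} modulo 180 and
# cosine is even.
pts = [(0.3, 0.1), (0.7, -0.4)]
c, s = np.cos(np.pi / 3), np.sin(np.pi / 3)
rotated = [(c * x - s * y, s * x + c * y) for x, y in pts]
```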
  • image p586fig16.15 Superimposing stripe cells separated by 45 degrees does not lead to learning of rectangular grids in GRIDSmap, but it does in an oscillatory interference model.
    || Why is a hexagonal grid favored? Superimposing firing of stripe cells oriented at intervals of 45 degrees. Rectangular grid. This and many other possibilities do not happen in vivo. They do happen in the oscillatory interference model. How are they prevented in GRIDSmap?
  • image p587fig16.17 A finer analysis of the 2D trigonometry of spatial navigation showed that both the frequency and amplitude of coactivations by stripe cells determine the learning of hexagonal grid fields.
    || A refined analysis: SOM amplifies most frequent and energetic coactivations (Pilly, Grossberg 2012). [linear track, 2D environment]. (left) Stripe fields separated by 90°. 25 coactivations by 2 inputs. (right) Stripe fields separated by 60°. 23 coactivations by 3 inputs.
  • image p588fig16.18 Simulations of coordinated learning of grid cell receptive fields (second row) and unimodal place cell receptive fields (third row) by the hierarchy of SOMs in the GridPlaceMap model. Note the exquisite regularity of the hexagonal grid cell firing fields.
    || [stripe, grid, place] cells vs [spikes on trajectory, unsmoothed rate map, smoothed rate map].
  • image p605fig16.39 Data showing effects of medial septum (MS) inactivation on grid cells and network theta oscillations in medial entorhinal cortex (MEC). (A) Examples of disruption in the spatial expression of the hexagonal grid structure for two grid cells (Brandon etal 2011). (B) Temporal reduction in the power and frequency of network theta oscillations (Koenig etal 2011). (C) Temporary reduction in the gridness score, mean firing rate, and spatial stability of grid cells (Koenig etal 2011).
    || Disruptive effects of Medial Septum inactivation in Medial Entorhinal Cortex (Brandon etal 2011; Koenig etal 2011). (A) Rate map [rate map, spatial autocorrelations, trajectory] vs [baseline, sub-sampled, medial septum inactivation, 3-6 hour recovery, 24 hour recovery], [rate map (Hz- m, p), spatial autocorrelations (gridness)][ 1.2, 7.2, 1.1; 0.25, 1.7, 0.6; 0.25, 2.5, -0.53; 0.7, 5.1, 0.55; 1.0, 5.3, 1.3; 2.1, 15, 0.19; 1.7, 12, 0.71; 1.7, 3.2, -0.22; 1.8, 9.1, 0.68; 2.5, 13, 0.46]. (B) [normalized power at 7-9 Hz, frequency (Hz)] vs 5-minute periods. (C) [mean gridness score (+-SEM), mean firing rate (% of baseline), mean correlation coeff (+-SEM)] vs 10-minute periods.
  • p353 Chapter 10 Laminar computing by cerebral cortex - Towards a unified theory of biological and artificial intelligence
  • image p011fig01.07 The choice of signal function f determines how an initial activity pattern will be transformed and stored in short-term memory (STM). Among [same, slower, faster]-than-linear signal functions, only the last one can suppress noise. It does so as it chooses the population that receives the largest input for storage, while suppressing the activities of all other populations, thereby giving rise to a winner-take-all choice.
    || initial pattern (xi(0) vs i):
    f | Xi(∞) = xi(∞)/sum[j: xj(∞)] | x(∞)
    linear | perfect storage of any pattern | amplifies noise (or no storage)
    slower-than-linear | saturates | amplifies noise
    faster-than-linear | chooses max [winner-take-all, Bayesian], categorical perception | suppresses noise, [normalizes, quantizes] total activity, finite state machine
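The table above can be illustrated with a discrete-time caricature of the recurrent competitive network: apply the signal function, then renormalize total activity. This is a sketch under that simplifying assumption, not the book's shunting differential equations; `stm_store` and the example pattern are illustrative:

```python
import numpy as np

def stm_store(x0, f, steps=100):
    """Iterate signal transformation f followed by renormalization (total
    activity conservation), a discrete-time stand-in for STM storage dynamics."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = f(x)
        x = x / x.sum()
    return x

x0 = [0.1, 0.3, 0.4, 0.2]
linear = stm_store(x0, lambda x: x)       # linear: stores the pattern as-is
wta = stm_store(x0, lambda x: x**2)       # faster-than-linear: winner-take-all
# Repeated squaring amplifies activity ratios, so all normalized activity
# concentrates on the population with the largest initial input.
```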
  • image p357fig10.04 Laminar Computing achieves its properties by computing in a new way that synthesizes the best properties of feedforward and feedback interactions, analog and digital computations, and preattentive and attentive learning. The property of analog coherence enables coherent groupings and decisions to form without losing sensitivity to the amount of evidence that supports them.
    || Laminar Computing: a new way to compute. 1. Feedforward and feedback: a) Fast feedforward processing when data are unambiguous (eg Thorpe etal), b) slower feedback chooses among ambiguous alternatives [self-normalizing property, real-time probability theory], c) A self-organizing system that trades certainty against speed: Goes beyond Bayesian models! 2. Analog and Digital: Analog Coherence combines the stability of digital with the sensitivity of analog. 3. Preattentive and Attentive Learning: Reconciles the differences of (eg) Helmholtz and Kanizsa, "A preattentive grouping is its own 'attentional' prime"
  • image p230fig05.38 The Synchronous Matching ART, or SMART, model includes spiking neurons in a laminar cortical hierarchy. I developed SMART with my PhD student Massimiliano Versace. By unlumping LAMINART to include spiking neurons, finer details of neurodynamics, such as the existence of faster gamma oscillations during good enough matches, and slower beta oscillations during bad enough mismatches, could be shown as emergent properties of network interactions.
    || Second order thalamus -> specific thalamic nucleus -> thalamic reticular nucleus -> neocortical laminar circuit [6II, 6I, 5, 2/3, 1] -> Higher order cortex. Similar for First order thalamus -> First order cortex, with interconnection to Second order, nonspecific thalamic nucleus
  • image p232fig05.40 Computer simulation of how the SMART model generates (a) gamma oscillations if a good enough match occurs, or (c) beta oscillations if a bad enough match occurs. See the text for details.
    || Brain oscillations during match/mismatch, data, simulation. (a) TD corticothalamic feedback increases synchrony (Sillito etal 1994) (b) Match increases γ oscillations (c) Mismatch increases θ,β oscillations
  • image p233fig05.42 Mismatch-induced beta oscillations have been reported in at least three parts of the brain: V1, V4, and hippocampus. Although there may be other reasons for beta oscillations in the brain, those that are caused by a mismatch should be studied in concert with the gamma oscillations that occur during a good enough match. See the text for details.
    || Is there evidence for the [gamma, beta] prediction? Yes, in at least three parts of the brain (Buffalo EA, Fries P, Landman R, Buschman TJ, Desimone R 2011, PNAS 108, 11262-11267). Does this difference in average oscillation frequencies in the superficial and deep layers reflect layer 4 reset? Superficial recording γ (gamma), Deep recording β (beta) (Berke etal 2008, hippocampus; Buschman and Miller 2009, FEF)
  • image p296fig08.07 When two flashes turn on and off out of phase with the correct range of interstimulus intervals, and not too far from one another, then either beta motion or phi motion is perceived.
    || Beta and Phi motion percepts. Beta motion: percept of continuous motion of a well-defined object across empty intervening space. Phi motion: sense of "pure" motion without a concurrent percept of a moving object. (Exner 1875) http://www.yorku.ca/eye/balls.htm
  • image p297fig08.08 When a second flash is more intense than the first flash, then apparent motion may occur from the second to the first flash.
    || Delta motion: motions from the second to the first flash. Data: (Kolers 1972; Korte 1915). Simulation: (Grossberg, Rudd 1992). This occurs when the luminance or contrast of the second flash is large compared to that of the first flash. Sustained and transient cells obey shunting dynamics whose averaging rates speed up with output intensity. The first flash to wane is the one that will be the source of the G-wave.
  • image p340fig09.07 Log polar remapping from the retina to cortical area V1 and beyond converts expansion, translation, and spiral flows on the retina into parallel flows, with different orientations, on the cortical map.
    || Log polar remapping of optic flow. retina -> cortex. Any combination of expansion and circular motion centered on the fovea maps to cortex as a single direction. Retinal Cartesian coordinates (x,y) map to cortical polar coordinates (r,theta). This makes it easy to compute directional receptive fields in the cortex!
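The remapping in this caption is easy to verify directly: under (x, y) -> (log r, theta), a radial expansion about the fovea becomes the same cortical displacement at every point. A minimal sketch (the expansion rate k and sample points are arbitrary illustrations):

```python
import numpy as np

def logpolar(x, y):
    """Map retinal Cartesian (x, y) to cortical (log r, theta) coordinates."""
    return np.log(np.hypot(x, y)), np.arctan2(y, x)

# Expansion flow centered on the fovea: each point moves radially,
# (x, y) -> (x, y) * exp(k*dt).  In cortex this is the SAME displacement
# (k*dt, 0) at every point, i.e. a parallel flow of one orientation.
k, dt = 0.3, 1.0
for x, y in [(0.5, 0.0), (0.0, 1.0), (-2.0, 2.0)]:
    u0, v0 = logpolar(x, y)
    u1, v1 = logpolar(x * np.exp(k * dt), y * np.exp(k * dt))
    assert abs((u1 - u0) - k * dt) < 1e-9 and abs(v1 - v0) < 1e-9
```

A pure circular motion about the fovea leaves log r fixed and shifts theta uniformly, so it maps to a parallel flow in the orthogonal cortical direction.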
  • image p423fig12.20 The Spatial Pitch Network, or SPINET, model shows how a log polar spatial representation of the sound frequency spectrum can be derived from auditory signals occurring in time. The spatial representation allows the ARTSTREAM model to compute spatially distinct auditory streams.
    || SPINET model (Spatial Pitch Network) (Cohen, Grossberg, Wyse 1995). 1. input sound 2. Gamma-tone filter bank 3. Short-term average energy spectrum 4. MAP transfer function 5. On-center off-surround and rectification 6. Harmonic weighting 7. Harmonic summation and competition -> PITCH
  • image p598fig16.34 The spiking GridPlaceMap model generates theta-modulated place and grid cell firing, unlike the rate-based model.
    || Theta-modulated cells in spiking model. [place, grid] cell vs [membrane potential (mV vs time), frequency vs inter-spike intervals (s), power spectra (normalized power vs frequency (Hz))].
  • image p605fig16.39 Data showing effects of medial septum (MS) inactivation on grid cells and network theta oscillations in medial entorhinal cortex (MEC). (A) Examples of disruption in the spatial expression of the hexagonal grid structure for two grid cells (Brandon etal 2011). (B) Temporal reduction in the power and frequency of network theta oscillations (Koenig etal 2011). (C) Temporary reduction in the gridness score, mean firing rate, and spatial stability of grid cells (Koenig etal 2011).
    || Disruptive effects of Medial Septum inactivation in Medial Entorhinal Cortex (Brandon etal 2011; Koenig etal 2011). (A) Rate map [rate map, spatial autocorrelations, trajectory] vs [baseline, sub-sampled, medial septum inactivation, 3-6 hour recovery, 24 hour recovery], [rate map (Hz- m, p), spatial autocorrelations (gridness)][ 1.2, 7.2, 1.1; 0.25, 1.7, 0.6; 0.25, 2.5, -0.53; 0.7, 5.1, 0.55; 1.0, 5.3, 1.3; 2.1, 15, 0.19; 1.7, 12, 0.71; 1.7, 3.2, -0.22; 1.8, 9.1, 0.68; 2.5, 13, 0.46]. (B) [normalized power at 7-9 Hz, frequency (Hz)] vs 5-minute periods. (C) [mean gridness score (+-SEM), mean firing rate (% of baseline), mean correlation coeff (+-SEM)] vs 10-minute periods.
  • Grossberg 2021 p229c2h0.60 SMART computer simulations demonstrate that a good enough match of a top-down expectation with a bottom-up feature pattern generates an attentive resonance during which the spikes of active cells synchronize in the gamma frequency range of 20-70 Hz (Figure 5.40). Many labs have reported a link between attention and gamma oscillations in the brain, including two articles published in 2001, one from the laboratory of Robert Desimone when he was at the National Institute of Mental Health in Bethesda (Fries, Reynolds, Rorie, Desimone 2001), and the other from the laboratory of Wolf Singer in Frankfurt (Engel, Fries, Singer 2001). You'll note that Pascal Fries participated in both studies, and is an acknowledged leader in neurobiological studies of gamma oscillations; eg (Fries 2009). ..."
  • p618 Chapter 17 A universal developmental code - Mental measurements embody universal laws of cell biology and physics
  • image p025fig01.16 (left panel) The main processing stages of the Cognitive-Emotional-Motor (CogEM) model have anatomical interpretations in terms of sensory cortex, amygdala, and prefrontal cortex. Chapter 13 will describe in greater detail how CS cues activate invariant object categories in the sensory cortex, value categories in the amygdala, and object-value categories in the prefrontal cortex, notably the orbitofrontal cortex. The amygdala is also modulated by internal drive inputs like hunger and satiety. (right panel) Anatomical data support this circuit, as do many neurophysiological data.
    || drive -> amygdala -> prefrontal cortex <-> sensory cortex -> amygdala. [visual, somatosensory, auditory, gustatory, olfactory] cortex -> [amygdala, Orbital Prefrontal Cortex]. amygdala -> Lateral Prefrontal Cortex
  • image p058fig02.04 Serial learning paradigm: Learning the temporal order of events by practicing them in the order that they occur in time.
    || Learning a global arrow in time. How do we learn to encode the temporal order of events in LTM? serial learning. [w=intra, W=inter]trial interval. "... data about serial verbal learning (Figure 2.4) seemed to suggest that events can go "backwards in time". ..."
  • image p059fig02.05 Bowed serial position curve. This kind of data emphasizes the importance of modelling how our brains give rise to our minds using nonlinear systems of differential equations.
    || Effects of [inter, intra]trial intervals (Hovland 1938). # of errors vs list position. [w (sec), W (sec)] = (2 6) (4 6) (2 126) (4 126). Nonoccurrence of future items reduces the number of errors in response to past items. These data require a real-time theory for their explanation! that is, DIFFERENTIAL equations.
  • image p059fig02.06 The bowed serial position curve illustrates the sense in which "events can go backwards in time" during serial learning.
    || Bow due to backward effect in time. If the past influenced the future, but not conversely: # of errors vs list position; Data (Hovland, Hull, Underwood, etc).
  • image p071fig02.16 To solve the noise-saturation dilemma, individual neurons in a network that is receiving distributed spatial patterns of inputs need to remain sensitive to the ratio of the input to them divided by all the inputs in that spatial pattern. Although the inputs are delivered to a finite number of neurons, the input and activity patterns are drawn continuously across the cells for simplicity.
    || Noise-Saturation Dilemma. [Ii, xi] vs t. [Input, Activity] pattern [small -> noise, large -> saturation]. Problem: remain sensitive to input RATIOS θi = Ii / sum[j: Ij] as total input I = sum[j: Ij] -> ∞. Many kinds of data exhibit sensitivity to ratios of inputs.
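The ratio sensitivity named here follows from the equilibrium of a feedforward shunting on-center off-surround network: dx_i/dt = -A x_i + (B - x_i) I_i - x_i Σ_{j≠i} I_j solves to x_i = B I_i / (A + Σ_j I_j), so activities track θi and never saturate. A minimal sketch, with illustrative parameters A = B = 1 and a hypothetical input pattern:

```python
import numpy as np

def steady_state(I, A=1.0, B=1.0):
    """Equilibrium of dx_i/dt = -A*x_i + (B - x_i)*I_i - x_i*sum_{j!=i} I_j,
    which solves to x_i = B * I_i / (A + sum_j I_j): activities stay
    proportional to the input ratios theta_i and never reach the ceiling B."""
    I = np.asarray(I, dtype=float)
    return B * I / (A + I.sum())

theta = np.array([0.1, 0.2, 0.3, 0.4])   # fixed input ratios
for total in (1.0, 100.0, 1e6):          # total input I -> "infinity"
    x = steady_state(theta * total)
    # x / x.sum() equals theta at every intensity: ratios are preserved
```

At small total input the activities are weak (noise regime), at huge total input they approach B·θi, but the pattern of ratios is identical at every intensity.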
  • image p073fig02.19 Computing with cells: infinity does not exist in biology!
    || Computing in a bounded activity domain, Gedanken experiment (Grossberg 1970). Vm sub-areas [xm, B - xm], I (all m), m = [1, i, B].
    B: excitable sites
    xi(t): excited sites (activity, potential)
    B - xi(t): unexcited sites
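"Infinity does not exist in biology" is built into the mass-action law this gedanken experiment motivates: dx/dt = -A x + (B - x) I, where only the B - x unexcited sites can be turned on, so x stays in [0, B] no matter how large I gets. A minimal sketch (parameters illustrative; each step uses the exact solution for constant input, so even absurdly large inputs integrate stably):

```python
import numpy as np

def shunt(I_of_t, A=1.0, B=1.0, dt=0.001, steps=5000):
    """Integrate dx/dt = -A*x + (B - x)*I over B excitable sites, of which
    x(t) are excited and B - x(t) unexcited.  I is held constant over each
    step and the exact per-step solution is used, so x relaxes toward the
    equilibrium B*I/(A+I) and can approach but never exceed B."""
    x, xs = 0.0, []
    for n in range(steps):
        I = I_of_t(n * dt)
        x_eq = B * I / (A + I)                        # equilibrium, always < B
        x = x_eq + (x - x_eq) * np.exp(-(A + I) * dt)
        xs.append(x)
    return np.array(xs)

huge = shunt(lambda t: 1e9)   # enormous constant input: x still bounded by B = 1
```

With I = 1e9 the activity pins just below B = 1; with I = 1 it settles at B·I/(A+I) = 0.5, illustrating the saturating mass-action bound.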
  • image p082fig02.37 My models begin with behavioral data, since brains are designed to achieve behavioral success. The text explains how models evolve in stages, through a process of successive refinements, or unlumpings. These unlumpings together carry out a kind of conceptual evolution, leading to models that can explain and predict ever larger psychological and neurobiological databases.
    || Modelling method and cycle.
    Behavioral data -(art of modeling)-> Design principles <- Neural data <-(brain predictions)- Mathematical model and analysis -(behavioral predictions)-> Behavioural data
    Operationalizes "proper level of abstraction"
    Operationalizes that you cannot "derive a brain" in one step.
  • image p085fig02.38 Our models have been used in many large-scale applications to engineering and technology. Linking brain to behavior explains how brain mechanisms give rise to psychological functions, and do so autonomously. The combination of mechanism, function, and autonomy helps to explain their value in helping to solve outstanding problems in technology.
    || Modeling method and cycle.
    Behavioral data -(art of modeling)-> Design principles <- Neural data <-(brain predictions)- Mathematical model and analysis -(behavioral predictions)-> Behavioural data
    Technology: Mathematical model and analysis <-> Technological applications
    At every stage, spin off new model designs and mechanisms to technologists who need autonomous intelligent applications.
  • image p134fig04.14 The kinds of displays that Michael Paradiso and Ken Nakayama used to catch filling-in "in the act" and which Karl Arrington then simulated using the Grossberg and Todorovic 1988 model.
    || Experiments on filling-in. Catching "filling-in" in the act (Paradiso, Nakayama 1991). (Arrington 1994 Vision Research 34, 3371-3387) simulated these data using the model of Grossberg and Todorovic 1988.
  • image p145fig04.23 If end gaps were not closed by end cuts, then color would flow out of every line end!
    || A perceptual disaster in the feature contour system. feature contour, line boundary. input -> [boundary, surface]. boundary -> surface. Color would flow out of every line end! as it does during neon color spreading.
  • image p151fig04.29 Experimental evidence of bipole cells in cortical area V2 was reported by Von der Heydt, Peterhans, and Baumgartner (1984).
    || Bipoles: first neurophysiological evidence (V2) (von der Heydt, Peterhans, Baumgartner 1984; Peterhans, von der Heydt 1988). (Grossberg 1984) prediction.
    Ordering: stimulus (S), probe location *, response of cells in V2?
    ...(S)*... -> YES
    ...*...(S) -> NO
    (S)...*... -> NO
    (S)...*...(S) -> YES
    (S)...*... (more contrast) -> NO
    (S)...*.....(S) -> YES
    Evidence for receptive field.
  • image p151fig04.30 Anatomical evidence for long-range horizontal connections has also been reported, as illustrated by the example above from (Bosking etal 1997).
    || Anatomy: horizontal connections (V1) (Bosking etal 1997). tree shrew. [10, 20]*[20, 10, 0, -10, -20] (degrees).
  • image p152fig04.31 The predicted bipole cell receptive field (upper left corner) has been supported by both neurophysiological data and psychophysical data, and used in various forms by many modelers. See the text for details.
    || Bipoles through the ages. (Grossberg 1984; Grossberg, Mingolla 1985). (Field, Hayes, Hess 1993) "association field". (Heitger, von der Heydt 1993). (Williams, Jacobs 1997). cf. "relatability" geometric constraints on which contours get to group (Kellman & Shipley 1991). Also "tensor voting" (Ullman, Zucker, Mumford, Guy, Medioni, ...).
  • image p159fig04.36 Graffiti art by Banksy exploits properties of amodal boundary completion and spatial impenetrability.
    || p159c1h0.75 perceptual psychologist Nava Rubin "... When the wall is smooth, Banksy leaves the regions previously covered by stencil unpainted, relying on observers' perception to segregate figural regions from the (identically colored) background. But when the wall is patterned with large-scale luminance edges - eg due to bricks - Banksy takes the extra time to fill in unpainted figural regions with another color (Rubin 2015). ..."
  • image p162fig04.38 How long-range cooperation among bipole cells and short-range competition by hypercomplex cells work together to generate the inverted-U in boundary strength that is found in the data of Figure 4.37 (right panel).
    || Cooperation and competition during grouping.
    few lines: wide spacing; inputs outside spatial range of competition; more inputs cause higher bipole activity
    more lines: narrower spacing; slightly weakens net input to bipoles from each inducer
    increasing line density: causes inhibition to reduce net total input to bipoles
  • image p163fig04.39 A schematic of the LAMINART model that explains key aspects of laminar visual cortical anatomy and dynamics. LGN -> V1 [6, 4, 2/3] -> V2 [6, 4, 2/3]
    || p163c1h0.6 "... The first article about laminar computing ... proposed how the laminar cortical model could process 2D pictures using bottom-up filtering and horizontal bipole grouping interactions (Grossberg, Mingolla, Ross 1997). In 1999, I was able to extend the model to also include top-down circuits for expectation and attention (Grossberg 1999) (right panel). Such a synthesis of laminar bottom-up, horizontal, and top-down circuits is characteristic of the cerebral cortex (left panel). I called it LAMINART because it began to show how properties of Adaptive Resonance Theory, or ART, notably the ART prediction about how top-down expectations and attention work, are realized by identical cortical cells and circuits. You can immediately see from the schematic laminar circuit diagram ... (right panel) that circuits in V2 seem to repeat circuits in V1, albeit with a larger spatial scale, despite the fact that V1 and V2 carry out different functions. How this anatomical similarity can coexist with functional diversity will be clarified in subsequent sections and chapters. It enables different kinds of biological intelligence to communicate seamlessly while carrying out their different psychological functions. ..."
  • image p165fig04.41 The Kanizsa-Minguzzi ring. See the text for details.
    || p165c1h0.6 "... (left panel), the annulus is divided by two line segments into annular sectors of unequal area. Careful viewing shows that the smaller sector looks a little brighter than the larger one. (Kanizsa, Minguzzi 1986) noted that "this unexpected effect is not easily explained. In fact, it cannot be accounted for by any simple psychological mechanism such as lateral inhibition or frequency filtering. Furthermore, it does not seem obvious to invoke organizational factors, like figural belongingness or figure-ground articulation."". p165c2h0.35 "... (Grossberg, Todorovic 1988). Our main claim is that the two radial lines play two roles, one in the formation of boundaries with which to contain the filling-in process, and the other as a source of feature contour signals that are filled-in within the annular regions to create a surface brightness percept. ..."
  • image p232fig05.40 Computer simulation of how the SMART model generates (a) gamma oscillations if a good enough match occurs, or (c) beta oscillations if a bad enough match occurs. See the text for details.
    || Brain oscillations during match/mismatch, data, simulation. (a) TD corticothalamic feedback increases synchrony (Sillito etal 1994) (b) Match increases γ oscillations (c) Mismatch increases θ,β oscillations
  • image p232fig05.41 (a)-(c). The sequence of interlaminar events that SMART predicts during a mismatch reset. (d) Some of the compatible neurophysiological data.
    || Mismatch causes layer 5 dendritic spikes that trigger reset. (a) Arousal causes increase in nonspecific thalamic nuclei firing rate and layer 5 dendritic and later somatic spikes (Larkum and Zhu 2002, Williams and Stuart 1999) (b) Layer 5 spikes reach layer 4 via layer 6I and inhibitory neurons (Lund and Boothe 1975, Gilbert and Wiesel 1979) (c) habituative neurotransmitters in layer 6I shift the balance of active cells in layer 4 (Grossberg 1972, 1976) (d) Dendritic stimulation fires layer 5 (Larkum and Zhu 2002) stimulation of apical dendrites of nonspecific thalamus
  • image p252fig06.01 A surface-shroud resonance begins to form when the surface representations of objects bid for spatial attention. In addition to these topographic excitatory inputs, there is long-range inhibition of the spatial attention cells that determines which inputs will attract spatial attention.
    || Bottom-up spatial attention competition. [more, less] luminous perceptual surfaces -> competition -> spatial attention
  • image p253fig06.02 After bottom-up surface inputs activate spatial attentional cells, they send top-down topographic excitatory signals back to the surface representations. This recurrent shunting on-center off-surround network contrast enhances larger attentional activities while approximately normalizing the total spatial attentional activity. A surface-shroud resonance hereby forms that selects an attentional shroud, enhances the perceived contrast of the attended surface (light blue region), and maintains spatial attention on it.
    || Surface-shroud resonance. perceptual surfaces -> competition -> spatial attention. (Carrasco, Penpeci-Talgar, and Eckstein 2000, Reynolds and Desimone 2003)
  • image p257fig06.05 A curve tracing task with monkeys was used by Roelfsema, Lamme, and Spekreijse in 1998 to demonstrate how spatial attention can flow along object boundaries. See the text for details.
    || Attention flows along curves: Roelfsema etal 1998: Macaque V1. fixation (300ms) -> stimulus (600ms RF - target curve, distractor) -> saccade. Crossed-curve condition: attention flows across junction between smoothly connected curve segments, Gestalt good continuation
  • image p258fig06.06 Neurophysiological data and simulation of how attention can flow along a curve. See the text for details.
    || Simulation of Roelfsema etal 1998, data & simulation. Attention directed only to far end of curve. Propagates along active layer 2/3 grouping to distal neurons.
  • image p265fig06.13 The basal ganglia gate perceptual, cognitive, emotional, and more processes through parallel loops.
    || [motor, oculomotor, dorsolateral, ventral-orbital, anterior cingulate] vs. [Thalamus, pallidum-substantia nigra, Striatum, Cortex]
  • image p267fig06.14 Feedback from object surfaces to object boundaries uses surface contours. This feedback assures complementary consistency and enables figure-ground separation. A corollary discharge of the surface contours can be used to compute salient object feature positions.
    || Perceptual consistency and figure-ground separation.
  • image p271fig06.17 Persistent activity in IT cells is just what is needed to enable view-invariant object category learning by ARTSCAN to be generalized to [view, position, size]-invariant category learning by positional ARTSCAN, or pARTSCAN. See the text for details.
    || Persistent activity in IT. Physiological data show that persistent activity exists in IT (Fuster and Jervey 1981, Miyashita and Chang 1988, Tomita etal 1999). Adapted from (Tomita etal 1999 Nature)
  • image p273fig06.20 pARTSCAN can simulate the IT cell recoding that Li and DiCarlo reported in their swapping experiments because the swapping procedure happens without causing a parietal reset burst to occur. Thus the originally activated invariant category remains activated and can get associated with the swapped object features.
    || Simulation of Li and DiCarlo swapping data. data (Li and DiCarlo 2008), model (Cao, Grossberg, Markowitz 2011). normalized response vs. exposure (swaps and/or hours)
  • image p274fig06.21 pARTSCAN can also simulate the trade-off in IT cell responses between position invariance and selectivity that was reported by Zoccolan etal 2007. This trade-off limits the amount of position invariance that can be learned by a cortical area like V1 that is constrained by the cortical magnification factor.
    || Trade-off in IT cell response properties. Inferotemporal cortex cells with greater position invariance respond less selectively to natural objects. invariance-tolerance, selectivity-sparseness. data (Zoccolan etal 2007) model (Grossberg, Markowitz, Cao 2011). position tolerance (PT, degrees) vs sparseness (S)
  • image p275fig06.23 Data from (Akrami etal 2009) and our simulation of it. See the text for details.
    || IT responses to image morphs. data vs model
  • image p284fig07.02 Psychophysical data (top row) and simulation (bottom row) of how persistence decreases with flash illuminance and duration.
    || Persistence data and simulations. (Francis, Grossberg, Mingolla 1994 Vision Research, 34, 1089-1104). Persistence decreases with flash illuminance and duration (Bowen, Pola, Matin 1974; Breitmeyer 1984; Coltheart 1980). Higher luminance or longer duration habituates the gated dipole ON channel more. Causes larger and faster rebound in the OFF channel to shut persisting ON activity off.
  • image p285fig07.03 Persistence decreases with flash illuminance and duration due to the way in which habituative transmitters regulate the strength of the rebound in response to offset of a stimulating input, and how this rebound inhibits previously activated bipole cells.
    || Persistence data and simulations (Francis, Grossberg, Mingolla 1994 Vision Research, 34, 1089-1104). Persistence decreases with flash illuminance and duration. Horizontal input excites a horizontal bipole cell, which supports persistence. Offset of the horizontal input causes a rebound of activity in the vertical pathway, which inhibits the horizontal bipole cell, thereby terminating persistence.
  • image p286fig07.04 Illusory contours persist longer than real contours because real contours have more inducers whose rebound at contour offset can cause faster boundary reset. Illusory contours also take longer to form than real contours, which explains the increasing portion of the curve.
    || Persistence data and simulations (Meyer, Ming 1988; Reynolds 1981). Increasing portion of curve is due to formation time of the illusory contour. Longer persistence is due to fewer bottom-up inducers of an illusory contour that has the same length as a real contour: only illuminance-derived edges generate reset signals. When bottom-up inducers are inhibited by OFF cell rebounds, their offset gradually propagates to the center of the illusory contour.
  • image p286fig07.05 This figure shows the propagation through time of illusory contour offset from the rebounded cells that got direct inputs to the center of the contour.
    || Persistence data and simulations. Illusory contours persist longer than real contours (Meyer, Ming 1988; Reynolds 1981). When bottom-up inducers are inhibited by OFF cell rebounds, their offset gradually propagates to the center of the illusory contour.
  • image p287fig07.06 The relative durations of persistence that occur due to an adaptation stimulus of the same or orthogonal orientation follow from the properties of the habituative gated dipoles that are embedded in the boundary completion system.
    || Persistence data and simulations. Change in persistence depends on whether adaptation stimulus has same or orthogonal orientation as test grating (Meyer, Lawson, Cohen 1975). If adaptation stimulus and test stimulus have the same orientation, they cause cumulative habituation, which causes a stronger reset signal, hence less persistence. When they are orthogonal, the competition in the ON channel is less, hence more persistence.
  • image p287fig07.07 Persistence increases with distance between a target and a masking stimulus due to weakening of the spatial competition in the first competitive stage of hypercomplex cells.
    || Persistence data and simulations. Persistence increases with distance between a target and a masking stimulus (Farrell, Pavel, Sperling 1990). There is less spatial competition from the masker to the target when they are more distant, hence the target is more persistent.
  • image p297fig08.08 When a second flash is more intense than the first flash, then apparent motion may occur from the second to the first flash.
    || Delta motion: motions from the second to the first flash. Data: (Kolers 1972; Korte 1915). Simulation: (Grossberg, Rudd 1992). This occurs when the luminance or contrast of the second flash is large compared to that of the first flash. Sustained and transient cells obey shunting dynamics whose averaging rates speed up with output intensity. The first flash to wane is the one that will be the source of the G-wave.
  • image p297fig08.09 Simulation of motion in opposite directions that is perceived when two later flashes occur on either side of the first flash.
    || Split motion. Data: (H.R. Silva 1926), Simulation: (Grossberg, Rudd 1992)
  • image p298fig08.10 Simulation of the motion speed-up that is perceived when flash duration decreases.
    || "The less you see it, the faster it moves". Data: (Giaschi, Anstis 1989), Simulation: (Grossberg, Rudd 1992). ISI = 0, flash duration decreases; SOA = constant, flash duration decreases /home/bill/web/Neural nets/Grossberg/Grossbergs [core, fun, strange] concepts.HtmWeb.html:1545:
  • image p304fig08.22 Data (top image) and simulation (bottom image) of Korte's laws. The laws raise the question of how ISIs in the hundreds of milliseconds can cause apparent motion.
    || Korte's Laws, Data: (Korte 1915) Simulation: (Francis, Grossberg 1996)
  • image p311fig08.30 The data of (Castet etal 1993) in the left image was simulated in the right image by the 3D FORMOTION model that I developed with my PhD student Jonathan Chey. These data provide insight into how feature tracking signals propagate from the ends of a line to its interior, where they capture consistent motion directional signals and inhibit inconsistent ones.
    || Solving the aperture problem. A key design problem: How do amplified feature tracking signals propagate within depth to select the correct motion directions at ambiguous positions? This propagation from feature tracking signals to the line interior determines perceived speed in Castet etal data, which is why speed depends on line tilt and length. Data: (Castet etal 1993), Simulation: (Chey etal 1997)
  • image p319fig08.38 The neurophysiological data from MT (left image) confirms the prediction embodied in the simulation of MT (right image) concerning the fact that it takes a long time for MT to compute an object's real direction of motion.
    || Solving the aperture problem takes time. MT Data (Pack, Born 2001), MT simulation (Chey, Grossberg, Mingolla 1997)
  • image p333fig08.58 Neurophysiological data (left image) and simulation (right image) of LIP data during correct trials on the RT task. See the text for details.
    || LIP responses during RT task correct trials (Roitman, Shadlen 2002). More coherence in favored direction causes faster cell activation. More coherence in opposite direction causes faster cell inhibition. Coherence stops playing a role in the final stages of LIP firing.
  • image p334fig08.59 Neurophysiological data (left column) and simulations (right column) of LIP responses for the FD task during both [correct, error] trials. See the text for details.
    || LIP responses for the FD task during both [correct, error] trials (Shadlen, Newsome 2001). LIP encodes the perceptual decision regardless of the true direction of the dots. Predictiveness of LIP responses on error trials decreases with increasing coherence.
  • image p334fig08.60 Behavioral data (left image) and simulation (right image) about accuracy in both the RT and FD tasks. See text for details
    || Behavioral data: % correct vs % coherence (Mazurek etal 2003; Roitman, Shadlen 2002). More coherence in the motion causes more accurate decisions. RT task accuracy at weaker coherence levels is slightly better than FD task accuracy.
  • image p335fig08.61 Behavioral data (left image) and simulation (right image) about speed in correct and error trials of the RT task. See text for details.
    || Behavioral data: speed, correct and error trials (RT task) (Roitman, Shadlen 2002). More coherence in the motion causes faster reaction time.
  • image p335fig08.62 More remarkable simulation fits (right column) to LIP neurophysiology data (left column) about where and when to move the eyes.
    || LIP encodes not only where, but also when, to move the eyes. ...No Bayes (Roitman, Shadlen 2002). Firing rate (sp/s) vs time (ms). Slope of firing rate (sp/s^2) vs % correct.
  • image p342fig09.11 Psychophysical data (left panel) and computer simulation (right column) of the importance of efference copy in real movements. See the text for details.
    || Heading: move to wall and fixate stationary object (adapted from Warren, Hannon 1990). Inaccurate for simulated eye rotation, accurate for real eye rotation, need confirmation by efference copy!
  • image p343fig09.13 When one scans the three different types of pears in the left image, as illustrated by the jagged blue curve with red movement end positions, and transforms the resulting retinal images via the cortical magnification factor, or log polar mapping, the result is the series of images in the right column. How do our brains figure out from such confusing data which views belong to which pear?
    || View-invariant object learning and recognition. Three pears: Anjou, Bartlett, Comice. Which is the Bartlett pear? During unsupervised scanning and learning about the world, no one tells the brain what views belong to which objects while it learns view-invariant object categories. Cortical magnification in V1.
  • image p349fig09.20 Using virtual reality displays (left image), (Fajen, Warren 2003) collected data (right two images) about how observers avoid obstacles (open circular disks) as a function of their distance and angular position as they navigate towards a fixed goal (x). These data illustrate how goals act as attractors while obstacles act as repellers.
    || Steering from optic flow (Fajen, Warren 2003). goals are attractors, obstacles are repellers. Damped spring model explains human steering data.
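The damped-spring idea can be made concrete with a few lines of code. This is a simplified sketch of a Fajen-Warren style steering law: the goal attracts heading like a damped spring, while the obstacle repels it with a strength that decays with angular offset and distance. The parameter values here are illustrative placeholders, not the published fits.

```python
import math

def steer_step(phi, phi_dot, goal_ang, obst_ang, obst_dist,
               b=3.25, kg=7.5, ko=198.0, c1=6.5, c2=0.8, dt=0.01):
    """One Euler step of a simplified goal-attractor / obstacle-repeller
    steering law. phi is the current heading angle (rad); b damps turning,
    kg pulls heading toward the goal, and the obstacle term pushes heading
    away, weakening exponentially with angular offset and distance."""
    phi_ddot = (-b * phi_dot
                - kg * (phi - goal_ang)
                + ko * (phi - obst_ang)
                  * math.exp(-c1 * abs(phi - obst_ang))
                  * math.exp(-c2 * obst_dist))
    phi_dot += dt * phi_ddot
    phi += dt * phi_dot
    return phi, phi_dot
```

With a distant obstacle the goal term dominates and heading settles onto the goal direction; moving the obstacle closer bends the trajectory away before it relaxes back, which is the qualitative behavior the data show.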
  • image p352fig09.26 The final stage of the model computes a beautiful expansion optic flow that permits an easy estimate of the heading direction, with an accuracy that matches that of human navigators.
    || The model generates accurate heading (Warren, Hannon 1990; Royden, Crowell, Banks 1994). Maximally active MSTd cell = heading estimate. Accuracy matches human data. Random dots [mean +-1.5°, worst +-3.8°], Random dots with rotation [accurate with rotations <1°/s, rotation increases, error decreases], OpenGL & Yosemite benchmark +-1.5°, Driving video +-3°.
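The "maximally active MSTd cell = heading estimate" readout can be sketched as template matching against expansion flow fields. This is a toy sketch of the final readout stage only, not the model's neural circuitry; the grid of candidate headings and the dot-product match score are illustrative assumptions.

```python
import numpy as np

def flow_field(heading, points):
    """Unit optic-flow vectors for pure observer translation: each vector
    points radially away from the focus of expansion at `heading`."""
    v = points - heading
    return v / (np.linalg.norm(v, axis=1, keepdims=True) + 1e-9)

def estimate_heading(flow, points, candidates):
    """MSTd-style readout: each candidate heading defines an expansion
    template; the best-matching (maximally activated) template wins."""
    best, best_score = None, -np.inf
    for h in candidates:
        template = flow_field(h, points)
        score = np.sum(template * flow)   # dot-product match to observed flow
        if score > best_score:
            best, best_score = h, score
    return best
```

The true heading's template matches the observed flow exactly, so it attains the maximal score; the spacing of the candidate grid then bounds the heading error, analogous to the degree-scale accuracy quoted above.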
  • image p356fig10.03 Laminar computing achieves at least three basic properties of visual processing that have analogs in all biologically intelligent behaviors. These properties may be found in all cortical circuits in specialized form.
    || What does Laminar Computing achieve? 1. Self-stabilizing development and learning; 2. Seamless fusion of a) pre-attentive automatic bottom-up processing, b) attentive task-selective top-down processing; 3. Analog coherence: Solution of Binding Problem for perceptual grouping without loss of analog sensitivity. Even the earliest visual cortical stages carry out active adaptive information processing: [learn, group, attention]ing
  • image p357fig10.04 Laminar Computing achieves its properties by computing in a new way that synthesizes the best properties of feedforward and feedback interactions, analog and digital computations, and preattentive and attentive learning. The property of analog coherence enables coherent groupings and decisions to form without losing sensitivity to the amount of evidence that supports them.
    || Laminar Computing: a new way to compute. 1. Feedforward and feedback: a) Fast feedforward processing when data are unambiguous (eg Thorpe etal), b) slower feedback chooses among ambiguous alternatives [self-normalizing property, real-time probability theory], c) A self-organizing system that trades certainty against speed: Goes beyond Bayesian models! 2. Analog and Digital: Analog Coherence combines the stability of digital with the sensitivity of analog. 3. Preattentive and Attentive Learning: Reconciles the differences of (eg) Helmholtz and Kanizsa, "A preattentive grouping is its own 'attentional' prime"
  • image p360fig10.09 Perceptual grouping is carried out in layer 2/3 by long-range horizontal excitatory recurrent connections, supplemented by short-range disynaptic inhibitory connections that together realize the bipole grouping properties that are diagrammed in Figure 10.10.
    || Grouping starts in layer 2/3. LGN-> 6-> 4-> 2/3: 1. Long-range horizontal excitation links collinear, coaxial receptive fields (Gilbert, Wiesel 1989; Bosking etal 1997; Schmidt etal 1997) 2. Short-range disynaptic inhibition of target pyramidal via pool of interneurons (Hirsch, Gilbert 1991) 3. Unambiguous groupings can form and generate feedforward outputs quickly (Thorpe etal 1996).
  • image p361fig10.10 Bipole grouping is achieved by long-range horizontal recurrent connections that also give rise to short-range inhibitory interneurons which inhibit nearby bipole cells as well as each other.
    || Bipole property controls perceptual grouping. Collinear input on both sides. Excitatory inputs summate. Inhibitory inputs normalize, Shunting inhibition! Two-against-one. Cell is excited.
  • image p367fig10.15 Data (left column) and simulation (right column) of how attention prevents a masking stimulus from inhibiting the response to the on-center of the cell from which the recording was made.
    || Attention protects target from masking stimulus (Reynolds etal 1999; Grossberg, Raizada 2000).
  • image p367fig10.16 Neurophysiological data (left image) and simulation (right image) of how a low-contrast target can be facilitated if it is surrounded by a pair of collinear flankers, and suppressed by them if it has high contrast.
    || Flankers can enhance or suppress targets (Polat etal 1998; Grossberg, Raizada 2000). target alone, target + flankers, flankers alone.
  • image p368fig10.17 Neurophysiological data (left image) and simulation (right image) showing that attention has a greater effect on low contrast than high contrast targets.
    || Attention has greater effect on low contrast targets (DeWeerd etal 1999; Raizada, Grossberg 2001). Threshold increase (deg) vs Grating contrast (%), [no, with] attention
  • image p368fig10.18 Neurophysiological data (left image) and simulation (right image) of relative on-cell activities when the input to that cell may also be surrounded by iso-orientation or perpendicular textures.
    || Texture reduces response to a bar: iso-orientation suppression (Knierim, van Essen 1992), perpendicular suppression (Raizada, Grossberg 2001)
  • image p369fig10.19 Data from (Watanabe etal 2001) showing perceptual learning of the coherent motion direction, despite the lack of extra-foveal attention and awareness of the moving stimuli.
    || Unconscious perceptual learning of motion direction, % correct for two tests, compared to chance level results.
  • image p393fig11.31 (Todd, Akerstrom 1987) created a series of 2D images from discrete black patches on a white disk and showed how the perceived depth varies with the factors summarized in the figure. The LIGHTSHAFT model quantitatively simulated their data.
    || Factors determining depth-from-texture percept. Perceived depth varies with texture element width, but only when elements are elongated and sufficiently aligned with one another to form long-range groupings. Data of (Todd, Akerstrom 1987) simulated by the LIGHTSHAFT model of (Grossberg, Kuhlmann 2007). [HP, LP, CCE, CCS, RO]
  • image p399fig11.39 Simulation of the eye rivalry data of (Lee, Blake 1999).
    || [Binocular, [left, right] eye] activity
  • image p402fig11.43 A pair of disparate images of a scene from the University of Tsukuba multiview image database.
    || input [left, right]
  • image p407fig12.03 Neurophysiological data showing how motor cortical cells code different vectors that are sensitive to both the direction of the commanded movement and its length.
    || (a) Single primary motor cortex neuron, onset of movement -> on..., radial architecture... (b) Motor cortex neuronal population, radial architecture...
  • image p409fig12.04 (top half) Neurophysiological data of vector cell responses in motor cortex. (bottom half) VITE model simulations of a simple movement in which the model's difference vector simulates the data as an emergent property of network interactions.
    || Neurophysiological data. VITE model [Present Position vector, Difference vector, Outflow velocity vector, go signal].
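The vector interactions named above can be sketched directly: in VITE, the difference vector DV = T - P is gated by a GO signal, and the present-position vector P integrates the gated DV. This is a minimal sketch with hypothetical parameter choices (time step, GO ramp), not the published simulation.

```python
import numpy as np

def vite(target, p0, go, dt=0.01, steps=1500):
    """Minimal VITE sketch. `go` is a function of time returning the GO
    gain. Returns the final present-position vector and the outflow-speed
    profile; a GO signal that grows over time yields the characteristic
    bell-shaped velocity profile."""
    p = np.array(p0, dtype=float)
    T = np.array(target, dtype=float)
    speeds = []
    for k in range(steps):
        dv = T - p                  # difference vector (area 5 phasic cells)
        v = go(k * dt) * dv         # GO-gated outflow velocity command
        p += dt * v                 # present position integrates outflow
        speeds.append(np.linalg.norm(v))
    return p, speeds
```

Because DV shrinks as P approaches T while GO grows, speed starts at zero, peaks mid-movement, and falls back to zero, which is the emergent profile the simulations compare to the area 4/5 cell data.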
  • image p410fig12.06 Monkeys seamlessly transformed a movement initiated towards the 2 o'clock target into one towards the 10 o'clock target when the latter target was substituted 50 or 100 msec after activation of the first target light.
    ||
  • image p414fig12.11 Neurophysiological data from cortical areas 4 and 5 (every other column) and simulations thereof (other columns) during a reach.
    || activation vs time. (a) area 4 phasic RT (IFV) (b) area 4 tonic (OPV) (c) area 4 phasic-tonic (OFPV) (d) area 4 phasic MT (DVV) (e) area 5 phasic (DV) (f) area 5 tonic (PPV)
  • image p424fig12.21 One of the many types of data about pitch processing that are simulated by the SPINET model. See the text for details.
    || Pitch shifts with component shifts (Patterson, Wightman 1976; Schouten 1962). Pitch vs lowest harmonic number.
  • image p425fig12.23 ARTSTREAM simulations of the auditory continuity illusion and other streaming properties (left column, top row). When two tones are separated by silence (Input), a percept of silence also separates them in a spectral-pitch resonance. (left column, bottom row). When two tones are separated by broadband noise, the percept of tone continues through the noise in one stream (stream 1) while the remainder of the noise occurs in a different stream (stream 2). (right column) Some of the other streaming properties that have been simulated by the ARTSTREAM model.
    || Auditory continuity does not occur without noise. Auditory continuity in noise. Other simulated streaming data.
  • image p428fig12.25 (left architecture) Auditory-articulatory feedback loop whereby babbled sounds activate learning in an imitative map that is later used to learn to reproduce the sounds of other speakers. An articulatory-to-auditory expectation renders learning possible by making the auditory and motor data dimensionally consistent, as in the motor theory of speech. (right architecture) Parallel streams in the ARTSPEECH model for learning speaker-independent speech and language meaning, including a mechanism for speaker normalization (right cortical stream) and for learning speaker-dependent vocalic qualities (left cortical stream).
    || left: Speaker-dependent vocalic qualities; right: Speaker-independent speech and language meaning
  • image p432fig12.28 (left image) The SpaN model simulates how spatial representations of numerical quantities are generated in the parietal cortex. (right image) Behavior numerosity data and SpaN model simulations of it.
    || (Left) preprocessor-> spatial number map-> Comparison wave. (Right) data axis: number of lever presses; model axis: node position in the spatial number axis
  • image p437fig12.32 Data from a free recall experiment illustrate the bowed serial position curve.
    || Serial position function for free recall. Data: (Murdock 1962 JEP 64, 482-488). % correct vs position of word on a 40-word list. Primacy gradient can be a mixture of STM and LTM read-out.
  • image p437fig12.33 Item and Order working memory models explain free recall data, as well as many other psychological and neurobiological data, by simulating how temporal series of events are stored as evolving spatial patterns of activity at content-addressable item categories. The categories with the largest activities are rehearsed first, and self-inhibit their activity as they do so in order to prevent them from being rehearsed perseveratively. The laws whereby the items are stored in working memory obey basic design principles concerning how list categories, or chunks, of sequences of stored items can be stably remembered.
    || Working memory models: item and order, or competitive queuing (Grossberg 1978; Houghton 1990; Page, Norris 1998). Event sequence in time stored as an evolving spatial pattern of activity. Primacy gradient of working memory activation stores correct temporal order at content-addressable cells. Maximally activated cell population is performed next when a rehearsal wave is turned on. Output signal from chosen cell population inhibits its own activity to prevent perseveration: inhibition of return. Iterate until entire sequence is performed.
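The Item-and-Order / competitive-queuing loop described above fits in a few lines. This is a minimal sketch, assuming a geometric primacy gradient; the decay factor and function names are illustrative, not from the book.

```python
import numpy as np

def store_primacy_gradient(n_items, decay=0.85):
    """Primacy gradient: each later item is stored with a smaller
    activity, so relative activity encodes temporal order."""
    return np.array([decay ** i for i in range(n_items)])

def rehearse(activities):
    """Competitive-queuing readout: when the rehearsal wave is on,
    repeatedly perform the maximally active item, then self-inhibit it
    (inhibition of return) so it is not performed perseveratively."""
    acts = activities.copy()
    order = []
    for _ in range(len(acts)):
        winner = int(np.argmax(acts))
        order.append(winner)
        acts[winner] = -np.inf      # output-contingent self-inhibition
    return order
```

With an intact primacy gradient the readout reproduces the input order; adding noise to the near-equal activities of neighboring items yields the transposition errors discussed for LIST PARSE below.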
  • image p443fig12.41 Neurophysiological data from the Averbeck etal sequential copying experiments show the predicted primacy gradient in working memory and the self-inhibition of activity as an item is stored. When only the last item remains stored, it has the highest activity because it has been freed from inhibition by earlier items.
    || Neurophysiology of sequential copying
  • image p444fig12.42 The LIST PARSE laminar cortical model of working memory and list chunking that I published with Lance Pearson in 2008 simulated the Averbeck etal data in Figure 12.41, as in the left column of the figure. It also simulated cognitive data about working memory storage by human subjects. See the text for details.
    || LIST PARSE: Laminar cortical model of working memory and list chunking (Grossberg, Pearson 2008). Simulates data about: [immediate, delayed, continuous] distractor free recall; immediate serial recall; and variable-speed sequential performance of motor acts. [velocity, acceleration] vs time (ms) from recall cue.
  • image p446fig12.44 (left column, top row) LIST PARSE can model linguistic data from human subjects. In this figure, model parameters are fixed to enable a close fit to data about error-type distributions in immediate free recall experiments, notably transposition errors. (right column, top row) Simulation and data showing bowing of the serial position curve, including an extended primacy gradient. (left column, bottom row) The simulation curve overlays data about list length effects, notably the increasing recall difficulty of longer lists during immediate serial recall (ISR). (right column, bottom row) Simulation (bottom image) and data (top image) of the limited temporal extent for recall.
    || (1. TL) Error-type distributions in immediate serial recall (Henson etal 1996). % occurrence vs serial position. Graph convention: Data- dashed lines; Simulations- solid lines. Six letter visual ISR. Order errors- transpositions of neighboring items are the most common. Model explanation: Noisy activation levels change relative order in primacy gradient. Similar activation of neighboring items most susceptible to noise. Model parameters fitted on these data. (2. TR) Bowing of serial position curve (Cowan etal 1999). % correct vs serial position. Auditory ISR with various list lengths (graphs shifted rightward): For [, sub-]span lists- extended primacy, with one (or two) item recency; Auditory presentation- enhanced performance for last items. LIST PARSE: End effects- first and last items have half as many neighbors; Echoic memory- last presented item retained in separate store. (3. BL) List length effects, circles (Crannell, Parrish 1968), squares (Baddeley, Hitch 1975), solid line- simulation. % list correct vs list length. Variable list length ISR: longer lists are more difficult to recall. LIST PARSE: More items- closer activation levels and lower absolute activity level with enough inputs; Noise is more likely to produce order errors; Activity levels more likely to drop below threshold. (4. BR) Limited temporal extent for recall (Murdock 1961). % recalled vs retention interval (s). ISR task with distractor-filled retention intervals (to prevent rehearsal): Increasing retention interval- decreases probability of recalling list correctly; Load dependence- longer lists more affected by delays; Performance plateau- subjects reach apparent asymptote. LIST PARSE: Increased convergence of activities with time; loss of order information.
  • image p447fig12.45 (left column) LIST PARSE simulations of the proportion of order errors as a function of serial position for 6 item lists with (a) an extended pause of 7 time units between the third and fourth items, and (b) pauses of 5 time units (solid curve) and 10 time units (dashed curve) between all items. (right column) Simulations (solid curves) and data (dashed curves) illustrating close model fits in various immediate free recall tasks.
    || (Left) Temporal grouping and presentation variability. Temporal grouping: Inserting an extended pause leads to inter-group bowing; Significantly different times of integration and activity levels across pause, fewer interchanges. (Right) Immediate free recall, and [delayed, continuous] distractor-free recall. Overt rehearsal IFR task with super-span (ie 20 item) lists: Extended recency- even more extended with shorter ISIs; Increased probability of recall with diminished time from last rehearsal; Early items in list rehearsed most. LIST PARSE (unique) for long lists: Incoming items form a recency gradient; Rehearsal (re-presentation) based upon level of activity.
  • image p452fig12.48 (left column) In experiments of (Repp etal 1978), the silence duration between the words GRAY and SHIP was varied, as was the duration of the fricative noise in S, with surprising results. (right column) The red arrow directs our attention to surprising perceptual changes as silence and noise durations increase. See the text for details.
    || Perceptual integration of acoustic cues, data (Repp etal 1978). GRAY-> silence duration-> SHIP (noise duration from start of word). Noise duration vs silence duration: GRAY SHIP <-> [GREAT SHIP <-> GRAY CHIP] <-> GREAT CHIP.
  • image p453fig12.49 The ARTWORD model that I published in 2000 with my PhD student Christopher Myers simulates data such as the (Repp etal 1978) data in Figure 12.48. See the text for details.
    || ARTWORD model (Grossberg, Myers 2000). Input phonetic features-> Phonemic item working memory-> Masking Field unitized lists-> Automatic gain control-> Phonemic item working memory. [habituative gate, adaptive filter]s.
  • image p465fig12.63 Neurophysiological data (left image) and lisTELOS simulation (right image) showing how microstimulation biases saccadic performance order but not the positions to which the saccades will be directed. See the text for details.
    || Saccade trajectories converge to a single location in space. Microstimulation biased selection so saccade trajectories converged toward a single location in space. [Data, model] contra <-> Ipsi (msec)
  • image p468fig12.65 Linguistic properties of the PHONET model and some of the data that it simulates. The upper left image summarizes the asymmetric transient-to-sustained gain control that helps to create invariant intraword ratios during variable-rate speech. The lower left image summarizes the rate-dependent gain control of the ARTPHONE model that creates rate-invariant working memory representations in response to sequences of variable-rate speech. The right image summarizes the kind of paradoxical VC-CV category boundary data of (Repp 1980) that ARTPHONE simulates. See the text for details.
    || (left upper) [transient, sustained] [working memory, filter, category]. (left lower) phone inputs-> [input rate estimate, features], Features <- habituative transmitter gates -> categories-> rate invariant phonetic output, input rate estimate-> gain control-> [features, categories] rate-dependent integration of categories and features. (right) % 2-stop vs VC-CV silent interval (msec): [ib-ga, ib-ba, iga, iba].
  • image p469fig12.66 (left column) A schematic of how preserving relative duration, as in the first and third images, of consonant and vowel pairs can preserve a percept, in this case of /ba/, but not doing so, as in the first and second images, can cause a change in percept, as from /ba/ to /wa/, as in the data of (Miller, Liberman 1979) that PHONET simulates. (right column) Changing frequency extent can also cause a /ba/ - /wa/ transition, as shown in data of (Schwab, Sawusch, Nusbaum 1981) that PHONET also simulates.
    || (left image) Maintaining relative duration as speech speeds up preserves percept (Miller, Liberman 1979). frequency vs time- [/ba/, /wa/, /ba/] (right image) Changing frequency extent causes /ba/-/wa/ transition (Schwab, Sawusch, Nusbaum 1981). frequency vs time- [/ba/, /wa/] Dt extent.
  • image p473fig12.69 Error rate and mean reaction time (RT) data from the lexical decision experiments of (Schvaneveldt, McDonald 1981). ART Matching Rule properties explain these data in (Grossberg, Stone 1986).
    || (left) Error rate vs type of prime [R, N, U], [non,] word. (right) Mean RT (msec) vs type of prime [R, N, U], [non,] word.
  • image p474fig12.70 The kind of model macrocircuit that was used in (Grossberg, Stone 1986) to explain lexical decision task data.
    || inputs-> A1 <-> A2 iconic sensory features <-> A3 item and order in sensory STM <-> A4 list parsing in STM (masking field) <-> A5 semantic network (self-feedback). [A4, A5] <-> V* visual object recognition system. M1-> [outputs, A1]. M1 <-> M2 iconic motor features <-> M3 item and order in motor STM. A2-> M2. A3-> M3.
  • image p476fig12.71 Word frequency data of (Underwood, Freund 1970) that were explained in (Grossberg, Stone 1986).
    || percent errors vs frequency of old words [L-H to H-H, L-L to H-L].
  • image p484fig13.04 The top-down feedback from the orbitofrontal cortex closes a feedback loop that supports a cognitive-emotional resonance. If this resonance can be sustained long enough, it enables us to have feelings at the same time that we experience the categories that caused them.
    || Cognitive-Emotional resonance. Basis of "core consciousness" and "the feeling of what happens". (Damasio 1999) derives heuristic version of CogEM model from his clinical data. Drive-> amygdala-> prefrontal cortex-> sensory cortex, resonance around the latter 3. How is this resonance maintained long enough to become conscious?
  • image p485fig13.06 (left column) An inverted-U occurs in conditioned reinforcer strength as a function of the ISI between the CS and the US. Why is learning attenuated at 0 ISI? (right column) Some classical conditioning data that illustrate the inverted-U in conditioning as a function of the ISI.
    || InterStimulus Interval (ISI) effect. Data from (Smith etal 1969; Schneiderman, Gormezano 1964).
  • image p490fig13.15 The CogEM circuit is an ancient design that is found even in mollusks like Aplysia. See the text for details.
    || Aplysia (Buonomano, Baxter, Byrne, Neural Networks 1990; Grossberg, Behavioral and Brain Sciences 1983). Facilitator neuron ~ drive representation.
  • image p504fig13.31 Behavioral contrast can occur during reinforcement learning due to decreases in either positive or negative reinforcers. See Figure 13.32 for illustrative operant conditioning data.
    || Behavioral contrast: rebounds! Shock level vs trials. 1. A sudden decrease in frequency or amount of food can act as a negative reinforcer: Frustration. 2. A sudden decrease in frequency or amount of shock can act as a positive reinforcer: Relief.
  • image p523fig14.03 (a) The MOTIVATOR neural model generalizes CogEM by also including the basal ganglia. It can hereby explain and simulate complementary functions of the amygdala and basal ganglia (SNc) during conditioning and learned performance. The basal ganglia generate Now Print signals in response to unexpected rewards. These signals modulate learning of new associations in many brain regions. The amygdala supports motivated attention to trigger actions that are expected to occur in response to conditioned or unconditioned stimuli. Object Categories represent visual or gustatory inputs in anterior inferotemporal (ITA) and rhinal (RHIN) cortices, respectively. Value Categories represent the value of anticipated outcomes on the basis of hunger and satiety inputs, in amygdala (AMYG) and lateral hypothalamus (LH). Object-Value Categories resolve the value of competing perceptual stimuli in medial (MORB) and lateral (ORB) orbitofrontal cortex. The Reward Expectation Filter detects the omission or delivery of rewards using a circuit that spans ventral striatum (VS), ventral pallidum (VP), striosomal delay (SD) cells in the ventral striatum, the pedunculopontine nucleus (PPTN) and midbrain dopaminergic neurons of the substantia nigra pars compacta/ventral tegmental area (SNc/VTA). The circuit that processes CS-related visual information (ITA, AMYG, ORB) operates in parallel with a circuit that processes US-related visual and gustatory information (RHIN, AMYG, MORB). (b) Reciprocal adaptive connections between hypothalamus and amygdala enable amygdala cells to become learned value categories. The bottom region represents hypothalamic cells, which receive converging taste and metabolite inputs whereby they become taste-drive cells. Bottom-up signals from activity patterns across these cells activate competing value categories, or US Value Representations, in the amygdala.
A winning value category learns to respond selectively to specific combinations of taste-drive activity patterns and sends adaptive top-down priming signals back to the taste-drive cells that activated it. CS-activated conditioned reinforcer signals are also associatively linked to value categories. Adaptive connections end in (approximately) hemidiscs. See the text for details.
    || /home/bill/web/Neural nets/Grossberg/Grossbergs [core, fun, strange] concepts.HtmWeb.html:1596:
  • image p533fig14.09 Search data and ARTSCENE Search simulations of them in each pair of images from (A) to (F). See the text for details.
    || 6*[data vs simulation], [Response time (ms) versus epoch]. /home/bill/web/Neural nets/Grossberg/Grossbergs [core, fun, strange] concepts.HtmWeb.html:1597:
  • image p542fig15.04 Conditioning data from (Smith 1968; Millenson etal 1977). The former shows the kind of Weber Law and inverted U that were simulated in Figure 15.3. The latter shows that, if there are two ISIs during an experiment, then the animals learn to adaptively time their responses with two properly scaled Weber laws.
    || (left) One ISI (Smith 1968) [mean membrane extension (mm) versus time after CS onset (msec)]. (right) Two ISIs (Millenson etal 1977) [200, 100] msec CS test trials, [mean momentary CS amplitude (mm) vs time after CS onset (msec)]. (bottom) Conditioned eye blinks, made with nictitating membrane and/or eyelid, are adaptively timed: peak closure occurs at expected time(s) of arrival of the US following the CS and obeys a Weber Law. /home/bill/web/Neural nets/Grossberg/Grossbergs [core, fun, strange] concepts.HtmWeb.html:1598:
  • image p543fig15.05 Simulation of conditioning with two ISIs that generate their own Weber Laws, as in the data shown in Figure 15.4.
    || Learning with two ISIs: simulation: R = sum[all: f(xi)*yi*xi] vs msec. Each peak obeys Weber Law! strong evidence for spectral learning. /home/bill/web/Neural nets/Grossberg/Grossbergs [core, fun, strange] concepts.HtmWeb.html:1599:
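The two-peaked simulation above can be sketched with a few lines of Python (my simplification of the R = sum[f(xi)*yi*xi] readout; the component shape g and its sharpness n are assumptions, not Grossberg's published equations). A population of cells with staggered reaction rates each peaks at a different delay, with width proportional to that delay; learning weights the components active when the US arrives, so the summed output peaks near each ISI with Weber-law widths:

```python
import math

def g(t, T, n=8):
    """Spectral component that peaks at time T; larger n sharpens the peak.
    Its width scales with T, which is where the Weber law comes from."""
    return ((t / T) * math.exp(1.0 - t / T)) ** n

def train(peak_times, isis):
    """Each adaptive weight grows with how active its component is when the
    US arrives (sampled here exactly at each ISI)."""
    return [sum(g(isi, T) for isi in isis) for T in peak_times]

def response(t, peak_times, z):
    """Adaptively timed population output R(t) = sum_i z_i * g_i(t)."""
    return sum(zi * g(t, T) for T, zi in zip(peak_times, z))

peak_times = [10.0 * 1.15 ** k for k in range(40)]   # ~10 ms to ~2300 ms
z = train(peak_times, isis=[200.0, 700.0])           # two-ISI experiment
curve = {t: response(t, peak_times, z) for t in range(1, 1200)}
```

Plotting `curve` gives two bumps, one near each ISI, the later one proportionally broader, echoing the Millenson etal data in Figure 15.4.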
  • image p556fig15.24 (a) Data showing normally timed responding (solid curve) and short latency responses after lesioning cerebellar cortex (dashed curve). (b) computer simulation of short latency response after ablation of model cerebellar cortex.
    || /home/bill/web/Neural nets/Grossberg/Grossbergs [core, fun, strange] concepts.HtmWeb.html:1600:
  • image p559fig15.28 Neurophysiological data (left column) and model simulations (right column) of SNc responses. See the text for details.
    || membrane potential vs time /home/bill/web/Neural nets/Grossberg/Grossbergs [core, fun, strange] concepts.HtmWeb.html:1601:
  • image p573fig16.01 The experimental chamber (A) and neurophysiological recordings from a rat hippocampus (B) that led to the discovery of place cells. See the text for details.
    || /home/bill/web/Neural nets/Grossberg/Grossbergs [core, fun, strange] concepts.HtmWeb.html:1602:
  • image p574fig16.02 Neurophysiological recordings of 18 different place cell receptive fields. See the text for details.
    || /home/bill/web/Neural nets/Grossberg/Grossbergs [core, fun, strange] concepts.HtmWeb.html:1603:
  • image p575fig16.03 As a rat navigates in its experimental chamber (black curves), neurophysiological recordings disclose the firing patterns (in red) of (a) a hippocampal place cell and (b) an entorhinal grid cell.
    || /home/bill/web/Neural nets/Grossberg/Grossbergs [core, fun, strange] concepts.HtmWeb.html:1604:
  • image p582fig16.08 Some experimental evidence for stripe-like cell receptive fields has been reported. The band cells posited by Neil Burgess also exhibit the one-dimensional firing symmetry of stripe cells, but are modeled by oscillatory interference. See the text for details.
    || Evidence for stripe-like cells. Entorhinal cortex data (Sargolini, Fyhn, Hafting, McNaughton, Witter, Moser, Moser 2006; Krupic, Burgess, O'Keefe 2012). Similar hypothetical construct used by Interference model but position is decoded by grid cell oscillatory interference- Band Cells (Burgess 2008). /home/bill/web/Neural nets/Grossberg/Grossbergs [core, fun, strange] concepts.HtmWeb.html:1605:
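The oscillatory-interference idea behind Burgess's band cells can be sketched in a few lines (a hedged illustration, not Burgess's published implementation; f0, beta, and the threshold are arbitrary choices). A baseline oscillator beats against a second oscillator whose frequency shifts with velocity along a preferred direction; the interference envelope then repeats along that direction, giving band/stripe-like firing fields:

```python
import math

def band_rate(path, f0=8.0, beta=0.02, dt=0.001):
    """Baseline theta oscillator at f0 (Hz) plus an 'active' oscillator whose
    frequency shifts with velocity along the preferred (x) direction, with
    gain beta in cycles/cm. Returns (x, rate) samples; the interference
    envelope repeats every 1/beta cm along x, independent of running speed."""
    theta_b = theta_a = 0.0
    rates, x = [], 0.0
    for vx in path:                   # path: x-velocity (cm/s) per dt step
        theta_b += 2 * math.pi * f0 * dt
        theta_a += 2 * math.pi * (f0 + beta * vx) * dt   # velocity-shifted
        x += vx * dt
        s = math.cos(theta_b) + math.cos(theta_a)        # interference
        rates.append((x, max(s, 0.0)))                   # thresholded output
    return rates
```

With beta = 0.02 cycles/cm the firing bands recur every 50 cm, with silent nodes midway between them: a one-dimensional analogue of the stripe/band-cell firing symmetry described above.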
  • image p589fig16.19 Neurophysiological data showing the smaller dorsal grid cell scales and the larger ventral grid cell scales.
    || Spatial scale of grid cells increases along the MEC dorsoventral axis (Hafting etal 2005; Sargolini etal 2006; Brun etal 2008). [dorsal (left), ventral (right)] cart [rate map, autocorrelogram]. How does the spatial scale increase along the MEC dorsoventral axis? /home/bill/web/Neural nets/Grossberg/Grossbergs [core, fun, strange] concepts.HtmWeb.html:1606:
  • image p593fig16.26 Data (left column) and simulations (right column) of the gradient of increasing grid cell spacing along the dorsoventral axis of MEC.
    || Gradient of grid spacing along dorsoventral axis of MEC (Brun etal 2008). data-[Distance (m?), Median grid spacing (m?)] simulations-[Grid spacing (cm), Grid spacing (cm)] vs response rate. /home/bill/web/Neural nets/Grossberg/Grossbergs [core, fun, strange] concepts.HtmWeb.html:1607:
  • image p594fig16.27 Data (left column) and simulations (right column) of the gradient of increasing grid cell field width along the dorsoventral axis of MEC.
    || Gradient of field width along dorsoventral axis of MEC (Brun etal 2008). data-[Distance (m?), Width autocorr peak (m?)] simulations-[Grid field width (cm), Width autocorr peak (cm)] vs response rate. /home/bill/web/Neural nets/Grossberg/Grossbergs [core, fun, strange] concepts.HtmWeb.html:1608:
  • image p595fig16.28 Data (left column) and simulations (right column) about peak and mean grid cell response rates along the dorsoventral axis of MEC.
    || Peak and mean rates at different locations along DV axis of MEC (Brun etal 2008). Peak rate (Hz) vs [data- DV quarter, simulations- Response rate]. /home/bill/web/Neural nets/Grossberg/Grossbergs [core, fun, strange] concepts.HtmWeb.html:1609:
  • image p596fig16.29 Data (top row) and simulations (bottom row) showing decreasing frequency of subthreshold membrane potential oscillations along the DV axis of MEC.
    || Subthreshold membrane potential oscillations at different locations along DV axis of MEC (Giocomo etal 2020; Yoshida etal 2011). Data [oscillations (Hz) vs distance from dorsal surface (mm) @[-50, -45] mV, Frequency (Hz) vs [-58, -54, -50] mV]. Simulations MPO frequency (Hz) vs [response, habituation] rate. /home/bill/web/Neural nets/Grossberg/Grossbergs [core, fun, strange] concepts.HtmWeb.html:160:
  • image p596fig16.30 Data (top row) and simulations (bottom row) of spatial phases of learned grid and place cells.
    || Spatial phases of learned grid and place cells (Hafting etal 2005). Data: Cross-correlogram of rate maps of two grid cells; Distribution of phase difference: distance from origin to nearest peak in cross-correlogram. Simulations: Grid cell histogram of spatial correlation coefficients; Place cell histogram of spatial correlation coefficients. /home/bill/web/Neural nets/Grossberg/Grossbergs [core, fun, strange] concepts.HtmWeb.html:1611:
  • image p597fig16.31 Data (a) and simulations (b-d) about multimodal place cell receptive fields in large spaces. The simulations are the result of learned place fields.
    || Multimodal place cell firing in large spaces (Fenton etal 2008; Henriksen etal 2010; Park etal 2011). Number of cells (%) vs Number of place fields. [2, 3] place fields, 100*100 cm space. /home/bill/web/Neural nets/Grossberg/Grossbergs [core, fun, strange] concepts.HtmWeb.html:1612:
  • image p597fig16.32 Data (top row) and simulations (bottom row) about grid cell development in juvenile rats. Grid score increases (a-b and d), whereas grid spacing remains fairly flat (c and e).
    || Model fits data about grid cell development (Wills etal 2010; Langston etal 2010). Data: [Gridness, grid score, inter-field distance (cm)]. Simulations: [Gridness score, Grid spacing (cm)] vs trial. /home/bill/web/Neural nets/Grossberg/Grossbergs [core, fun, strange] concepts.HtmWeb.html:1613:
  • image p598fig16.33 Data (top row) and simulations (bottom row) of changes in place cell properties in juvenile rats, notably about spatial information (a,c) and inter-trial stability (b,d).
    || Model fits data about grid cell development (Wills etal 2010). [Data, Simulation] vs [spatial information, inter-trial stability]. x-axis [age (postnatal day), trial]. /home/bill/web/Neural nets/Grossberg/Grossbergs [core, fun, strange] concepts.HtmWeb.html:1614:
  • image p599fig16.35 Data (a) and simulations (b,c) about anatomically overlapping grid cell modules. (a) shows the anatomical distribution of grid cells belonging to different modules in one animal. DV location (mm) vs postrhinal border. (b) shows the simulated distribution of learned grid cell spacings from two stripe cell scales. frequency (%) vs grid spacing (cm). mu = [1, 0.6]. (c) shows what happens when half the cells respond with one rate and half another rate. (d) shows the same with three rates. (e-g) show spatial maps and autocorrelograms of grid cells that arise from the different rates in (d). [rate map, autocorrelogram] vs [score [1.07, 0.5, 0.67], spacing (cm) [23.58, 41, 63.64]].
    || /home/bill/web/Neural nets/Grossberg/Grossbergs [core, fun, strange] concepts.HtmWeb.html:1615:
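The geometric intuition behind grid fields built from stripe-like inputs: summing three band patterns oriented 60 degrees apart yields a hexagonal firing array. A small sketch (my illustration of that standard construction; the cosine form and the 40 cm spacing are assumptions, not the book's learned-map model):

```python
import math

def grid_rate(x, y, spacing=40.0, orientations=(0.0, 60.0, 120.0)):
    """Hexagonal grid firing as a thresholded sum of three stripe (band-like)
    patterns 60 degrees apart. spacing is the distance (cm) between
    neighboring firing fields."""
    k = 4 * math.pi / (math.sqrt(3) * spacing)   # wave number for the spacing
    s = sum(math.cos(k * (x * math.cos(math.radians(a)) +
                          y * math.sin(math.radians(a))))
            for a in orientations)
    return max(s, 0.0)   # rectify: keep the hexagonal array of firing fields
```

Evaluating `grid_rate` over a square arena reproduces the hexagonal rate maps shown in the grid-cell figures: peaks on a triangular lattice one `spacing` apart, with silent regions between them.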
  • image p602fig16.37 Data showing the effect of hippocampal inactivation by muscimol on grid cell firing before, during, and six hours after the muscimol, reading from left to right.
    || Hippocampal inactivation disrupts grid cells (Bonnevie etal 2013). muscimol inactivation. spikes on trajectory: [before, after min [6-20, 20-40, 40-60, 6h]]. rate map (Hz) [18.6, 11.4, 9.5, 6.7, 10.8]. spatial autocorrelogram g=[1.12, 0.05, -0.34, 0.09, 1.27]. /home/bill/web/Neural nets/Grossberg/Grossbergs [core, fun, strange] concepts.HtmWeb.html:1616:
  • image p603fig16.38 Role of hippocampal feedback in maintaining grid fields. (a) Data showing the effect of hippocampal inactivation before and during muscimol inhibition of hippocampal cells, as in Figure 16.37. (b) Model simulation with normal grid fields. (c) Model simulation that emulates the effect of hippocampal inhibition on grid fields.
    || (a) Data: hippocampal inactivation [before, after] cart [spikes on trajectory (p: [18.6, 6.7] Hz), spatial autocorrelogram (g= [1.12, 0.09])]. (b) Model: noise-free path integration, [spikes on trajectory (p: 14.56 Hz), rate map, spatial autocorrelogram (g= 1.41), dynamic autocorrelogram (g=0.6)]. (c) Model: noisy path integration + non-specific tonic inhibition, [spikes on trajectory (p: 11.33 Hz), rate map, spatial autocorrelogram (g= 0.05), dynamic autocorrelogram (g=0.047)]. /home/bill/web/Neural nets/Grossberg/Grossbergs [core, fun, strange] concepts.HtmWeb.html:1617:
  • image p605fig16.39 Data showing effects of medial septum (MS) inactivation on grid cells and network theta oscillations in medial entorhinal cortex (MEC). (A) Examples of disruption in the spatial expression of the hexagonal grid structure for two grid cells (Brandon etal 2011). (B) Temporary reduction in the power and frequency of network theta oscillations (Koenig etal 2011). (C) Temporary reduction in the gridness score, mean firing rate, and spatial stability of grid cells (Koenig etal 2011).
    || Disruptive effects of Medial Septum inactivation in Medial Entorhinal Cortex (Brandon etal 2011; Koenig etal 2011). (A) Rate map [rate map, spatial autocorrelations, trajectory] vs [baseline, sub-sampled, medial septum inactivation, 3-6 hour recovery, 24 hour recovery], [rate map (Hz- m, p), spatial autocorrelations (gridness)][ 1.2, 7.2, 1.1; 0.25, 1.7, 0.6; 0.25, 2.5, -0.53; 0.7, 5.1, 0.55; 1.0, 5.3, 1.3; 2.1, 15, 0.19; 1.7, 12, 0.71; 1.7, 3.2, -0.22; 1.8, 9.1, 0.68; 2.5, 13, 0.46]. (B) [normalized power at 7-9 Hz, frequency (Hz)] vs 5-minute periods. (C) [mean gridness score (+-SEM), mean firing rate (% of baseline), mean correlation coeff (+-SEM)] vs 10-minute periods. /home/bill/web/Neural nets/Grossberg/Grossbergs [core, fun, strange] concepts.HtmWeb.html:1618:
  • image p607fig16.40 Effects of medial septum (MS) inactivation on grid cells. (a) Each row shows data and different data-derived measures of grid cell responsiveness, starting from the left with the baseline response to the middle column with maximal inhibition. (b) Data showing the temporary reduction in the gridness scores during MS inactivation, followed by recovery. (c) Simulation of the collapse in gridness, achieved by reduction in cell response rates to mimic reduced cholinergic transmission. (d,e) Simulations of the reduction in gridness scores in (d) by reduction of cell response rates, in (e) by changing the leak conductance. See the text for details.
    || /home/bill/web/Neural nets/Grossberg/Grossbergs [core, fun, strange] concepts.HtmWeb.html:1625:
  • Grossberg 2021 p229c2h0.60 SMART computer simulations demonstrate that a good enough match of a top-down expectation with a bottom-up feature pattern generates an attentive resonance during which the spikes of active cells synchronize in the gamma frequency range of 20-70 Hz (Figure 5.40). Many labs have reported a link between attention and gamma oscillations in the brain, including two articles published in 2001, one from the laboratory of Robert Desimone when he was at the National Institute of Mental Health in Bethesda (Fries, Reynolds, Rorie, Desimone 2001), and the other from the laboratory of Wolf Singer in Frankfurt (Engel, Fries, Singer 2001). You'll note that Pascal Fries participated in both studies, and is an acknowledged leader in neurobiological studies of gamma oscillations; eg (Fries 2009). /home/bill/web/Neural nets/Grossberg/Grossbergs [core, fun, strange] concepts.HtmWeb.html:162:
  • Grossberg 2021 p081c2h0.66 sub-section: How does one evolve a computational brain?
    The above discussion illustrates that no single step of theoretical derivation can derive a whole brain. One needs a method for deriving a brain in stages, or cycles, much as evolution has incrementally discovered ever more complex brains over many thousands of years. The following theoretical method has been successfully applied many times since I first used it in 1957. It embodies a kind of conceptual evolutionary process for deriving a brain.

    Because "brain evolution needs to achieve behavioral success", we need to start with data that embody indices of behavioral success. That is why, as illustrated in
    Figure 2.37, Modeling method and cycle, one starts with Behavioral Data from scores or hundreds of psychological experiments. These data are analyzed as the result of an individual adapting autonomously in real time to a changing world. This is the Art of Modeling. It requires that one be able to infer from static data curves the dynamical processes that control individual behaviors occurring in real time. One of the hardest things that I teach my students to do is "how to think in real time" to be able to carry out this speculative leap.

    Properly carried out, this analysis leads to the discovery of new Design Principles that are embodied by these behavioral processes. The Design Principles highlight the functional meaning of the data, and clarify how individual behaviors occurring in real time give rise to these static data curves.

    These principles are then converted into the simplest Mathematical Model using a method of minimal anatomies, which is a form of Occam's Razor, or principle of parsimony. Such a mathematical model embodies the psychological principles using the simplest possible differential equations. By "simplest" I mean that, if any part of the derived model is removed, then a significant fraction of the targeted data could no longer be explained. One then analyzes the model mathematically and simulates it on the computer, showing along the way how variations on the minimal anatomy can realize the design principles in different individuals or species.

    This analysis has always provided functional explanations and Behavioral Predictions for much larger behavioral data bases than those used to discover the Design Principles. The most remarkable fact is, however, that the behaviorally derived model always looks like part of a brain, thereby explaining a body of challenging Neural Data and making novel Brain Predictions.

    The derivation hereby links mind to brain via psychological organizational principles and their mechanistic realization as a mathematically defined neural network. This startling fact is what I first experienced as a college Freshman taking Introductory Psychology, and it changed my life forever.

    I conclude from having had this experience scores of times since 1957 that brains look the way they do because they embody a natural computational realization for controlling autonomous adaptation in real-time to a changing world. Moreover, the Behavior -> Principles -> Model -> Neural derivation predicts new functional roles for both known and unknown brain mechanisms by linking the brain data to how it helps to ensure behavioral success. As I noted above, the power of this method is illustrated by the fact that scores of these predictions about brain and behavior have been supported by experimental data 5-30 years after they were first published.

    Having made the link from behavior to brain, one can then "burn the candle from both ends" by pressing both top-down from Behavioral Data and bottom-up from Brain Data to clarify what the model can and cannot explain at its current stage of derivation. No model can explain everything. At each stage of development, the model can cope with certain environmental challenges but not others. An important part of the mathematical and computational analysis is to characterize the boundary between the known and unknown; that is, which challenges the model can cope with and which it cannot. The shape of this boundary between the known and unknown helps to direct the theorist's attention to new design principles that have been omitted from previous analysis.

    The next step is to show how these new design principles can be incorporated into the evolved model in a self-consistent way, without undermining its previous mechanisms, thereby leading to a progressively more realistic model, one that can explain and predict ever more behavioral and neural data. In this way, the model undergoes a type of evolutionary development, as it becomes able to cope behaviorally with environmental constraints of ever increasing subtlety and complexity. The Method of Minimal Anatomies may hereby be viewed as a way to functionally understand how increasingly demanding combinations of environmental pressures were incorporated into brains during the evolutionary process.

    If such an Embedding Principle cannot be carried out - that is, if the model cannot be unlumped or refined in a self-consistent way - then the previous model was, put simply, wrong, and one needs to figure out which parts must be discarded. Such a model is, as it were, an evolutionary dead end. Fortunately, this has not happened to me since I began my work in 1957 because the theoretical method is so conservative. No theoretical addition is made unless it is supported by multiple experiments that cannot be explained in its absence. Where multiple mechanistic instantiations of some Design Principles were possible, they were all developed in models to better understand their explanatory implications. Not all of these instantiations could survive the pressure of the evolutionary method, but some always could. As a happy result, all earlier models have been capable of incremental refinement and expansion.

    The cycle of model evolution has been carried out many times since 1957, leading today to increasing numbers of models that individually can explain and predict psychological, neurophysiological, anatomical, biophysical, and even biochemical data. In this specific sense, the classical mind-body problem is being incrementally solved.

    Howell: bold added for emphasis.
    (keys : Principles-Principia, behavior-mind-brain link, brain evolution, cycle of model evolution)
    see also quotes: Charles William Lucas "Universal Force" and others (not retyped yet). /home/bill/web/Neural nets/Grossberg/Grossbergs [core, fun, strange] concepts.HtmWeb.html:1635:
  • p190 Howell: [neural microcircuits, modal architectures] used in ART -
    bottom-up filters | top-down expectations | purpose
    instar learning | outstar learning | p200fig05.13 Expectations focus attention: [in, out]star learning is often used to learn the adaptive weights. Top-down expectations select, amplify, and synchronize expected patterns of critical features, while suppressing unexpected features.
    LGN Lateral Geniculate Nucleus | V1 cortical area | p192fig05.06 focus attention on expected critical features
    EC-III entorhinal stripe cells | CA1 hippocampal place cells | p600fig16.36 entorhinal-hippocampal system has properties of an ART spatial category learning system, with hippocampal place cells as the spatial categories
    auditory orienting arousal | auditory category | p215fig05.28 How a mismatch between bottom-up and top-down patterns can trigger activation of the orienting system A and, with it, a burst of nonspecific arousal to the category level. ART Matching Rule: TD mismatch can suppress a part of the F1 STM pattern; F2 is reset if degree of match < vigilance.
    auditory stream with/without [silence, noise] gaps | perceived sound continues? | p419fig12.17 The auditory continuity illusion: backwards in time - how does a future sound let a past sound continue through noise? Resonance!
    visual perception, learning and recognition of visual object categories | motion perception, spatial representation and target tracking | p520fig14.02 pART predictive Adaptive Resonance Theory. Many adaptive synapses are bidirectional, thereby supporting synchronous resonant dynamics among multiple cortical regions. The output signals from the basal ganglia that regulate reinforcement learning and gating of multiple cortical areas are not shown. (red - cognitive-emotional dynamics; green - working memory dynamics; black - see [bottom-up, top-down] lists)
    EEG with mismatch, arousal, and STM reset events | expected [P120, N200, P300] event-related potentials (ERPs) | p211fig05.21 Sequences of P120, N200, and P300 event-related potentials (ERPs) occur during oddball learning EEG experiments under conditions that ART predicted should occur during sequences of mismatch, arousal, and STM reset events, respectively.
    Cognitive | Emotional | p541fig15.02 nSTART neurotrophic Spectrally Timed Adaptive Resonance Theory: hippocampus can sustain a cognitive-emotional resonance that can support "the feeling of what happens" and knowing what event caused that feeling. Hippocampus enables adaptively timed learning that can bridge a trace conditioning gap, or other temporal gap between Conditioned Stimulus (CS) and Unconditioned Stimulus (US).

    background colours in the table signify :
    white | general microcircuit : a possible component of ART architecture
    lime green | sensory perception [attention, expectation, learn]. Table includes [see, hear, !!*must add touch example*!!], no Grossberg [smell, taste] yet?
    light blue | post-perceptual cognition?
    pink | "the feeling of what happens" and knowing what event caused that feeling
    /home/bill/web/Neural nets/Grossberg/Grossbergs [core, fun, strange] concepts.HtmWeb.html:164:
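The instar/outstar rules named in the first table row can be sketched as gated steepest descent (a minimal illustration, not the full ART equations; the learning rate and winner-take-all gating are simplifying assumptions): only an active category's weights move toward the pattern they sample.

```python
def instar(weights, pattern, winner, lr=0.5):
    """Instar (gated) learning: only the winning category's bottom-up weight
    vector moves toward the current input pattern, at rate lr."""
    w = weights[winner]
    weights[winner] = [wi + lr * (pi - wi) for wi, pi in zip(w, pattern)]
    return weights

def outstar(weights, pattern, winner, lr=0.5):
    """Outstar learning: the winner's top-down expectation moves toward the
    pattern it samples, so reading out the category later replays it. In this
    simplified form the update is the same, with source and target roles
    transposed."""
    return instar(weights, pattern, winner, lr)
```

Repeated presentations make the winner's weights converge on the sampled pattern, which is how the top-down expectations in the table come to "select, amplify, and synchronize expected patterns of critical features".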
  • image p085fig02.38 Our models have been used in many large-scale applications to engineering and technology. Linking brain to behavior explains how brain mechanisms give rise to psychological functions, and do so autonomously. The combination of mechanism, function, and autonomy helps to explain their value in helping to solve outstanding problems in technology.
    || Modeling method and cycle.
    Behavioral data -(art of modeling)-> Design principles <- Neural data <-(brain predictions)- Mathematical model and analysis -(behavioral predictions)-> Behavioural data
    Technology: Mathematical model and analysis <-> Technological applications
    At every stage, spin off new model designs and mechanisms to technologists who need autonomous intelligent applications. /home/bill/web/Neural nets/Grossberg/Grossbergs [core, fun, strange] concepts.HtmWeb.html:1654:
  • image p225fig05.33 Some early ARTMAP benchmark studies. These successes led to the use of ARTMAP, and many variants that we and other groups have developed, in many large-scale applications in engineering and technology, a use that has not abated even today.
    || see Early ARTMAP benchmark studies /home/bill/web/Neural nets/Grossberg/Grossbergs [core, fun, strange] concepts.HtmWeb.html:1655:
  • image p490fig13.15 The CogEM circuit is an ancient design that is found even in mollusks like Aplysia. See the text for details.
    || Aplysia (Buonomano, Baxter, Byrne, Neural Networks 1990; Grossberg, Behavioral and Brain Sciences 1983). Facilitator neuron ~ drive representation. /home/bill/web/Neural nets/Grossberg/Grossbergs [core, fun, strange] concepts.HtmWeb.html:1656:
  • image p563fig15.33 The basal ganglia gate neural processing in many parts of the brain. The feedback loop through the lateral orbitofrontal cortex (blue arrow, lateral orbitofrontal) is the one that MOTIVATOR models.
    || MOTIVATOR models one of several thalamocortical loops through basal ganglia (Adapted from Fundamental Neuroscience. 2002 Copyright Elsevier). [cortex-> striatum-> pallidum S. nigra-> thalamus] vs [motor, oculomotor, dorsolateral prefrontal, lateral orbitofrontal, anterior cingulate]. thalamus-> [striatum, cortex]. /home/bill/web/Neural nets/Grossberg/Grossbergs [core, fun, strange] concepts.HtmWeb.html:1657:
  • image p563fig15.34 The colored regions are distinct parts of the basal ganglia in the loops depicted in Figure 15.33.
    || Distinct basal ganglia zones for each loop (Adapted from Fundamental Neuroscience. 2002 Copyright Elsevier). /home/bill/web/Neural nets/Grossberg/Grossbergs [core, fun, strange] concepts.HtmWeb.html:166:
  • Navigation: [menu, link, directory]s
    /home/bill/web/Neural nets/Grossberg/Grossbergs [core, fun, strange] concepts.HtmWeb.html:193:
  • Theme webPage generation by bash script
    /home/bill/web/Neural nets/Grossberg/Grossbergs [core, fun, strange] concepts.HtmWeb.html:194:
  • Notation for [chapter, section, figure, table, index, note]s
    /home/bill/web/Neural nets/Grossberg/Grossbergs [core, fun, strange] concepts.HtmWeb.html:195:
  • incorporate reader questions into theme webPages
    /home/bill/web/Neural nets/Grossberg/Grossbergs [core, fun, strange] concepts.HtmWeb.html:221:
  • Grossberg 2021 p081c2h0.66 sub-section: How does one evolve a computational brain?
    The above discussion illustrates that no single step of theoretical derivation can derive a whole brain. One needs a method for deriving a brain in stages, or cycles, much as evolution has incrementally discovered ever more complex brains over many thousands of years. The following theoretical method has been successfully applied many times since I first used it in 1957. It embodies a kind of conceptual evolutionary process for deriving a brain.

    Because "brain evolution needs to achieve behavioural success", we need to start with data that embodiey indices of behavioral success. That is why, as illustrated in Figure 2.37 Modelling method and cycle, one starts with Behavioral Data from scores or hundreds of psychological experiments. These data are analyszed as the result of an individual adapting autonomously in real time to a changing world. This is the Arty of Modeling. It requires that one be able to infer from static data curves the dynamical processes that control individual behaviors occuring in real time. One of the hardest things that I teach to my students to do is "how to think in real time" to be able to carry out this speculative leap.

    Properly carried out, this analysis leads to the discovery of new Design Principles that are embodied by these behavioral processes. The Design Principles highlight the functional meaning of the data, and clarify how individual behaviors occurring in real time give rise to these static data curves.

    These principles are then converted into the simplest Mathematical Model using a method of minimal anatomies, which is a form of Occam's Razor, or principle of parsimony. Such a mathematical model embodies the psychological principles using the simplest possible differential equations. By "simplest" I mean that, if any part of the derived model is removed, then a significant fraction of the targeted data could no longer be explained. One then analyzes the model mathematically and simulates it on the computer, showing along the way how variations on the minimal anatomy can realize the design principles in different individuals or species.

    This analysis has always provided functional explanations and Behavioral Predictions for much larger behavioral data bases than those used to discover the Design Principles. The most remarkable fact is, however, that the behaviorally derived model always looks like part of a brain, thereby explaining a body of challenging Neural Data and making novel Brain Predictions.

    The derivation hereby links mind to brain via psychological organizational principles and their mechanistic realization as a mathematically defined neural network. This startling fact is what I first experienced as a college Freshman taking Introductory Psychology, and it changed my life forever.

    I conclude from having had this experience scores of times since 1957 that brains look the way they do because they embody a natural computational realization for controlling autonomous adaptation in real time to a changing world. Moreover, the Behavior -> Principles -> Model -> Neural derivation predicts new functional roles for both known and unknown brain mechanisms by linking the brain data to how it helps to ensure behavioral success. As I noted above, the power of this method is illustrated by the fact that scores of these predictions about brain and behavior have been supported by experimental data 5-30 years after they were first published.

    Having made the link from behavior to brain, one can then "burn the candle from both ends" by pressing both top-down from Behavioral Data and bottom-up from Brain Data to clarify what the model can and cannot explain at its current stage of derivation. No model can explain everything. At each stage of development, the model can cope with certain environmental challenges but not others. An important part of the mathematical and computational analysis is to characterize the boundary between the known and unknown; that is, which challenges the model can cope with and which it cannot. The shape of this boundary between the known and unknown helps to direct the theorist's attention to new design principles that have been omitted from previous analysis.

    The next step is to show how these new design principles can be incorporated into the evolved model in a self-consistent way, without undermining its previous mechanisms, thereby leading to a progressively more realistic model, one that can explain and predict ever more behavioral and neural data. In this way, the model undergoes a type of evolutionary development, as it becomes able to cope behaviorally with environmental constraints of ever increasing subtlety and complexity. The Method of Minimal Anatomies may hereby be viewed as a way to functionally understand how increasingly demanding combinations of environmental pressures were incorporated into brains during the evolutionary process.

    If such an Embedding Principle cannot be carried out - that is, if the model cannot be unlumped or refined in a self-consistent way - then the previous model was, put simply, wrong, and one needs to figure out which parts must be discarded. Such a model is, as it were, an evolutionary dead end. Fortunately, this has not happened to me since I began my work in 1957 because the theoretical method is so conservative. No theoretical addition is made unless it is supported by multiple experiments that cannot be explained in its absence. Where multiple mechanistic instantiations of some Design Principles were possible, they were all developed in models to better understand their explanatory implications. Not all of these instantiations could survive the pressure of the evolutionary method, but some always could. As a happy result, all earlier models have been capable of incremental refinement and expansion.

    The cycle of model evolution has been carried out many times since 1957, leading today to increasing numbers of models that individually can explain and predict psychological, neurophysiological, anatomical, biophysical, and even biochemical data. In this specific sense, the classical mind-body problem is being incrementally solved.

    Howell: bold added for emphasis.
    (keys : Principles-Principia, behavior-mind-brain link, brain evolution, cycle of model evolution)
    see also quotes: Charles William Lucas "Universal Force" and others (not retyped yet). /home/bill/web/Neural nets/Grossberg/Grossbergs [core, fun, strange] concepts.HtmWeb.html:236:
  • image p059fig02.05 Bowed serial position curve. This kind of data emphasizes the importance of modelling how our brains give rise to our minds using nonlinear systems of differential equations.
    || Effects of [inter, intra]trial intervals (Hovland 1938). # of errors vs list position. [w (sec), W (sec)] = (2 6) (4 6) (2 126) (4 126). Nonoccurrence of future items reduces the number of errors in response to past items. These data require a real-time theory for their explanation! That is, DIFFERENTIAL equations. /home/bill/web/Neural nets/Grossberg/Grossbergs [core, fun, strange] concepts.HtmWeb.html:237:
  • image p064fig02.10 The Shunting Model includes upper and lower bounds on neuronal activities. These bounds have the effect of multiplying additive terms by excitatory and inhibitory automatic gain terms that enable such models to preserve their sensitivity to inputs whose size may vary greatly through time, while also approximately normalizing their total activities.
    || STM: Shunting Model (Grossberg, PNAS 1967, 1968). Mass action in membrane equations. Bi/Ci -> xi(t) -> O -> -Fi/Ei. Bounded activations, automatic gain control. d[dt: xi(t)] = -Ai*xi + (Bi - Ci*xi)*sum[j=1 to n: fj(xj(t))*Dji*yji*zji + Ii] - (Ei*xi + Fi)*sum[j=1 to n: gj(xj)*Gji*Yji*Zji + Ji]. Includes the Additive Model. /home/bill/web/Neural nets/Grossberg/Grossbergs [core, fun, strange] concepts.HtmWeb.html:238:
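The bounding property claimed in this caption is easy to check numerically. Below is a minimal one-cell sketch of the shunting equation (my simplification, not Grossberg's full network: Ci = Ei = 1, Fi = 0, one lumped excitatory and one inhibitory input; parameter values are arbitrary illustrations):

```python
# Euler integration of a one-cell shunting equation (special case of the
# Shunting Model above):  dx/dt = -A*x + (B - x)*I_exc - x*I_inh.
# Activity x stays in [0, B] and equilibrates at B*I_exc/(A + I_exc + I_inh).
def shunt_equilibrium(A, B, I_exc, I_inh, dt=1e-3, steps=20000, x=0.0):
    for _ in range(steps):
        x += dt * (-A * x + (B - x) * I_exc - x * I_inh)
    return x

A, B = 1.0, 1.0
x1 = shunt_equilibrium(A, B, I_exc=9.0, I_inh=0.0)    # B*I/(A+I) = 0.9
x2 = shunt_equilibrium(A, B, I_exc=900.0, I_inh=0.0)  # huge input, still < B
x3 = shunt_equilibrium(A, B, I_exc=9.0, I_inh=10.0)   # inhibition gain-controls: 9/20
```

The automatic gain control is visible in x2: a 100-fold larger input still cannot push the activity past the upper bound B.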
  • image p064fig02.11 Medium-Term Memory (MTM) and Long-Term Memory (LTM) equations complement the Additive and Shunting Models of STM. MTM is typically defined by a chemical transmitter that is released from the synaptic knobs of a neuron (Figure 2.03). Its release or inactivation in an activity-dependent way is also called habituation. LTM defines how associative learning occurs between a pair of neurons whose activities are approximately correlated through time. See the text for details.
    || Medium and Long Term memory.
    MTM habituative transmitter gate: d[dt: yki(t)] = H*(K - yki) - L*fk(xk)*yki
    LTM gated steepest descent learning: d[dt: zki(t)] = Mk*fk(xk)*(hi(xi) - zki)
    /home/bill/web/Neural nets/Grossberg/Grossbergs [core, fun, strange] concepts.HtmWeb.html:239:
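The MTM and LTM laws in this caption can be sketched in a few lines. This is my own numeric illustration (constant signal f and target h, arbitrary parameter values), not code from the book:

```python
# Euler sketch of the MTM habituative gate and LTM gated-learning equations:
#   dy/dt = H*(K - y) - L*f*y    (transmitter y habituates while signal f is on)
#   dz/dt = M*f*(h - z)          (weight z tracks target h only while f > 0)
def run(f, h, H=1.0, K=1.0, L=2.0, M=1.0, dt=1e-3, steps=20000):
    y, z = K, 0.0
    for _ in range(steps):
        y += dt * (H * (K - y) - L * f * y)
        z += dt * (M * f * (h - z))
    return y, z

y_eq, z_eq = run(f=1.0, h=0.7)    # y habituates to H*K/(H + L*f) = 1/3; z learns h
y_off, z_off = run(f=0.0, h=0.7)  # no signal: y stays at K, z never learns
```

The gating by f is the key point: with f = 0 the weight z is buffered against change, which is how such laws avoid catastrophic forgetting between activations.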
  • image p068fig02.14 Hodgkin and Huxley developed a model to explain how spikes travel down the squid giant axon.
    || Neurophysiology (single cell): spike potentials in squid giant axon (Hodgkin, Huxley 1952, Nobel Prize). time -> (dendrites -> cell body -> axon).
    C*dp[dt: V] = α*dp^2[dX^2: V] + (V(+) - V)*g(+) + (V(-) - V)*g(-) + (V^p - V)*g^p
    g(+) = G(+)(m,h), g(-) = G(-)(n), G^p = const, [m, h, n] - ionic processes, V - voltage
    Precursor of Shunting network model (Rall 1962). (Howell: see p075fig02.24 Membrane equations of neurophysiology, Shunting equation). /home/bill/web/Neural nets/Grossberg/Grossbergs [core, fun, strange] concepts.HtmWeb.html:23: /home/bill/web/Neural nets/Grossberg/Grossbergs [core, fun, strange] concepts.HtmWeb.html:240:
  • image p074fig02.23 The equations for a shunting on-center off-surround network. Shunting terms lead to many beautiful and important properties of these networks, which are found ubiquitously, in one form or another, in all cellular tissues.
    || Shunting on-center off-surround network.
    Mass action: d[dt: xi] = -A*xi +(B - xi)*Ii -xi*sum[k≠i: Ik]
    Turn on unexcited sites; Turn off excited sites
    At equilibrium:
    0 = d[dt: xi] = -(A + Ii + sum[k≠i: Ik])*xi + B*Ii = -(A + I)*xi + B*Ii
    xi = B*Ii/(A + I) = θi*B*I/(A + I), θi = Ii/I → No saturation!
    Infinite dynamical range
    Automatic gain control
    Compute ratio scale
    Weber law
    x = sum[k=1 to n: xk] = B*I/(A + I) ≤ B → Conserve total activity
    NORMALIZATION
    Limited capacity
    Real-time probability
    /home/bill/web/Neural nets/Grossberg/Grossbergs [core, fun, strange] concepts.HtmWeb.html:241:
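The equilibrium listed in this caption (xi = B*Ii/(A + I)) makes the normalization, ratio-scale, and no-saturation properties directly checkable. A minimal sketch (arbitrary inputs and parameters, chosen only for illustration):

```python
# Equilibrium activities of a shunting on-center off-surround network:
#   x_i = B*I_i/(A + I), with I = sum of all inputs.
def equilibria(inputs, A=1.0, B=1.0):
    I = sum(inputs)
    return [B * Ii / (A + I) for Ii in inputs]

dim = equilibria([1.0, 3.0])       # input ratio 1:3
bright = equilibria([10.0, 30.0])  # 10x the input energy, same ratios
total_dim, total_bright = sum(dim), sum(bright)  # both stay below B
```

The relative pattern (a ratio, i.e. Weber-law, scale) is preserved while total activity is normalized below B, no matter how intense the input becomes.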
  • image p075fig02.24 The membrane equations of neurophysiology describe how cell voltages change in response to excitatory, inhibitory, and passive input channels. Each channel is described by a potential difference multiplied by a conductance. With the special choices shown in the lower right-hand corner, this equation defines a feedforward shunting on-center off-surround network.
    || Membrane equations of neurophysiology.
    C*dp[dt: V] = (V(+) - V)*g(+) +(V(-) - V)*g(-) +(V(p) - V)*g(p)
    Shunting equation (not additive)
    V Voltage
    V(+), V(-), V(p) Saturating voltages
    g(+), g(-), g(p) Conductances
    V(+) = B, C = 1; V(-) = V(p) = 0; g(+) = Ii; g(-) = sum[k≠i: Ik];
    lower bound of V: V(-) = V(p) (silent inhibition); upper bound of V: V(+). (Howell: see p068fig02.14 Grossberg's comment that Hodgkin&Huxley model was a "... Precursor of Shunting network model (Rall 1962) ..."). /home/bill/web/Neural nets/Grossberg/Grossbergs [core, fun, strange] concepts.HtmWeb.html:241:
  • image p079fig02.32 Matching amplifies the matched pattern due to automatic gain control. See terms I and J in the equation.
    || Substrate of resonance. Match (in phase) of BU and TD input patterns AMPLIFIES matched pattern due to automatic gain control by shunting terms. J = sum[i: Ji], I = sum[i: Ii], θi = (Ii + Ji)/(I + J)
    xi = (B + C)*(I + J)/(A + I + J)*[θi -C/(B + C)]
    Need top-down expectations to be MODULATORY. /home/bill/web/Neural nets/Grossberg/Grossbergs [core, fun, strange] concepts.HtmWeb.html:243:
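The equilibrium in this caption follows from a shunting on-center off-surround network with lower bound -C driven by summed bottom-up (I) and top-down (J) inputs. A quick numeric check of the amplification claim (parameter values A, B, C and the input patterns are arbitrary illustrations, not values from the book):

```python
# Equilibrium activity of cell i in a shunting net with lower bound -C:
#   x_i = (B + C)*(I + J)/(A + I + J) * [theta_i - C/(B + C)],
# where theta_i = (I_i + J_i)/(I + J).  A matched top-down pattern J raises
# the gain factor (I + J)/(A + I + J), amplifying the matched pattern.
def x_eq(i, bu, td, A=0.5, B=1.0, C=0.2):
    I, J = sum(bu), sum(td)
    theta = (bu[i] + td[i]) / (I + J)
    return (B + C) * (I + J) / (A + I + J) * (theta - C / (B + C))

bottom_up = [0.6, 0.3, 0.1]
matched = x_eq(0, bottom_up, [0.6, 0.3, 0.1])  # TD expectation matches BU input
alone = x_eq(0, bottom_up, [0.0, 0.0, 0.0])    # BU input alone
```

Since theta_i is unchanged by a proportional match, the boost comes entirely from the automatic gain term, which is the "substrate of resonance" point.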
  • image p202fig05.17 This figure summarizes the simplest equations whereby the adaptive weights of a winning category learn the input pattern that drove it to win, or more generally a time-average of all the input patterns that succeeded in doing so.
    || Geometry of choice and learning, learning trains the closest LTM vector /home/bill/web/Neural nets/Grossberg/Grossbergs [core, fun, strange] concepts.HtmWeb.html:244:
  • image p359fig10.07 The two bottom-up pathways from LGN to layer 4 of V1 can together activate layer 4 and contrast-normalize layer 4 responses.
    || Bottom-up contrast normalization (Grossberg 1968, 1973; Sperling, Sondhi 1968; Heeger 1992; Douglas etal 1995; Shapley etal 2004). Together, direct LGN-to-4 path and 6-to-4 on-center off-surround provide contrast normalization if cells obey shunting or membrane equation dynamics. /home/bill/web/Neural nets/Grossberg/Grossbergs [core, fun, strange] concepts.HtmWeb.html:245:
  • image p501fig13.26 A simple differential equation describes the processes of transmitter accumulation and release that do their best, at a finite rate, to carry out unbiased transduction.
    || Transmitter accumulation and release. Transmitter y cannot be restored at an infinite rate: T = S*y, y ≈ B. Differential equations: d[dt: y] = A*(B - y) - S*y = accumulate - release. Transmitter y tries to recover to ensure unbiased transduction. What if it falls behind? Evolution has exploited the good properties that happen then. /home/bill/web/Neural nets/Grossberg/Grossbergs [core, fun, strange] concepts.HtmWeb.html:246:
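The accumulate-release law in this caption has a closed-form equilibrium that shows exactly how the transmitter "falls behind" strong signals. A small sketch (unit parameters chosen for illustration):

```python
# Equilibrium of d[dt: y] = A*(B - y) - S*y is y = A*B/(A + S), so the
# release rate T = S*y = A*B*S/(A + S) grows with the signal S but
# saturates at A*B: transduction is unbiased for small S, compressive for large S.
def release_rate(S, A=1.0, B=1.0):
    y = A * B / (A + S)  # accumulated transmitter at equilibrium
    return S * y         # T = S*y

rates = [release_rate(S) for S in (1.0, 10.0, 100.0)]  # monotone, all < A*B
```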
  • image p505fig13.33 An unexpected event can disconfirm ongoing processing by triggering a burst of nonspecific arousal that causes antagonistic rebounds in currently active gated dipoles, whether cognitive or affective.
    || Novelty reset: rebound to arousal onset. 1. Equilibrate to I and J: S1 = f(I+J); y1 = A*B/(A+S1); S2 = f(I+J); y2 = A*B/(A+S2);. 2. Keep phasic input J fixed; increase arousal I to I* = I + ∆I: (a) OFF reaction if T1 < T2; OFF = T2 - T1 = f(I*+J)*y2 - f(I*)*y1 = { A*B*(f(I*) - f(I*+J)) - B*(f(I*)*f(I+J) - f(I)*f(I*+J)) } / (A+f(I)) / (A + f(I+J)). 3. How to interpret this complicated equation? /home/bill/web/Neural nets/Grossberg/Grossbergs [core, fun, strange] concepts.HtmWeb.html:247:
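One way to interpret the rebound equation is to evaluate it numerically. This sketch assumes a linear signal function f(w) = w and the habituated transmitter equilibrium y = A*B/(A + S) from Figure 13.26 (parameter values are my illustrations); with linear f the rebound is positive only when the arousal increment ∆I exceeds A:

```python
# Gated-dipole OFF rebound to an arousal burst, with ON channel receiving
# tonic + phasic input (I + J) and OFF channel receiving tonic input I only.
def off_rebound(I, J, dI, A=1.0, B=1.0, f=lambda w: w):
    y_on = A * B / (A + f(I + J))  # habituated ON-channel transmitter
    y_off = A * B / (A + f(I))     # less-habituated OFF-channel transmitter
    Istar = I + dI                 # nonspecific arousal burst
    T_on = f(Istar + J) * y_on
    T_off = f(Istar) * y_off
    return T_off - T_on            # > 0 means an antagonistic rebound fires

big = off_rebound(I=1.0, J=1.0, dI=2.0)    # rebound: 3/2 - 4/3 = 1/6 > 0
small = off_rebound(I=1.0, J=1.0, dI=0.5)  # arousal burst too small: no rebound
```

The asymmetry comes entirely from the differential habituation y_on < y_off: the same arousal increment is gated more strongly in the OFF channel.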
  • image p580fig16.05 Macrocircuit of the GridPlaceMap model, which can learn both 2D grid cells and place cells in response to realistic trajectories of navigating rats using a hierarchy of SOMs with identical equations.
    || GridPlaceMap model: rate-based and spiking (Pilly, Grossberg 2012). Pre-wired 1D stripe cells, learns both 2D grid and place cells! Same laws for both; both select most frequent and energetic inputs. Place cells emerge gradually in response to developing grid cells. [place-> grid-> stripe] cells-> path integration-> vestibular signals /home/bill/web/Neural nets/Grossberg/Grossbergs [core, fun, strange] concepts.HtmWeb.html:248:
  • image p586fig16.16 In the place cell learning model of (Gorchetnikov, Grossberg 2007), three populations of five cells each of entorhinal grid cells (only two are shown) with different spatial periods input to the model's dentate gyrus. The grid cells are one-dimensional and defined algorithmically. A model dentate gyrus granule cell that receives strong projections from all three grid cell scales fires (green cell) and activates a recurrent inhibitory interneuron that inhibits other granule cells. It also generates back-propagating action potentials that trigger learning in the adaptive weights of the projections from the grid cells, thereby causing learning of place cell receptive fields.
    || Grid-to-place Self-Organizing map (Gorchetnikov, Grossberg 2007). Formation of place cell fields via grid-to-place cell learning. Least common multiple: [grid (cm), place (m)] scales: [40, 50, 60 (cm); 6m], [50, 60, 70 (cm); 21m], [41, 53, 59 (cm); 1.282 km]. Our simulations: [40, 50 (cm); 2m], [44, 52 (cm); 5.72m]. Our SOM: Spiking Hodgkin-Huxley membrane equations; Nonlinear choice by contrast-enhancing recurrent on-center off-surround net;. Choice triggers back-propagating action potentials that induce STDP-modulated learning on cell dendrites. /home/bill/web/Neural nets/Grossberg/Grossbergs [core, fun, strange] concepts.HtmWeb.html:249:
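The "least common multiple" scaling quoted in this caption can be verified directly: a place field driven by several periodic grid spacings repeats only at the LCM of those spacings (all values in cm):

```python
# Check the grid-to-place LCM scales quoted in the caption.
from math import lcm  # variadic lcm, Python >= 3.9

assert lcm(40, 50, 60) == 600     # 6 m
assert lcm(50, 60, 70) == 2100    # 21 m
assert lcm(41, 53, 59) == 128207  # ~1.282 km (three coprime spacings)
assert lcm(40, 50) == 200         # 2 m   (the paper's simulation scales)
assert lcm(44, 52) == 572         # 5.72 m
```

The near-coprime triple [41, 53, 59] shows why small changes in grid spacing can produce enormous place-field scales.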
  • image p593fig16.25 Spectral Spacing Model STM, MTM, and LTM equations. The rate spectrum that determines the dorsoventral gradient of multiple grid cell properties is defined by μm.
    || Spectral Spacing Model equations. [STM, MTM, LTM]. μm = rate spectrum. /home/bill/web/Neural nets/Grossberg/Grossbergs [core, fun, strange] concepts.HtmWeb.html:250:
  • Grossberg 2021 p081c2h0.66 sub-section: How does one evolve a computational brain?
    The above discussion illustrates that no single step of theoretical derivation can derive a whole brain. One needs a method for deriving a brain in stages, or cycles, much as evolution has incrementally discovered ever more complex brains over many thousands of years. The following theoretical method has been successfully applied many times since I first used it in 1957. It embodies a kind of conceptual evolutionary process for deriving a brain.

    see also quotes: Charles William Lucas "Universal Force" and others (not retyped yet). /home/bill/web/Neural nets/Grossberg/Grossbergs [core, fun, strange] concepts.HtmWeb.html:266:
  • image pxvifig00.01 Macrocircuit of the visual system /home/bill/web/Neural nets/Grossberg/Grossbergs [core, fun, strange] concepts.HtmWeb.html:267:
  • image p087fig03.01 A macrocircuit of key visual processes (in green) and the cortical areas in which they primarily occur (in red), from the retina to the Prefrontal Cortex (PFC), including both the What and Where cortical streams. The [bottom-up, horizontal, and top-down] interactions help each of these processes to overcome computationally complementary processing deficiencies that they would experience without them, and also to read out top-down expectations that help to stabilize learning while they focus attention on salient objects and positions.
    || Emerging unified theory of visual intelligence. [What, Where] streams. Bottom-up and top-down interactions overcome COMPLEMENTARY processing deficiencies. /home/bill/web/Neural nets/Grossberg/Grossbergs [core, fun, strange] concepts.HtmWeb.html:268:
  • image p168fig04.44 Macrocircuit of the main boundary and surface formation stages that take place from the lateral geniculate nucleus, or LGN, through cortical areas [V1, V2, V4]. See the text for details.
    ||
    left eye | binocular | right eye
    V4 binocular surface
    V2 monocular surface | V2 layer 2/3 binocular boundary | V2 monocular surface
    V2 layer 4 binocular boundary
    V1 monocular surface | V1 monocular boundary | V1 binocular boundary | V1 monocular boundary | V1 monocular surface
    LGN | LGN
    /home/bill/web/Neural nets/Grossberg/Grossbergs [core, fun, strange] concepts.HtmWeb.html:269:
  • image p228fig05.37 A macrocircuit of the neurotrophic Spectrally Timed ART, or nSTART, model. I developed nSTART with my PhD student Daniel Franklin. It proposes how adaptively timed learning in the hippocampus, bolstered by Brain Derived Neurotrophic Factor, or BDNF, helps to ensure normal memory consolidation.
    || habituative gates, CS, US, Thalamus (sensory cortex, category learning, conditioned reinforcer learning, adaptively timed learning and BDNF), Amygdala (incentive motivation learning), Hippocampus (BDNF), Prefrontal Cortex (attention), Pontine nuclei, Cerebellum (adaptively timed motor learning) /home/bill/web/Neural nets/Grossberg/Grossbergs [core, fun, strange] concepts.HtmWeb.html:26: /home/bill/web/Neural nets/Grossberg/Grossbergs [core, fun, strange] concepts.HtmWeb.html:270:
  • image p246fig05.48 Microcircuits of the LAMINART model that I developed with Rajeev Raizada. See the text for details of how they integrate bottom-up adaptive filtering, horizontal bipole grouping, and top-down attentional matching that satisfies the ART Matching Rule.
    || /home/bill/web/Neural nets/Grossberg/Grossbergs [core, fun, strange] concepts.HtmWeb.html:271:
  • image p346fig09.16 A macrocircuit of some of the main brain regions that are used to move the eyes. Black boxes denote areas belonging to the saccadic eye movement system (SAC), white boxes the smooth pursuit eye movement system (SPEM), and gray boxes, both systems. The abbreviations for the different brain regions are: LIP - Lateral Intra-Parietal area; FPA - Frontal Pursuit Area; MST - Middle Superior Temporal area; MT - Middle Temporal area; FEF - Frontal Eye Fields; NRPT - Nucleus Reticularis Tegmenti Pontis; DLPN - Dorso-Lateral Pontine Nuclei; SC - Superior Colliculus; CBM - CereBelluM; MVN/rLVN - Medial and Rostro-Lateral Vestibular Nuclei; PPRF - a Peri-Pontine Reticular Formation; TN - Tonic Neurons
    || /home/bill/web/Neural nets/Grossberg/Grossbergs [core, fun, strange] concepts.HtmWeb.html:272:
  • image p436fig12.30 The conscious ARTWORD, or cARTWORD, laminar cortical speech model simulates how future context can disambiguate noisy past speech sounds in such a way that the completed percept is consciously heard to proceed from past to future as a feature-item-list resonant wave propagates through time.
    || cARTWORD: Laminar cortical model macrocircuit (Grossberg, Kazerounian 2011) Simulates PHONEMIC RESTORATION: Cognitive Working Memory (processed item sequences) - [Excitatory-> inhibitory-> habituative-> adaptive filter-> adaptive filter-> adaptive filter with depletable synapse-> Acoustic [item, feature] /home/bill/web/Neural nets/Grossberg/Grossbergs [core, fun, strange] concepts.HtmWeb.html:273:
  • image p474fig12.70 The kind of model macrocircuit that was used in (Grossberg, Stone 1986) to explain lexical decision task data.
    || inputs-> A1 <-> A2 iconic sensory features <-> A3 item and order in sensory STM <-> A4 list parsing in STM (masking field) <-> A5 semantic network (self-feedback). [A4, A5] <-> V* visual object recognition system. M1-> [outputs, A1]. M1 <-> M2 iconic motor features <-> M3 item and order in motor STM. A2-> M2. A3-> M3. /home/bill/web/Neural nets/Grossberg/Grossbergs [core, fun, strange] concepts.HtmWeb.html:274:
  • image p481fig13.01 Macrocircuit of the functional stages and anatomical interpretations of the Cognitive-Emotional-Motor, or CogEM, model.
    || Drive-> hypothalamus value categories <-> amygdala incentive motivational learning-> Orbitofrontal cortex- object-value categories <-> sensory cortex- invariant object categories- conditioned reinforcer learning-> amygdala-> hypothalamus. /home/bill/web/Neural nets/Grossberg/Grossbergs [core, fun, strange] concepts.HtmWeb.html:275:
  • image p520fig14.02 Macrocircuit of the main brain regions, and connections between them, that are modelled in the unified predictive Adaptive Resonance Theory (pART) of cognitive-emotional and working memory dynamics. Abbreviations in red denote brain regions used in cognitive-emotional dynamics. Those in green denote brain regions used in working memory dynamics. Black abbreviations denote brain regions that carry out visual perception, learning and recognition of visual object categories, and motion perception, spatial representation and target tracking. Arrows denote non-excitatory synapses. Hemidiscs denote adaptive excitatory synapses. Many adaptive synapses are bidirectional, thereby supporting synchronous resonant dynamics among multiple cortical regions. The output signals from the basal ganglia that regulate reinforcement learning and gating of multiple cortical areas are not shown. Also not shown are output signals from cortical areas to motor responses. V1: striate, or primary, visual cortex; V2 and V4: areas of prestriate visual cortex; MT: Middle Temporal cortex; MST: Medial Superior Temporal area; ITp: posterior InferoTemporal cortex; ITa: anterior InferoTemporal cortex; PPC: Posterior Parietal Cortex; LIP: Lateral InterParietal area; VPA: Ventral PreArcuate gyrus; FEF: Frontal Eye Fields; PHC: ParaHippocampal Cortex; DLPFC: DorsoLateral PreFrontal Cortex; HIPPO: hippocampus; LH: Lateral Hypothalamus; BG: Basal Ganglia; AMYG: AMYGdala; OFC: OrbitoFrontal Cortex; PRC: PeriRhinal Cortex; VPS: Ventral bank of the Principal Sulcus; VLPFC: VentroLateral PreFrontal Cortex. See the text for further details.
    || /home/bill/web/Neural nets/Grossberg/Grossbergs [core, fun, strange] concepts.HtmWeb.html:276:
  • image p532fig14.08 Macrocircuit of the ARTSCENE Search neural model for learning to search for desired objects by using the sequences of already experienced objects and their locations to predict what and where the desired object is. V1 = First visual area or primary visual cortex; V2 = Second visual area; V4 = Fourth visual area; PPC = Posterior Parietal Cortex; ITp = posterior InferoTemporal cortex; ITa = anterior InferoTemporal cortex; MTL = Medial Temporal Lobe; PHC = ParaHippoCampal cortex; PRC = PeriRhinal Cortex; PFC = PreFrontal Cortex; DLPFC = DorsoLateral PreFrontal Cortex; VPFC = Ventral PFC; SC = Superior Colliculus.
    || /home/bill/web/Neural nets/Grossberg/Grossbergs [core, fun, strange] concepts.HtmWeb.html:277:
  • image p580fig16.05 Macrocircuit of the GridPlaceMap model, which can learn both 2D grid cells and place cells in response to realistic trajectories of navigating rats using a hierarchy of SOMs with identical equations.
    || GridPlaceMap model: rate-based and spiking (Pilly, Grossberg 2012). Pre-wired 1D stripe cells, learns both 2D grid and place cells! Same laws for both; both select most frequent and energetic inputs. Place cells emerge gradually in response to developing grid cells. [place-> grid-> stripe] cells-> path integration-> vestibular signals /home/bill/web/Neural nets/Grossberg/Grossbergs [core, fun, strange] concepts.HtmWeb.html:278:
  • image p599fig16.35 Data (a) and simulations (b,c) about anatomically overlapping grid cell modules. (a) shows the anatomical distribution of grid cells belonging to different modules in one animal. DV location (mm) vs postrhinal border. (b) shows the simulated distribution of learned grid cell spacings from two stripe cell scales. frequency (%) vs grid spacing (cm). mu = [1, 0.6]. (c) shows what happens when half the cells respond with one rate and half another rate. (d) shows the same with three rates. (e-g) show spatial maps and autocorrelograms of grid cells that arise from the different rates in (d). [rate map, autocorrelogram] vs [score [1.07, 0.5, 0.67], spacing (cm) [23.58, 41, 63.64]].
    || /home/bill/web/Neural nets/Grossberg/Grossbergs [core, fun, strange] concepts.HtmWeb.html:279:
  • image p612fig16.42 Macrocircuit of the main SOVEREIGN subsystems.
    || [reward input, drive input, drive representation (DR), visual working memory and planning system (VWMPS), visual form and motion system (VFMS), motor approach and orienting system (MAOS), visual input (VisIn), motor working memory and planning system (MWMPS), motor approach and orienting system (MAOS), motor plant (MotP), Proprioceptive Input (PropIn), Vestibular Input (VesIn), Environmental feedback (EnvFB). DR [incentive motivational learning-> [VWMPS, MWMPS], -> VFMS, -> MAOS], VWMPS [conditioned reinforcer learning-> DR, MAOS], VFMS [visual object categories-> VWMPS, reactive movement commands-> MAOS], MWMPS [conditioned reinforcer learning-> DR, planned movement commands-> MAOS], MAOS [motor map positions-> MWMPS, motor outflow-> MotP], VisIn-> VFMS, VesIn-> MAOS, EnvFB-> [VisIn, MotP, VesIn]. /home/bill/web/Neural nets/Grossberg/Grossbergs [core, fun, strange] concepts.HtmWeb.html:281:
  • p190 Howell: [neural microcircuits, modal architectures] used in ART -
    bottom-up filters | top-down expectations | purpose
    instar learning | outstar learning | p200fig05.13 Expectations focus attention: [in, out]star often used to learn the adaptive weights. top-down expectations to select, amplify, and synchronize expected patterns of critical features, while suppressing unexpected features
    LGN Lateral Geniculate Nucleus | V1 cortical area | p192fig05.06 focus attention on expected critical features
    EC-III entorhinal stripe cells | CA1 hippocampal place cells | p600fig16.36 entorhinal-hippocampal system has properties of an ART spatial category learning system, with hippocampal place cells as the spatial categories
    auditory orienting arousal | auditory category | p215fig05.28 How a mismatch between bottom-up and top-down patterns can trigger activation of the orienting system A and, with it, a burst of nonspecific arousal to the category level. ART Matching Rule: TD mismatch can suppress a part of F1 STM pattern, F2 is reset if degree of match < vigilance
    auditory stream with/without [silence, noise] gaps | perceived sound continues? | p419fig12.17 The auditory continuity illusion: Backwards in time - How does a future sound let past sound continue through noise? Resonance!
    visual perception, learning and recognition of visual object categories | motion perception, spatial representation and target tracking | p520fig14.02 pART predictive Adaptive Resonance Theory. Many adaptive synapses are bidirectional, thereby supporting synchronous resonant dynamics among multiple cortical regions. The output signals from the basal ganglia that regulate reinforcement learning and gating of multiple cortical areas are not shown.
    red - cognitive-emotional dynamics
    green - working memory dynamics
    black - see [bottom-up, top-down] lists
    EEG with mismatch, arousal, and STM reset events | expected [P120, N200, P300] event-related potentials (ERPs) | p211fig05.21 Sequences of P120, N200, and P300 event-related potentials (ERPs) occur during oddball learning EEG experiments under conditions that ART predicted should occur during sequences of mismatch, arousal, and STM reset events, respectively.
    Cognitive | Emotional | p541fig15.02 nSTART neurotrophic Spectrally Timed Adaptive Resonance Theory: Hippocampus can sustain a Cognitive-Emotional resonance: that can support "the feeling of what happens" and knowing what event caused that feeling. Hippocampus enables adaptively timed learning that can bridge a trace conditioning gap, or other temporal gap between Conditioned Stimulus (CS) and Unconditioned Stimulus (US).

    background colours in the table signify:
    white | general microcircuit: a possible component of ART architecture
    lime green | sensory perception [attention, expectation, learn]. Table includes [see, hear, !!*must add touch example*!!], no Grossberg [smell, taste] yet?
    light blue | post-perceptual cognition?
    pink | "the feeling of what happens" and knowing what event caused that feeling
  • image p025fig01.16 (left panel) The main processing stages of the Cognitive-Emotional-Motor (CogEM) model have anatomical interpretations in terms of sensory cortex, amygdala, and prefrontal cortex. Chapter 13 will describe in greater detail how CS cues activate invariant object categories in the sensory cortex, value categories in the amygdala, and object-value categories in the prefrontal cortex, notably the orbitofrontal cortex. The amygdala is also modulated by internal drive inputs like hunger and satiety. (right panel) Anatomical data support this circuit, as do many neurophysiological data.
    || drive -> amygdala -> prefrontal cortex <-> sensory cortex -> amygdala. [visual, somatosensory, auditory, gustatory, olfactory] cortex -> [amygdala, Orbital Prefrontal Cortex]. amygdala -> Lateral Prefrontal Cortex
  • image p481fig13.01 Macrocircuit of the functional stages and anatomical interpretations of the Cognitive-Emotional-Motor, or CogEM, model.
    || Drive-> hypothalamus value categories <-> amygdala incentive motivational learning-> Orbitofrontal cortex- object-value categories <-> sensory cortex- invariant object categories- conditioned reinforcer learning-> amygdala-> hypothalamus.
  • image p483fig13.02 The object-value categories in the orbitofrontal cortex require converging specific inputs from the sensory cortex and nonspecific incentive motivational inputs from the amygdala in order to fire. When the orbitofrontal cortex fires, it can deliver top-down ART Matching Rule priming signals to the sensory cortical area by which it was activated, thereby helping to choose the active recognition categories there that have the most emotional support, while suppressing others, leading to attentional blocking of irrelevant cues.
    || Cognitive-Emotional-Motor (CogEM) model. Drive-> amygdala incentive motivational learning-> orbitofrontal cortex- needs converging cue and incentive inputs to fire <-> sensory cortex- conditioned reinforcer learning-> amygdala. CS-> sensory cortex. Motivated attention closes the cognitive-emotional feedback loop, focuses on relevant cues, and causes blocking of irrelevant cues.
  • image p483fig13.03 The predicted processing stages of CogEM have been supported by anatomical studies of connections between sensory cortices, amygdala, and orbitofrontal cortex.
    || Adapted from (Barbas 1995). sensory cortices = [visual, somatosensory, auditory, gustatory, olfactory]. sensory cortices-> amygdala-> orbital prefrontal cortex. sensory cortices-> orbital prefrontal cortex. [visual cortex, amygdala]-> lateral prefrontal cortex.
  • image p484fig13.04 The top-down feedback from the orbitofrontal cortex closes a feedback loop that supports a cognitive-emotional resonance. If this resonance can be sustained long enough, it enables us to have feelings at the same time that we experience the categories that caused them.
    || Cognitive-Emotional resonance. Basis of "core consciousness" and "the feeling of what happens". (Damasio 1999) derives heuristic version of CogEM model from his clinical data. Drive-> amygdala-> prefrontal cortex-> sensory cortex, resonance around the latter 3. How is this resonance maintained long enough to become conscious?
  • image p487fig13.11 The three main properties of CogEM that help to explain how attentional blocking occurs.
    || CogEM explanation of attentional blocking. Internal drive input <-> Conditioned reinforcer learning (self-recurrent) <-> Competition for STM <- Motor learning. 1. Sensory representations compete for limited capacity STM. 2. Previously reinforced cues amplify their STM via positive feedback. 3. Other cues lose STM via competition.
  • image p489fig13.13 (top row) If a positive ISI separates onset of a CS and US, then the CS can sample the consequences of the US during the time interval before it is inhibited by it. (bottom row) A CogEM simulation of the inverted-U in conditioning as a function of the ISI between CS and US.
    || Positive ISI and conditioning.
  • image p490fig13.15 The CogEM circuit is an ancient design that is found even in mollusks like Aplysia. See the text for details.
    || Aplysia (Buonomano, Baxter, Byrne, Neural Networks 1990; Grossberg, Behavioral and Brain Sciences 1983). Facilitator neuron ~ drive representation.
  • image p494fig13.19 (left column, top row) Secondary conditioning of both arousal and a specific response are now possible. (bottom row) The CogEM circuit may be naturally extended to include multiple drive representations and inputs. (right column, top row) The incentive motivational pathway is also conditionable in order to enable motivational sets to be learned.
    || Secondary conditioning. Homology: conditionable incentive motivation. Multiple drive representations and inputs.
  • image p514fig13.44 Analog of the CogEM model in Figure 6.1 of (Damasio 1999).
    || (a) map of object X-> map of proto-self at inaugural instant-> [, map of proto-self modified]-> assembly of second-order map. (b) map of object X enhanced-> second-order map imaged.
  • image p523fig14.03 (a) The MOTIVATOR neural model generalizes CogEM by also including the basal ganglia. It can hereby explain and simulate complementary functions of the amygdala and basal ganglia (SNc) during conditioning and learned performance. The basal ganglia generate Now Print signals in response to unexpected rewards. These signals modulate learning of new associations in many brain regions. The amygdala supports motivated attention to trigger actions that are expected to occur in response to conditioned or unconditioned stimuli. Object Categories represent visual or gustatory inputs in anterior inferotemporal (ITA) and rhinal (RHIN) cortices, respectively. Value Categories represent the value of anticipated outcomes on the basis of hunger and satiety inputs, in amygdala (AMYG) and lateral hypothalamus (LH). Object-Value Categories resolve the value of competing perceptual stimuli in medial (MORB) and lateral (ORB) orbitofrontal cortex. The Reward Expectation Filter detects the omission or delivery of rewards using a circuit that spans ventral striatum (VS), ventral pallidum (VP), striosomal delay (SD) cells in the ventral striatum, the pedunculopontine nucleus (PPTN) and midbrain dopaminergic neurons of the substantia nigra pars compacta/ventral tegmental area (SNc/VTA). The circuit that processes CS-related visual information (ITA, AMYG, ORB) operates in parallel with a circuit that processes US-related visual and gustatory information (RHIN, AMYG, MORB). (b) Reciprocal adaptive connections between hypothalamus and amygdala enable amygdala cells to become learned value categories. The bottom region represents hypothalamic cells, which receive converging taste and metabolite inputs whereby they become taste-drive cells. Bottom-up signals from activity patterns across these cells activate competing value categories, or US Value Representations, in the amygdala.
A winning value category learns to respond selectively to specific combinations of taste-drive activity patterns and sends adaptive top-down priming signals back to the taste-drive cells that activated it. CS-activated conditioned reinforcer signals are also associatively linked to value categories. Adaptive connections end in (approximately) hemidiscs. See the text for details.
  • image p548fig15.16 Homologous recognition learning and reinforcement learning macrocircuits enable adaptively timed conditioning in the reinforcement learning circuit to increase inhibition of the orienting system at times when a mismatch in the recognition system would have reduced inhibition of it.
    || Homolog between ART and CogEM model, complementary systems. [Recognition, Reinforcement] learning vs [Attentional, Orienting] system. Reinforcement: timing, drive representation.
  • image p080fig02.33 An opposite-attracts rule during the development of intracellular connections can lead to a mature network that realizes informational noise suppression.
    || How do noise suppression parameters arise? Symmetry-breaking during morphogenesis? Opposites attract rule.
    Intracellular parameters: C/B = 1/(n - 1). Intercellular parameters.
    Predicts that intracellular excitatory and inhibitory saturation points can control the growth during development of intercellular excitatory and inhibitory connections.
  • image p012fig01.08 A sigmoidal signal function is a hybrid signal that combines the best properties of [faster, same, slower]-than linear signals. It can suppress noise and store a partially contrast-enhanced activity pattern. slower-than-linear saturates pattern; approximately linear- preserves pattern and normalizes; faster-than-linear- noise suppression and contrast-enhancement.
    || Sigmoidal signal: a hybrid. (upper) saturates pattern- slower-than-linear; (middle) preserves pattern and normalizes- approximately linear. (lower) noise suppression and contrast enhancement- faster-than-linear.
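A minimal numeric sketch of the hybrid behavior, assuming the common Grossberg-style sigmoid f(x) = x^2/(K^2 + x^2) (the exact signal function in the figure is not specified here; K is illustrative):

```python
# Hedged sketch: one common sigmoid signal used in shunting networks,
# f(x) = x^2 / (K^2 + x^2).  K is an illustrative constant, not from the text.
def f(x, K=1.0):
    return x**2 / (K**2 + x**2)

# Faster-than-linear below K: doubling a small input more than doubles the
# signal, which suppresses noise and contrast-enhances the pattern.
assert f(0.4) > 2 * f(0.2)
# Slower-than-linear above K: doubling a large input less than doubles the
# signal, so large activities saturate rather than explode.
assert f(8.0) < 2 * f(4.0)
```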
  • image p078fig02.30 Choosing the adaptation level to achieve informational noise suppression.
    || Noise suppression. Attenuate Zero Spatial frequency patterns: no information. Ii vs i (flat line), xi vs i (flat line at zero)
    B >> C: Try B = (n - 1)*C or C/(B + C) = 1/n
    Choose a uniform input pattern (no distinctive features): All θi = 1/n
    xi = (B + C)*I/(A + I)*[θi -C/(B + C)] = 0 no matter how intense I is.
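A quick numeric check of this adaptation-level rule (parameter values are illustrative):

```python
# Hedged numeric check of the adaptation-level condition in the text:
# with B = (n-1)*C, a uniform pattern (theta_i = 1/n) is fully suppressed.
n, A, C = 5, 1.0, 2.0
B = (n - 1) * C          # the noise-suppression choice from the text
I = 100.0                # total input intensity (arbitrary)
theta = [1.0 / n] * n    # uniform pattern: no distinctive features
x = [(B + C) * I / (A + I) * (th - C / (B + C)) for th in theta]
assert all(abs(xi) < 1e-12 for xi in x)   # suppressed no matter how intense I is
```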
  • image p078fig02.31 How noise suppression enables matching of bottom-up and top-down input patterns.
    || Noise suppression -> pattern matching. mismatch (out of phase) suppressed, match (in phase) amplifies pattern.
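A minimal sketch of how noise suppression yields matching, assuming the equilibrium law xi ∝ θi - 1/n (with B = (n-1)*C) applied to the sum of a bottom-up and a top-down pattern; the patterns are illustrative:

```python
# Hedged sketch: out-of-phase bottom-up and top-down patterns sum to a nearly
# uniform pattern, which the noise-suppressing network zeroes out; in-phase
# patterns reinforce each other and survive.
n, A, C = 4, 1.0, 1.0
B = (n - 1) * C

def respond(I_in):
    I = sum(I_in)
    return [(B + C) * I / (A + I) * (Ii / I - C / (B + C)) for Ii in I_in]

bottom_up = [3.0, 1.0, 3.0, 1.0]
match     = [3.0, 1.0, 3.0, 1.0]   # in phase with the bottom-up pattern
mismatch  = [1.0, 3.0, 1.0, 3.0]   # out of phase: peaks fill each other's troughs

suppressed = respond([b + t for b, t in zip(bottom_up, mismatch)])
amplified  = respond([b + t for b, t in zip(bottom_up, match)])
assert all(abs(xi) < 1e-12 for xi in suppressed)   # mismatch -> uniform -> zero
assert max(amplified) > 0                          # match -> pattern survives
```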
  • image p080fig02.34 How to achieve informational noise suppression in a network with multiple parallel processing channels.
    || Symmetry-breaking: dynamics and anatomy.
    Dynamics:
    • excitatory range is amplified
    • inhibitory range is compressed
    Anatomy:
    • narrow on-center
    • broad off-surround
    Noise suppression: attenuates uniform patterns
    Contour direction: enhances pattern gradients
  • image p081fig02.36 Informational noise suppression in network with Gaussian on-center and off-surround function as contour detectors that are sensitive to ratio-contrast.
    || Noise suppression and contour detection.
    If B*sum[k=1 to n: Cki] <= D*sum[k=1 to n: Eki] then:
    • uniform patterns are suppressed
    • contrasts are selectively enhanced
    • contours are detected
    Ii vs i, xi vs i
    Responses are selective to [REFLECTANCE, SPATIAL SCALE], eg color [feature, surface] contours.
  • image p510fig13.39 Shunting competition and informational noise suppression in affective gated dipoles, plus back-propagating action potentials for teaching signals, enable the net normalized adaptive weights to be learned. They never saturate!
    || Learn net dipole output pattern. Opponent "decision" controls learning. Cf. competitive learning. Learning signal, opponent extinction.
  • image p009fig01.06 Primacy gradient of activity stored in working memory within a recurrent shunting on-center off-surround network. Rehearsal is controlled by a nonspecific rehearsal wave and self-inhibitory feedback of the item that is currently being rehearsed. Green = excitatory, red = inhibitory
    || inputs? -> item and order WM storage -> competitive selection-> rehearsal wave -> outputs
  • image p024fig01.15 A REcurrent Associative Dipole, or READ, circuit is a recurrent shunting on-center off-surround network with habituative transmitter gates. Sensory cues sample it with LTM traces and thereby become conditioned reinforcers.
  • image p038fig01.25 The ART Matching Rule stabilizes real time learning using a [top-down, modulatory on-center, off-surround] network. Object attention is realized by such a network. See text for additional discussion.
    || ART Matching Rule [volition, categories, features]. [one, two] against one.
  • image p073fig02.22 An on-center off-surround network is capable of computing input ratios.
    || Computing with patterns.
    How to compute the pattern-sensitive variable: θi = Ii / sum[k=1 to n: Ik]?
    Needs interactions! What type? θi = Ii / sum[k ≠ i: Ik]
    Ii↑ ⇒ θi↑ excitation, Ik↑ ⇒ θk↓, k ≠ i inhibition
    On-center off-surround network.
  • image p074fig02.23 The equations for a shunting on-center off-surround network. Shunting terms lead to many beautiful and important properties of these networks, which are found ubiquitously, in one form or another, in all cellular tissues.
    || Shunting on-center off-surround network.
    Mass action: d[dt: xi] = -A*xi +(B - xi)*Ii -xi*sum[k≠i: Ik]
    Turn on unexcited sites; turn off excited sites.
    At equilibrium:
    0 = d[dt: xi] = -(A + Ii + sum[k≠i: Ik])*xi + B*Ii = -(A + I)*xi + B*Ii
    xi = B*Ii/(A + I) = B*θi*I/(A + I) = θi*B*I/(A + I) - No saturation!
    Infinite dynamical range
    Automatic gain control
    Compute ratio scale
    Weber law
    x = sum[k=1 to n: xk] = B*I/(A + I) ≤ B - Conserve total activity
    NORMALIZATION
    Limited capacity
    Real-time probability
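These equilibrium properties can be verified by integrating the shunting equation directly (Euler integration; inputs and parameters are illustrative, not from the text):

```python
# Hedged sketch: Euler-integrate the shunting equation from the text,
# d/dt xi = -A*xi + (B - xi)*Ii - xi*sum(k!=i: Ik), and check that
# equilibrium activities are normalized ratios, xi = B*Ii/(A + I).
A, B = 1.0, 10.0
I_in = [3.0, 1.0, 6.0]           # illustrative inputs
I = sum(I_in)
x = [0.0] * len(I_in)
dt = 0.01
for _ in range(5000):            # integrate to (near) equilibrium
    x = [xi + dt * (-A * xi + (B - xi) * Ii - xi * (I - Ii))
         for xi, Ii in zip(x, I_in)]
for xi, Ii in zip(x, I_in):
    assert abs(xi - B * Ii / (A + I)) < 1e-6   # ratio scale, Weber law
assert sum(x) <= B + 1e-9        # total activity normalized: never exceeds B
```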
  • image p075fig02.24 The membrane equations of neurophysiology describe how cell voltages change in response to excitatory, inhibitory, and passive input channels. Each channel is described by a potential difference multiplied by a conductance. With the special choices shown in the lower right-hand corner, this equation defines a feedforward shunting on-center off-surround network.
    || Membrane equations of neurophysiology.
    C*dp[dt] = (V(+) - V)*g(+) +(V(-) - V)*g(-) +(V(p) - V)*g(p)
    Shunting equation (not additive)
    V Voltage
    V(+), V(-), V(p) Saturating voltages
    g(+), g(-), g(p) Conductances
    V(+) = B, C = 1; V(-) = V(p) = 0; g(+) = Ii; g(-) = sum[k≠i: Ik];
    Silent inhibition: V(-) = V(p) (the lower bound on V); V(+) is the upper bound. (Howell: see p068fig02.14 Grossberg's comment that Hodgkin&Huxley model was a "... Precursor of Shunting network model (Rall 1962) ...").
  • image p076fig02.25 An on-center off-surround network can respond to increasing on-center excitatory inputs without a loss of sensitivity. Instead, as the off-surround input increases, the region of a cell's maximal sensitivity to an increasing on-center input shifts to a range of larger inputs. This is because the off-surround divides the effect of the on-center input, an effect that is often called a Weber law.
    || Weber law, adaptation, and shift property (Grossberg 1963).
    Convert to logarithmic coordinates:
    K = ln(Ii), Ii = e^K, J = sum[k≠i: Ik]
    xi(K,J) = B*Ii/(A + Ii + J) = B*e^K/(A + e^K + J)
    x(K + S, J1) = x(K, J2), S = ln((A + J1)/(A + J2)) size of SHIFT.
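The shift property can be checked numerically (parameter values are illustrative):

```python
import math
# Hedged check of the shift property from the text: changing the background
# from J2 to J1 shifts the sensitivity curve in log-input coordinates by
# exactly S = ln((A+J1)/(A+J2)), with no compression.
A, B = 1.0, 10.0
def x(K, J):                     # activity as a function of K = ln(Ii)
    return B * math.exp(K) / (A + math.exp(K) + J)
J1, J2 = 20.0, 5.0
S = math.log((A + J1) / (A + J2))
for K in [-2.0, 0.0, 1.5, 3.0]:
    assert abs(x(K + S, J1) - x(K, J2)) < 1e-12
```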
  • image p076fig02.26 The mudpuppy retina exhibits the shift property that occurs in the feedforward shunting on-center off-surround network in Figure 2.25. As a result, its sensitivity also shifts in response to different background off-surrounds, and therefore exhibits no compression (dashed purple lines).
    || Mudpuppy retina neurophysiology.
    I center, J background
    a) Relative figure-to-ground
    b) Weber-Fechner I*(A + J)^(-1)
    c) No hyperpolarization, SHUNT: Silent inhibition
    d) Shift property (Werblin 1970) xi(K,J) vs K = ln(I)
    Adaptation- sensitivity shifts for different backgrounds. NO COMPRESSION.
  • image p077fig02.27 A schematic of the on-center off-surround network that occurs in the mudpuppy retina, including three main cell types: receptors, horizontal cells, and bipolar cells.
    || Mechanism: cooperative-competitive dynamics.
    On-center off-surround (Kuffler 1953) cat retina
    Subtractive lateral inhibition (Hartline, Ratcliff 1956/7+) limulus retina.
    R receptor -> H horizontal -> B bipolar (Werblin, Dowling, etal 1969+) mudpuppy retina.
  • image p081fig02.35 The equilibrium activities of a shunting network with Gaussian on-center off-surround kernels are sensitive to the ratio-contrasts of the input patterns that they process. The terms in the denominator of the equilibrium activities accomplish this using the shunting on-center and off-surround terms.
    || Ratio-contrast detector. flat versus [Gaussian Cki, flattened Gaussian? Eki]
    d[dt: xi] = -A*xi +(B - xi)*sum[k=1 to n: Ik*Cki] -(xi + D)*sum[k=1 to n: Ik*Eki]
    Cki = C*e^(-μ*(k - i)^2), Eki = E*e^(-μ*(k - i)^2)
    At equilibrium: xi = I*sum[k=1 to n: θk*Fki] / (A + I*sum[k=1 to n: θk*Gki])
    Fki = B*Cki - D*Eki (weighted difference of Gaussians, D.O.G.)
    Gki = Cki + Eki (sum of Gaussians, S.O.G.)
    • Reflectance processing
    • Contrast normalization
    • Discount illuminant
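A numeric sketch of the ratio-contrast detector. The kernel widths, the surround gain D, and the periodic (wrap-around) geometry are assumptions, chosen so that the noise-suppression inequality B*sum(Cki) <= D*sum(Eki) holds:

```python
import math
# Hedged sketch of the ratio-contrast detector: equilibrium activities
# xi = I*sum(k: theta_k*Fki) / (A + I*sum(k: theta_k*Gki)), with
# Fki = B*Cki - D*Eki and Gki = Cki + Eki.  Parameters are illustrative.
n, A, B, D = 21, 1.0, 1.0, 0.33
mu, nu = 0.5, 0.05                      # narrow on-center, broad off-surround
def dist(i, k):                         # wrap-around distance avoids edge effects
    return min(abs(i - k), n - abs(i - k))
Cw = [[math.exp(-mu * dist(i, k)**2) for k in range(n)] for i in range(n)]
Ew = [[math.exp(-nu * dist(i, k)**2) for k in range(n)] for i in range(n)]

def equilibrium(I_in):
    I = sum(I_in)
    theta = [Ii / I for Ii in I_in]     # reflectance pattern: illuminant discounted
    xs = []
    for i in range(n):
        num = I * sum(th * (B * Cw[i][k] - D * Ew[i][k]) for k, th in enumerate(theta))
        den = A + I * sum(th * (Cw[i][k] + Ew[i][k]) for k, th in enumerate(theta))
        xs.append(num / den)
    return xs

# The noise-suppression condition from the text holds for these parameters.
assert B * sum(Cw[0]) <= D * sum(Ew[0])
uniform = equilibrium([1.0] * n)        # uniform pattern: suppressed everywhere
assert max(uniform) <= 0.0
edge = [1.0] * (n // 2) + [5.0] * (n - n // 2)
assert max(equilibrium(edge)) > 0.0     # contrast at the contour is enhanced
```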
  • image p106fig03.24 In response to the Synthetic Aperture Radar image (upper left corner), a shunting on-center off-surround network "discounts the illuminant" and thereby normalizes cell activities to compute feature contours, without causing saturation (upper right corner). Multiple-scale boundaries form in response to spatially coherent activities in the feature contours (lower left corner) and create the webs, or containers, into which the feature contours fill in the final surface representations (lower right corner).
    || Do these ideas work on hard problems? SAR!
    input image | feature contours | boundary contours | filled-in surface
    Synthetic Aperture Radar: sees through weather. 5 orders of magnitude of power in radar return.
    discounting the illuminant:
    • normalizes the image: preserves RELATIVE activities without SATURATION
    • shows individual PIXELS
    boundaries complete between regions where normalized feature contrasts change; filling-in averages brightnesses within boundary compartments
  • image p176fig04.53 The on-center off-surround network within position and across depth helps to explain why brighter Kanizsa squares look closer.
    || inhibition vs. depth. p176c1h0.25 "... to qualitatively understand how this example of proximity-luminance covariance works. It follows directly from the boundary pruning by surface contour feedback signals (Figure 4.51) that achieves complementary consistency and initiates figure-ground perception. ...". p176c1h0.45 "... these inhibitory signals are part of an off-surround network whose strength decreases as the depth difference increases between the surface that generates the signal and its recipient boundaries. ...". p176c1h0.8 "... Within FACADE theory, the perceived depth of a surface is controlled by the boundaries that act as its filling-in generators and barriers (Figure 3.22), since these boundaries select the depth-selective FIDOs within which filling-in can occur, and thereby achieve surface capture. These boundaries, in turn, are themselves strengthened after surface-to-boundary contour feedback eliminates redundant boundaries that cannot support successful filling-in (Figure 4.51). These surface contour feedback signals have precisely the properties that are needed to explain why brighter Kanizsa squares look closer! ..."
  • image p192fig05.05 ON and OFF cells in the LGN respond differently to the sides and ends of lines.
    || [ON, OFF]-center, [OFF, ON]-surround (respectively). OFF-center cells maximum response at line end (interior), ON-center cells maximum response along sides (exterior)
  • image p253fig06.02 After bottom-up surface inputs activate spatial attentional cells, they send top-down topographic excitatory signals back to the surface representations. This recurrent shunting on-center off-surround network contrast enhances larger attentional activities while approximately normalizing the total spatial attentional activity. A surface-shroud resonance hereby forms that selects an attentional shroud, enhances the perceived contrast of the attended surface (light blue region), and maintains spatial attention on it.
    || Surface-shroud resonance. perceptual surfaces -> competition -> spatial attention. (Carrasco, Penpeci-Talgar, and Eckstein 2000, Reynolds and Desimone 2003)
  • image p300fig08.12 A single flash activates a Gaussian receptive field across space whose maximum is chosen by a winner-take-all recurrent on-center off-surround network.
    || Gaussian receptive fields are sufficient! (Grossberg, Rudd 1992). Single flash. Suppose that a single flash causes a narrow peak of activity at the position where it occurs. It generates output signals through a Gaussian filter that produces a Gaussian activity profile at the next processing stage. A recurrent on-center off-surround network chooses the maximum activity and suppresses smaller activities. Winner-take-all
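A minimal sketch of the two stages just described; grid size, filter width, and flash position are illustrative, and the recurrent winner-take-all network is modeled only by its outcome (an argmax choice):

```python
import math
# Hedged sketch of the single-flash stage in (Grossberg, Rudd 1992):
# a flash passes through a Gaussian filter, and a winner-take-all choice
# keeps only the maximum of the resulting activity profile.
n, sigma = 50, 3.0            # grid size and filter width are assumptions
flash_pos = 20
inputs = [1.0 if i == flash_pos else 0.0 for i in range(n)]
# Gaussian filtering of the flash produces a Gaussian activity profile.
profile = [sum(I * math.exp(-((i - j)**2) / (2 * sigma**2))
               for j, I in enumerate(inputs)) for i in range(n)]
# Winner-take-all: keep the maximum activity, suppress the rest.
winner = max(range(n), key=lambda i: profile[i])
wta = [profile[i] if i == winner else 0.0 for i in range(n)]
assert winner == flash_pos    # the chosen position is where the flash occurred
```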
  • image p315fig08.35 The output signals from the directional grouping network obey the ART Matching Rule. They thereby select consistent motion directional signals while suppressing inconsistent ones, and do not distort what the spared cells code. The aperture problem is hereby solved by the same mechanism that dynamically stabilizes the learning of directional grouping cells.
    || How to select correct direction and preserve speed estimates? Prediction: Feedback from MSTv to MT- obeys ART Matching Rule; Top-down, modulatory on-center, off-surround network (Grossberg 1976, 1980; Carpenter, Grossberg 1987, 1991); Explains how directional grouping network can stably develop and how top-down directional attention can work. (Cavanagh 1992; Goner etal 1986; Sekuler, Ball 1977; Stelmach etal 1994). Directional grouping network (MSTv) <-> Directional long-range filter (MT). Modulatory on-center selects chosen direction and preserves speed. Off-surround inhibits incompatible directions.
  • image p340fig09.07 Log polar remapping from the retina to cortical area V1 and beyond converts expansion, translation, and spiral flows on the retina into parallel flows, with different orientations, on the cortical map.
    || Log polar remapping of optic flow. retina -> cortex. Any combination of expansion and circular motion centered on the fovea maps to cortex as a single direction. Retinal Cartesian coordinates (x,y) map to cortical polar coordinates (r,theta). This makes it easy to compute directional receptive fields in the cortex!
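The single-direction claim is easy to verify numerically; the sample points and expansion rate below are illustrative:

```python
import math
# Hedged check that log-polar remapping (x, y) -> (ln r, theta) turns an
# expansion flow centered on the fovea into a single cortical direction.
scale = 1.1                                   # expansion: every radius grows 10%
points = [(r * math.cos(t), r * math.sin(t))
          for r in (0.5, 1.0, 2.0) for t in (0.3, 1.1, 2.5)]
flows = []                                    # cortical flow vectors (d ln r, d theta)
for x, y in points:
    r1, t1 = math.hypot(x, y), math.atan2(y, x)
    r2, t2 = math.hypot(scale * x, scale * y), math.atan2(scale * y, scale * x)
    flows.append((math.log(r2) - math.log(r1), t2 - t1))
# Every retinal position moves in the SAME cortical direction, by the same amount.
assert all(abs(dlr - math.log(scale)) < 1e-9 and abs(dth) < 1e-9
           for dlr, dth in flows)
```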
  • image p345fig09.15 Double opponent directional receptive fields in MT are capable of detecting the motion of objects relative to each other and their backgrounds.
    || Motion opponency in MT (Born, Tootell 1992). Motion opponent (Grossberg etal), Differential motion (Royden etal), Subtractive motion cells (Neumann etal). ON center directionally selective: [excit, inhibit]ed by motion in [one, opponent] direction. OFF surround directionally selective: [excit, inhibit]ed by motion in [opponent, center] direction.
  • image p354fig10.01 The laminar cortical circuit that realizes how we pay attention to an object sends signals from layer 6 of a higher cortical level to layer 6 of a lower cortical level and then back up to layer 4. This "folded feedback" circuit is a top-down, modulatory on-center, off-surround network that realizes the ART Matching Rule.
    || Top-down attention and folded feedback. Attentional signals also feed back into 6-to-4 on-center off-surround. 1-to-5-to-6 feedback path: Macaque (Lund, Booth 1975) cat (Gilbert, Wiesel 1979). V2-to-V1 feedback is on-center off-surround and affects layer 6 of V1 the most (Bullier etal 1996; Sandell, Schiller 1982). Attended stimuli enhanced, ignored stimuli suppressed. This circuit supports the predicted ART Matching Rule! [LGN, V[1,2][6->1]]
  • image p359fig10.06 Another, albeit indirect, pathway from LGN exists that can also excite layer 4 of V1. Why are these two pathways not redundant? The answer, ultimately, has to do with how cortex learns, as well as with how it pays attention. See the text for details.
    || Another bottom-up input to layer 4: Why?? Layer 6-to-4 on-center off-surround (Grieve, Sillito 1991, 1995; Ahmed etal 1994, 1997). LGN projects to layers 6 and 4. Layer 6 excites spiny stellates in column above it. Medium range connections onto inhibitory neurons. 6-to-4 path acts as on-center off-surround.
  • image p359fig10.07 The two bottom-up pathways from LGN to layer 4 of V1 can together activate layer 4 and contrast-normalize layer 4 responses.
    || Bottom-up contrast normalization (Grossberg 1968, 1973; Sperling, Sondhi 1968; Heeger 1992; Douglas etal 1995; Shapley etal 2004). Together, direct LGN-to-4 path and 6-to-4 on-center off-surround provide contrast normalization if cells obey shunting or membrane equation dynamics.
  • image p360fig10.08 The bottom-up on-center off-surround from LGN-to-6-to-4 has a modulatory on-center because of its role in realizing the ART Matching Rule and, with it, the ability of the cortex to dynamically stabilize its learned memories.
    || Modulation of priming by 6-to-4 on-center (Stratford etal 1996; Callaway 1998). On-center 6-to-4 excitation is inhibited down to being modulatory (priming, subthreshold). On-center 6-to-4 excitation cannot activate layer 4 on its own. Clarifies need for direct path. Prediction: plays key role in stable grouping, development and learning. ART Matching Rule!
  • image p362fig10.11 Feedback between layer 2/3 and the layer 6-to-4-to-2/3 feedback loop chooses the strongest grouping in cases where there is more than one. If only one grouping exists, then the circuit can function very quickly in a feedforward manner. When multiple groupings exist, the cortex "runs as fast as it can" to select the one with the most evidence to support it, using the self-normalizing inhibition in the layer 6-to-4 off-surround.
    || How is the final grouping selected? Folded feedback LGN-> 6-> 4-> 2/3. 1. Layer 2/3 groupings feed back into 6-to-4 on-center off-surround: a) direct layer 2/3 -to-6 path; b) can also go via layer 5 (Blasdel etal 1985; Kisvarday etal 1989). 2. Strongest grouping enhanced by its on-center. 3. Inputs to weaker groupings suppressed by off-surround. 4. Interlaminar feedback creates functional columns. Activities of conflicting groupings are reduced by self-normalizing inhibition, slowing processing; intracortical feedback selects and contrast-enhances the winning grouping, speeding processing.
  • image p364fig10.13 The bottom-up adaptive filter, intracortical grouping circuit, and intercortical top-down attentional circuit all use the same competitive decision circuit between layers 6 and 4, called the attention-preattention interface, with which to select the featural patterns that will be processed.
    || Bottom-up filters and intracortical grouping feedback use the same 6-to-4 decision circuit, LGN-> Vx[6,4,2/3]. competitive decision circuit, modulatory on-center off-surround network. Top-down intercortical attention also uses the same 6-to-4 decision circuit!
  • image p364fig10.14 This figure emphasizes how preattentive intracortical groupings and top-down intercortical attention share the same modulatory on-center, off-surround layer 6-to-4 decision circuit.
    || Explanation: grouping and attention share the same modulatory decision circuit. Layer 6-6-4-2/3 pathway shown; also a layer 6-1-2/3 path. Intercortical attention and intracortical feedback from groupings both act via a modulatory on-center off-surround decision circuit.
  • image p367fig10.15 Data (left column) and simulation (right column) of how attention prevents a masking stimulus from inhibiting the response to the on-center of the cell from which the recording was made.
    || Attention protects target from masking stimulus (Reynolds etal 1999; Grossberg, Raizada 2000).
  • image p375fig11.07 The 3D LAMINART model uses both monocular and binocular simple cells to binocularly fuse like image contrasts. The remainder of the model generates 3D boundary and surface representations of multiple kinds of experiments as well as of natural scenes.
    || 3D LAMINART model (Cao, Grossberg 2005). [left, right] eye cart [LGN, V1 blob [4->3B->2/3A], V2 thin stripe [4->2/3A], V4]. V1 blob [V1-4 monocular, V1 interior binocular] simple cells. [complex, simple, inhibitory] cells, on-center off-surround
  • image p376fig11.10 The 3D LAMINART model shows how the disparity filter can be integrated into the circuit that completes 3D boundary representations using bipole grouping cells. It also explains how surface contours can strengthen boundaries that succeed in generating closed filling-in domains.
    || 3D LAMINART model (Cao, Grossberg 2005). [left, right] eye cart [LGN, V1 blob [4->3B->2/3A] surface contour, V2 thin stripe (monocular surface) [4->2/3A], V2 interior [disynaptic inhibitory interneurons, bipole grouping cells, disparity filter, V4 binocular surface]. [complex, simple, inhibitory] cells, on-center off-surround
  • image p423fig12.20 The Spatial Pitch Network, or SPINET, model shows how a log polar spatial representation of the sound frequency spectrum can be derived from auditory signals occurring in time. The spatial representation allows the ARTSTREAM model to compute spatially distinct auditory streams.
    || SPINET model (Spatial Pitch Network) (Cohen, Grossberg, Wyse 1995). 1. input sound 2. Gamma-tone filter bank 3. Short-term average energy spectrum 4. MAP transfer function 5. On-center off-surround and rectification 6. Harmonic weighting 7. Harmonic summation and competition -> PITCH
  • image p448fig12.46 A Masking Field working memory is a multiple-scale self-similar recurrent shunting on-center off-surround network. It can learn list chunks that respond selectively to lists of item chunks of variable length that are stored in an item working memory at the previous processing stage. Chunks that code for longer lists (eg MY vs MYSELF) are larger, and give rise to stronger recurrent inhibitory neurons (red arrows).
    || How to code variable length lists? MASKING FIELDS code list chunks of variable length (Cohen, Grossberg 1986, 1987; Grossberg, Kazerounian 2011, 2016; Grossberg, Meyers 2000; Grossberg, Pearson 2008). Multiple-scale self-similar WM: Masking field, adaptive filter. Variable length coding- Masking fields select list chunks that are sensitive to WM sequences of variable length; Selectivity- Larger cells selectively code longer lists; Asymmetric competition- Larger cells can inhibit smaller cells more than conversely: Magic Number 7! Temporal order- different list chunks respond to the same items in different orders eg LEFT vs FELT.
  • image p564fig15.35 (a) A pair of recurrent shunting on-center off-surround networks for control of the fore limbs and hind limbs. (b) Varying the GO signal to these networks can trigger changes in movement gaits. See the text for details.
    ||
  • image p567fig15.38 (a) The Gated Pacemaker model for the control of circadian rhythms is a recurrent shunting on-center off-surround network whose excitatory feedback signals are gated by habituative transmitters. Tonic arousal signals energize the pacemaker. Diurnal (left) and nocturnal (right) pacemakers are determined by whether phasic light signals turn the pacemaker on or off. An activity-dependent fatigue signal prevents the pacemaker from becoming overly active for too long. (b) Two simulations of circadian activity cycles during different schedules of light (L) and dark (D). See the text for details.
    || sourceOn-> on-cells (recurrent) <-(-) (-)> off-cells (recurrent) <-sourceOff. on-cells-> activity-> off-cells. off-cells-> fatigue. Diurnal: sourceOn=[light, arousal]; sourceOff=arousal. Nocturnal: sourceOn=arousal; sourceOff=[arousal, light].
  • image p586fig16.16 In the place cell learning model of (Gorchetnikov, Grossberg 2007), three populations of five cells each of entorhinal grid cells (only two are shown) with different spatial periods input to the model's dentate gyrus. The grid cells are one-dimensional and defined algorithmically. A model dentate gyrus granule cell that receives strong projections from all three grid cell scales fires (green cell) and activates a recurrent inhibitory interneuron that inhibits other granule cells. It also generates back-propagating action potentials that trigger learning in the adaptive weights of the projections from the grid cells, thereby causing learning of place cell receptive fields.
    || Grid-to-place Self-Organizing map (Gorchetnikov, Grossberg 2007). Formation of place cell fields via grid-to-place cell learning. Least common multiple: [grid (cm), place (m)] scales: [40, 50, 60 (cm); 6m], [50, 60, 70 (cm); 21m], [41, 53, 59 (cm); 1.282 km]. Our simulations: [40, 50 (cm); 2m], [44, 52 (cm); 5.72m]. Our SOM: Spiking Hodgkin-Huxley membrane equations; Nonlinear choice by contrast-enhancing recurrent on-center off-surround net;. Choice triggers back-propagating action potentials that induce STDP-modulated learning on cell dendrites.
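A minimal sketch of why multiple grid scales yield sparse place fields (idealized 1D cosine-bump grid cells, matching the caption's algorithmic setup; the firing-rate form is my assumption): summed grid input to a candidate granule cell peaks only where the spatial periods align, i.e. at their least common multiple.

```python
import math

def grid_activity(x_cm, period_cm):
    # Idealized 1D grid cell: cosine bump peaking at every multiple of its period.
    return 0.5 * (1.0 + math.cos(2.0 * math.pi * x_cm / period_cm))

def place_drive(x_cm, periods_cm=(40.0, 50.0)):
    # Summed grid input to a candidate place cell; maximal only where all
    # scales peak together, at multiples of the least common multiple.
    return sum(grid_activity(x_cm, p) for p in periods_cm)

# Periods 40 and 50 cm align every 200 cm = 2 m, matching the caption's
# [40, 50 (cm); 2m] entry; intermediate positions drive the cell less.
```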
  • image p627tbl17.01 Homologs between reaction-diffusion and recurrent shunting cellular network models of development.
    || byRows: (reaction-diffusion, recurrent shunting net) (activator, excitatory activity) (inhibitor, inhibitory activity) (morphogenic source density, inputs) (firing of morphogen gradient, contrast enhancement) (maintenance of morphogen gradient, short-term memory) (power or sigmoidal signal functions, power or sigmoidal signal functions) (on-center off-surround interactions via diffusion, on-center off-surround interactions via signals) (self-stabilizing distributions of morphogens if inhibitors equilibrate rapidly, short-term memory pattern if inhibitors equilibrate rapidly) (periodic pulses if inhibitors equilibrate slowly, periodic pulses if inhibitors equilibrate slowly) (regulation, adaptation).
  • image p016fig01.11 A sufficiently big mismatch between a bottom-up input pattern and a top-down expectation can activate the orienting system, which triggers a burst of nonspecific arousal that can reset the recognition category that read out the expectation. In this way, unexpected events can reset short-term memory and initiate a search for a category that better represents the current situation.
    || [category- top-down (TD) expectation; Bottom-up (BU) input pattern] -> Feature pattern -> BU-TD mismatch -> orienting system -> non-specific arousal -> category.
  • image p038fig01.25 The ART Matching Rule stabilizes real time learning using a [top-down, modulatory on-center, off-surround] network. Object attention is realized by such a network. See text for additional discussion.
    || ART Matching Rule [volition, categories, features]. [one, two] against one.
  • image p052fig02.02 Feature-category resonances enable us to rapidly learn how to recognize objects without experiencing catastrophic forgetting. Attentive matching between bottom-up feature pattern inputs and top-down expectations prevents catastrophic forgetting by focussing object attention upon expected patterns of features, while suppressing outlier features that might otherwise have caused catastrophic forgetting if they were also learned.
    || Adaptive Resonance. Attended feature clusters reactivate bottom-up pathways. Activated categories reactivate their top-down pathways. Categories STM, Feature patterns STM. Feature-Category resonance [synchronize, amplify, prolong]s system response. Resonance triggers learning in bottom-up and top-down adaptive weights: adaptive resonance!
  • image p078fig02.31 How noise suppression enables matching of bottom-up and top-down input patterns.
    || Noise suppression -> pattern matching. mismatch (out of phase) suppressed, match (in phase) amplifies pattern.
  • image p079fig02.32 Matching amplifies the matched pattern due to automatic gain control. See terms I and J in the equation.
    || Substrate of resonance. Match (in phase) of BU and TD input patterns AMPLIFIES matched pattern due to automatic gain control by shunting terms. J = sum[i: Ji], I = sum[i: Ii], θi = (Ii + Ji)/(I + J)
    xi = (B + C)*(I + J)/(A + I + J)*[θi - C/(B + C)]
    Need top-down expectations to be MODULATORY.
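A quick numeric check of terms I and J (my sketch; the parameter values A, B, C are chosen arbitrarily for illustration): adding a matched, in-phase top-down pattern J leaves the relative pattern θi unchanged but raises activities through the shunting gain (I + J)/(A + I + J).

```python
def matched_activity(I_pat, J_pat, A=1.0, B=1.0, C=0.2):
    # theta_i = (I_i + J_i)/(I + J)
    # x_i = (B + C)*(I + J)/(A + I + J) * (theta_i - C/(B + C))
    I, J = sum(I_pat), sum(J_pat)
    return [(B + C) * (I + J) / (A + I + J) * ((Ii + Ji) / (I + J) - C / (B + C))
            for Ii, Ji in zip(I_pat, J_pat)]

bottom_up_only = matched_activity([0.8, 0.2], [0.0, 0.0])
with_match     = matched_activity([0.8, 0.2], [0.8, 0.2])  # TD matches BU
# The matched (in phase) case amplifies the favored cell's activity, because
# the gain term (I + J)/(A + I + J) grows with the total matched input.
```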
  • image p087fig03.01 A macrocircuit of key visual processes (in green) and the cortical areas in which they primarily occur (in red), from the retina to the Prefrontal Cortex (PFC), including both the What and Where cortical streams. The [bottom-up, horizontal, and top-down] interactions help each of these processes to overcome computationally complementary processing deficiencies that they would experience without them, and also to read out top-down expectations that help to stabilize learning while they focus attention on salient objects and positions.
    || Emerging unified theory of visual intelligence. [What, Where] streams. Bottom-up and top-down interactions overcome COMPLEMENTARY processing deficiencies.
  • image p091fig03.04 A cross-section of the eye, and a top-down view of the retina, show how the blind spot and retinal veins can occlude the registration of light signals at their positions on the retina.
    || Eye: [optic nerve, ciliary body, iris, lens, pupil, cornea, sclera, choroid, retina]. Human retina: [fovea, blind spot, optic nerve]. See also cross-section of retinal layer.
  • image p163fig04.39 A schematic of the LAMINART model that explains key aspects of laminar visual cortical anatomy and dynamics. LGN -> V1 [6, 4, 2/3] -> V2 [6, 4, 2/3]
    || p163c1h0.6 "... The first article about laminar computing ... proposed how the laminar cortical model could process 2D pictures using bottom-up filtering and horizontal bipole grouping interactions (Grossberg, Mingolla, Ross 1997). In 1999, I was able to extend the model to also include top-down circuits for expectation and attention (Grossberg 1999)(right panel). Such a synthesis of laminar bottom-up, horizontal, and top-down circuits is characteristic of the cerebral cortex (left panel). I called it LAMINART because it began to show how properties of Adaptive Resonance Theory, or ART, notably the ART prediction about how top-down expectations and attention work, are realized by identical cortical cells and circuits. You can immediately see from the schematic laminar circuit diagram ... (right panel) that circuits in V2 seem to repeat circuits in V1, albeit with a larger spatial scale, despite the fact that V1 and V2 carry out different functions. How this anatomical similarity can coexist with functional diversity will be clarified in subsequent sections and chapters. It enables different kinds of biological intelligence to communicate seamlessly while carrying out their different psychological functions. ..."
  • image p192fig05.06 Bottom-up and top-down circuits between the LGN and cortical area V1. The top-down circuits obey the ART Matching Rule for matching with bottom-up input patterns and focussing attention on expected critical features.
    || Model V1-LGN circuits, version [1, 2]. retina -> LGN relay cells -> interneurons -> cortex [simple, endstopped] cells -> cortex complex cells
  • image p193fig05.08 The patterns of LGN activation and inhibition on the sides and ends of a line without the top-down feedback (A) and with it (C). The top-down distribution of excitation (+) and inhibition (-) are shown in (B).
    ||
  • image p199fig05.11 Instar learning enables a bottom-up adaptive filter to become selectively tuned to particular feature patterns. Such pattern learning needs adaptive weights that can either increase or decrease to match the featural activations that they filter.
    || Instar learning STM->LTM: need both increases and decreases in strength for the LTM pattern to learn the STM pattern
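A minimal instar-rule sketch (my illustration only; the learning rate and patterns are invented) showing exactly the property the caption requires: when the category cell is active, each adaptive weight tracks the current feature activity, so weights can both rise and fall toward the STM pattern.

```python
def instar_step(w, x, y_j, rate=0.5):
    # Gated learning: when category cell j is active (y_j > 0), each weight
    # w_ij moves toward the feature activity x_i; with y_j = 0 nothing changes.
    return [wi + rate * y_j * (xi - wi) for wi, xi in zip(w, x)]

w = [0.0, 1.0, 0.5]         # initial LTM trace
target = [1.0, 0.0, 0.5]    # STM feature pattern being filtered
for _ in range(30):
    w = instar_step(w, target, y_j=1.0)
# w has converged to the STM pattern: the first weight rose, the second fell.
```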
  • image p200fig05.13 Instar and outstar learning are often used to learn the adaptive weights in the bottom-up filters and top-down expectations that occur in ART. The ART Matching Rule for object attention enables top-down expectations to select, amplify, and synchronize expected patterns of critical features, while suppressing unexpected features.
    || Expectations focus attention: feature pattern (STM), Bottom-Up adaptive filter (LTM), Category (STM), competition, Top-Down expectation (LTM); ART Matching Rule: STM before top-down matching, STM after top-down matching (attention!)
  • image p211fig05.20 The PN and N200 event-related potentials are computationally complementary events that are computed within the attentional and orienting systems.
    || PN and N200 are complementary waves. PN [top-down, conditionable, specific] match; N200 [bottom-up, unconditionable, nonspecific] mismatch
  • image p214fig05.24 Learning of a top-down expectation must occur during bottom-up learning in the adaptive filter in order to be able to match the previously associated feature pattern with the one that is currently active.
    || Learning top-down expectations. When the code (green right triangle GRT) for X1 was learned at F2, GRT learned to read-out X1 at F1. [Bottom-Up, Top-Down] learning
  • image p214fig05.25 The sequence of events whereby a novel input pattern can activate a category which, in turn, reads out its learned top-down expectation to be matched against the input pattern. Error correction thus requires the use of a Match Detector that has properties of the Processing Negativity ERP.
    || How is an error corrected? During bottom-up learning, top-down learning must also occur so that the pattern that is read out top-down can be compared with the pattern that is activated by bottom-up inputs. Match detector: Processing Negativity ERP. 1. top-down, 2. conditionable, 3. specific, 4. match
  • image p214fig05.26 When a big enough mismatch occurs, the orienting system is activated and sends a burst of nonspecific arousal to the category level. This Mismatch Detector has properties of the N200 ERP.
    || Mismatch triggers nonspecific arousal. Mismatch at F1 elicits a nonspecific event at F2. Call this event nonspecific arousal. N200 ERP Naatanen etal: 1. bottom-up, 2. unconditionable, 3. nonspecific, 4. mismatch
  • image p215fig05.28 How a mismatch between bottom-up and top-down input patterns can trigger activation of the orienting system A and, with it, a burst of nonspecific arousal to the category level.
    || Mismatch -> inhibition -> arousal -> reset. BU input orienting arousal, BU+TD mismatch arousal and reset. ART Matching Rule: TD mismatch can suppress a part of F1 STM pattern, F2 is reset if degree of match < vigilance
  • image p220fig05.29 Vigilance is a gain parameter on inputs to the orienting system that regulates whether net excitation from bottom-up inputs or inhibition from activated categories will dominate the orienting system. If excitation wins, then a memory search for a better-matching category will occur. If inhibition wins, then the orienting system will remain quiet, thereby enabling resonance and learning to occur.
    || Vigilance control [resonate and learn, reset and search]. ρ is a sensitivity or gain parameter
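The vigilance test can be sketched with the fuzzy-AND match ratio used in the ART literature (the specific input and weight vectors here are invented for illustration): ρ sets the threshold deciding whether inhibition from the matched category quiets the orienting system.

```python
def vigilance_test(I, w, rho):
    # Match ratio |I AND w| / |I|, with AND as component-wise min.
    match = sum(min(ii, wi) for ii, wi in zip(I, w)) / sum(I)
    return match >= rho   # True: resonate and learn; False: reset and search

I = [1.0, 1.0, 0.0, 0.0]    # bottom-up input pattern
w = [1.0, 0.0, 0.0, 0.0]    # category prototype; match ratio = 0.5
low_vigilance  = vigilance_test(I, w, rho=0.4)   # inhibition wins: resonance
high_vigilance = vigilance_test(I, w, rho=0.6)   # excitation wins: reset, search
```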
  • image p221fig05.30 When a predictive disconfirmation occurs, vigilance increases enough to drive a search for a more predictive category. If vigilance increases just enough to exceed the analog match between features that survive top-down matching and the entire bottom-up input pattern, then minimax learning occurs. In this case, the minimum amount of category generalization is given up to correct the predictive error.
    || Match tracking realizes minimax learning principle. Given a predictive error, vigilance increases just enough to trigger search and thus sacrifices the minimum generalization to correct the error ... and enables expert knowledge to be incrementally learned. predictive error -> vigilance increase just enough -> minimax learning
  • image p221fig05.31 A system like Fuzzy ARTMAP can learn to associate learned categories in one ART network with learned categories in a second ART network. Because both bottom-up and top-down interactions occur in both networks, a bottom-up input pattern to the first ART network can learn to generate a top-down output pattern from the second ART network.
    || Fuzzy ARTMAP. Match tracking realizes minimax learning principle: vigilance increases to just above the match ratio of prototype / exemplar, thereby triggering search
  • image p226fig05.35 I had shown in 1976 how a competitive learning or self-organizing map model could undergo catastrophic forgetting if the input environment was sufficiently dense and nonstationary, as illustrated by Figure 5.18. Later work with Gail Carpenter showed how, if the ART Matching Rule was shut off, repeating just four input patterns in the correct order could also cause catastrophic forgetting by causing superset recoding, as illustrated in Figure 5.36.
    || Code instability input sequences. D ⊂ C ⊂ A; B ⊂ A; B ∩ C = ∅; |D| < |B| < |C|; where |E| is the number of features in the set E. Any set of input vectors that satisfies the above conditions will lead to unstable coding if the vectors are periodically presented in the order ABCAD and the top-down ART Matching Rule is shut off.
  • image p246fig05.48 Microcircuits of the LAMINART model that I developed with Rajeev Raizada. See the text for details of how they integrate bottom-up adaptive filtering, horizontal bipole grouping, and top-down attentional matching that satisfies the ART Matching Rule.
    ||
  • image p252fig06.01 A surface-shroud resonance begins to form when the surface representations of objects bid for spatial attention. In addition to these topographic excitatory inputs, there is long-range inhibition of the spatial attention cells that determines which inputs will attract spatial attention.
    || Bottom-up spatial attention competition. [more, less] luminous perceptual surfaces -> competition -> spatial attention
  • image p253fig06.02 After bottom-up surface inputs activate spatial attentional cells, they send top-down topographic excitatory signals back to the surface representations. This recurrent shunting on-center off-surround network contrast enhances larger attentional activities while approximately normalizing the total spatial attentional activity. A surface-shroud resonance hereby forms that selects an attentional shroud, enhances the perceived contrast of the attended surface (light blue region), and maintains spatial attention on it.
    || Surface-shroud resonance. perceptual surfaces -> competition -> spatial attention. (Carrasco, Penpeci-Talgar, and Eckstein 2000, Reynolds and Desimone 2003)
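The contrast-enhancement step can be sketched as a tiny recurrent shunting on-center off-surround simulation (the squared signal function and all parameter values are my assumptions, not the model's published ones): the largest attentional bid is enhanced and stored while the off-surround quenches the others.

```python
def step(x, dt=0.01, A=0.1, B=1.0):
    # dx_i/dt = -A*x_i + (B - x_i)*f(x_i) - x_i * sum(f(x_k) for k != i),
    # with faster-than-linear signal f(x) = x^2, which contrast-enhances
    # the largest activity (Grossberg 1973 style winner-take-all choice).
    f = [xi * xi for xi in x]
    total = sum(f)
    return [xi + dt * (-A * xi + (B - xi) * fi - xi * (total - fi))
            for xi, fi in zip(x, f)]

x = [0.30, 0.35, 0.25]      # competing attentional bids from surfaces
for _ in range(20000):
    x = step(x)
# The largest initial bid wins and is stored in short-term memory, bounded
# above by B; the smaller bids are suppressed toward zero.
```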
  • image p258fig06.07 A top-down spotlight of attention can also be converted into a shroud. This process begins when the spotlight triggers surface filling-in within a region. Figure 6.8 shows how it is completed.
    || Reconciling spotlights and shrouds: top-down attentional spotlight becomes a shroud. spotlight of attention, surface filling-in
  • image p286fig07.04 Illusory contours persist longer than real contours because real contours have more inducers whose rebound at contour offset can cause faster boundary reset. Illusory contours also take longer to form than real contours, which explains the increasing portion of the curve.
    || Persistence data and simulations (Meyer, Ming 1988; Reynolds 1981). Increasing portion of curve is due to formation time of the illusory contour. Longer persistence is due to fewer bottom-up inducers of an illusory contour that has the same length as a real contour: only illuminance-derived edges generate reset signals. When bottom-up inducers are inhibited by OFF cell rebounds, their offset gradually propagates to the center of the illusory contour.
  • image p286fig07.05 This figure shows the propagation through time of illusory contour offset from the rebounded cells that got direct inputs to the center of the contour.
    || Persistence data and simulations. Illusory contours persist longer than real contours (Meyer, Ming 1988; Reynolds 1981). When bottom-up inducers are inhibited by OFF cell rebounds, their offset gradually propagates to the center of the illusory contour.
  • image p315fig08.35 The output signals from the directional grouping network obey the ART Matching Rule. They thereby select consistent motion directional signals while suppressing inconsistent ones, and do not distort the speed estimates that the spared cells code. The aperture problem is hereby solved by the same mechanism that dynamically stabilizes the learning of directional grouping cells.
    || How to select correct direction and preserve speed estimates? Prediction: Feedback from MSTv to MT- obeys ART Matching Rule; Top-down, modulatory on-center, off-surround network (Grossberg 1976, 1980; Carpenter, Grossberg 1987, 1991); Explains how directional grouping network can stably develop and how top-down directional attention can work. (Cavanagh 1992; Goner etal 1986; Sekuler, Ball 1977; Stelmach etal 1994). Directional grouping network (MSTv) <-> Directional long-range filter (MT). Modulatory on-center selects chosen direction and preserves speed. Off-surround inhibits incompatible directions.
  • image p330fig08.52 Direction fields of the object frame (left column) and of the two dot "parts" (right column) show the correct motion directions after the peak shift top-down expectation acts.
    || Simulation of motion vector decomposition. [Larger scale (nearer depth), Small scale (farther depth)] vs [Down, Up]
  • image p331fig08.54 The simulated part directions of the rotating dot through time after the translational motion of the frame does its work via the top-down peak shift mechanism.
    || Cycloid. Motion directions of a single dot moving slowly along a cycloid curve through time.
  • image p354fig10.01 The laminar cortical circuit that realizes how we pay attention to an object sends signals from layer 6 of a higher cortical level to layer 6 of a lower cortical level and then back up to layer 4. This "folded feedback" circuit realizes a top-down, modulatory on-center, off-surround circuit that realizes the ART Matching Rule.
    || Top-down attention and folded feedback. Attentional signals also feed back into 6-to-4 on-center off-surround. 1-to-5-to-6 feedback path: Macaque (Lund, Booth 1975) cat (Gilbert, Wiesel 1979). V2-to-V1 feedback is on-center off-surround and affects layer 6 of V1 the most (Bullier etal 1996; Sandell, Schiller 1982). Attended stimuli enhanced, ignored stimuli suppressed. This circuit supports the predicted ART Matching Rule! [LGN, V[1,2][6->1]]
  • image p356fig10.03 Laminar computing achieves at least three basic properties of visual processing that have analogs in all biologically intelligent behaviors. These properties may be found in all cortical circuits in specialized form.
    || What does Laminar Computing achieve? 1. Self-stabilizing development and learning; 2. Seamless fusion of a) pre-attentive automatic bottom-up processing, b) attentive task-selective top-down processing; 3. Analog coherence: Solution of Binding Problem for perceptual grouping without loss of analog sensitivity. Even the earliest visual cortical stages carry out active adaptive information processing: [learn, group, attention]ing
  • image p359fig10.05 Activation of V1 is initiated, in part, by direct excitatory signals from the LGN to layer 4 of V1.
    || How are layer 2/3 bipole cells activated? Direct bottom-up activation of layer 4. LGN -> V1 layer 4. Strong bottom-up LGN input to layer 4 (Stratford etal 1996; Chung, Ferster 1998). Many details omitted.
  • image p359fig10.06 Another, albeit indirect, pathway from LGN exists that can also excite layer 4 of V1. Why aren't these two pathways redundant? The answer, ultimately, has to do with how cortex learns, as well as with how it pays attention. See the text for details.
    || Another bottom-up input to layer 4: Why?? Layer 6-to-4 on-center off-surround (Grieve, Sillito 1991, 1995; Ahmed etal 1994, 1997). LGN projects to layers 6 and 4. Layer 6 excites spiny stellates in column above it. Medium range connections onto inhibitory neurons. 6-to-4 path acts as on-center off-surround.
  • image p359fig10.07 The two bottom-up pathways from LGN to layer 4 of V1 can together activate layer 4 and contrast-normalize layer 4 responses.
    || Bottom-up contrast normalization (Grossberg 1968, 1973; Sperling, Sondhi 1968; Heeger 1992; Douglas etal 1995; Shapley etal 2004). Together, direct LGN-to-4 path and 6-to-4 on-center off-surround provide contrast normalization if cells obey shunting or membrane equation dynamics.
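    Howell: the contrast-normalization claim can be checked with a minimal sketch. The equilibrium of a shunting on-center off-surround (membrane equation) network is x_i = B*I_i/(A + sum_k I_k); parameter values below are illustrative, not from the book.

```python
# Equilibrium of a shunting on-center off-surround network (membrane
# equation dynamics): x_i = B * I_i / (A + sum_k I_k).
# A = passive decay rate, B = excitatory saturation level (illustrative values).

def shunting_equilibrium(inputs, A=1.0, B=1.0):
    """Steady-state activities: relative pattern preserved, total bounded by B."""
    total = sum(inputs)
    return [B * I / (A + total) for I in inputs]

weak   = shunting_equilibrium([1, 2, 1])     # low-contrast input pattern
strong = shunting_equilibrium([10, 20, 10])  # same pattern, 10x intensity
# Input ratios are preserved (contrast normalization): strong[1]/strong[0] == 2,
# while total activity stays below B, so strong inputs never saturate the cells.
```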
  • image p360fig10.08 The bottom-up on-center off-surround from LGN-to-6-to-4 has a modulatory on-center because of its role in realizing the ART Matching Rule and, with it, the ability of the cortex to dynamically stabilize its learned memories.
    || Modulation of priming by 6-to-4 on-center (Stratford etal 1996; Callaway 1998). On-center 6-to-4 excitation is inhibited down to being modulatory (priming, subthreshold). On-center 6-to-4 excitation cannot activate layer 4 on its own. Clarifies need for direct path. Prediction: plays key role in stable grouping, development and learning. ART Matching Rule!
  • image p364fig10.13 The bottom-up adaptive filter, intracortical grouping circuit, and intercortical top-down attentional circuit all use the same competitive decision circuit between layers 6 and 4, called the attention-preattention interface, with which to select the featural patterns that will be processed.
    || Bottom-up filters and intracortical grouping feedback use the same 6-to-4 decision circuit, LGN-> Vx[6,4,2/3]. competitive decision circuit, modulatory on-center off-surround network. Top-down intercortical attention also uses the same 6-to-4 decision circuit!
  • image p364fig10.14 This figure emphasizes how preattentive intracortical groupings and top-down intercortical attention share the same modulatory on-center, off-surround layer 4-to-6 decision circuit.
    || Explanation: grouping and attention share the same modulatory decision circuit. Layer 6-6-4-2/3 pathway shown; also a layer 6-1-2/3 path. intercortical attention, both act via a modulatory on-center off-surround decision circuit, intracortical feedback from groupings
  • image p420fig12.18 The ARTSTREAM model explains and simulates the auditory continuity illusion as an example of a spectral-pitch resonance. Interactions of ART Matching Rule and asymmetric competition mechanisms in cortical strip maps explain how the tone selects the consistent frequency from the noise in its own stream while separating the rest of the noise into another stream.
    || ARTSTREAM model (Grossberg 1999; Grossberg, Govindarajan, Wyse, Cohen 2004). SPINET. Frequency and pitch strips. Bottom Up (BU) harmonic sieve. Top Down (TD) harmonic ART matching. Exclusive allocation. Learn pitch categories based on early harmonic processing. A stream is a Spectral-Pitch Resonance!
  • image p441fig12.38 The LTM Invariance Principle is realized if the relative sizes of the inputs to the list chunk level stay the same as more items are stored in working memory. This property, in turn, follows from shunting previously stored working memory activities when a new item occurs.
    || LTM Invariance principle. Choose STM activities so that newly stored STM activities may alter the size of old STM activities without recoding their LTM patterns. In particular: New events do not change the relative activities of past event sequences, but may reduce their absolute activities. Why? Bottom-up adaptive filtering uses dot products: T(j) = sum[i=1 to n: x(i)*z(i,j)] = total input to v(j). The relative sizes of inputs to coding nodes v(j) are preserved. x(i) -> w*x(i), 0 < w <= 1, leaves all past ratios T(j)/T(k) unchanged.
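    Howell: the ratio-preservation argument above is just linearity of the dot product; a small sketch, where the STM activities x and weights Z are made-up illustrative numbers, not values from the book:

```python
# LTM Invariance Principle sketch: chunk inputs are dot products
# T_j = sum_i x_i * z_ij, so rescaling all STM activities x_i by a common
# shunting factor w (0 < w <= 1) leaves every ratio T_j / T_k unchanged.

def chunk_input(x, z_col):
    """Total bottom-up input T_j to list chunk v_j."""
    return sum(xi * zij for xi, zij in zip(x, z_col))

x = [0.9, 0.6, 0.3]            # STM activities of items already stored
Z = [[0.5, 0.1],               # adaptive weights z_ij to chunks j = 0, 1
     [0.2, 0.7],
     [0.4, 0.3]]

T  = [chunk_input(x, [row[j] for row in Z]) for j in range(2)]
w  = 0.5                       # shunting renormalization when a new item arrives
Tw = [chunk_input([w * xi for xi in x], [row[j] for row in Z]) for j in range(2)]
# Tw[j] == w * T[j] for every j, so Tw[0]/Tw[1] == T[0]/T[1]:
# absolute activities shrink, but the relative code is untouched.
```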
  • image p449fig12.47 This figure illustrates the self-similarity in a Masking Field of both its recurrent inhibitory connections (red arrows) and its top-down excitatory priming signals (green arrows) to the item chunk working memory.
    || Both recurrent inhibition and top-down excitatory priming are self-similar in a masking field. MYSELF <-> [MY, MYSELF]
  • image p453fig12.50 The ARTWORD perception cycle shows how sequences of items activate possible list chunks, which compete among each other and begin to send their top-down expectations back to the item working memory. An item-list resonance develops through time as a result.
    || ARTWORD perception cycle. (a) bottom-up activation (b) list chunk competition (c) item-list resonance (d) chunk reset due to habituative collapse.
  • image p483fig13.02 The object-value categories in the orbitofrontal cortex require converging specific inputs from the sensory cortex and nonspecific incentive motivational inputs from the amygdala in order to fire. When the orbitofrontal cortex fires, it can deliver top-down ART Matching Rule priming signals to the sensory cortical area by which it was activated, thereby helping to choose the active recognition categories there that have the most emotional support, while suppressing others, leading to attentional blocking of irrelevant cues.
    || Cognitive-Emotional-Motor (CogEM) model. Drive-> amygdala incentive motivational learning-> orbitofrontal cortex- need converging cue and incentive inputs to fire <-> sensory cortex- conditioned reinforcer learning-> amygdala. CS-> sensory cortex. Motivated attention closes the cognitive-emotional feedback loop, focuses on relevant cues, and causes blocking of irrelevant cues.
  • image p484fig13.04 The top-down feedback from the orbitofrontal cortex closes a feedback loop that supports a cognitive-emotional resonance. If this resonance can be sustained long enough, it enables us to have feelings at the same time that we experience the categories that caused them.
    || Cognitive-Emotional resonance. Basis of "core consciousness" and "the feeling of what happens". (Damasio 1999) derives heuristic version of CogEM model from his clinical data. Drive-> amygdala-> prefrontal cortex-> sensory cortex, resonance around the latter 3. How is this resonance maintained long enough to become conscious?
  • image p523fig14.03 (a) The MOTIVATOR neural model generalizes CogEM by also including the basal ganglia. It can hereby explain and simulate complementary functions of the amygdala and basal ganglia (SNc) during conditioning and learned performance. The basal ganglia generate Now Print signals in response to unexpected rewards. These signals modulate learning of new associations in many brain regions. The amygdala supports motivated attention to trigger actions that are expected to occur in response to conditioned or unconditioned stimuli. Object Categories represent visual or gustatory inputs in anterior inferotemporal (ITA) and rhinal (RHIN) cortices, respectively. Value Categories represent the value of anticipated outcomes on the basis of hunger and satiety inputs, in amygdala (AMYG) and lateral hypothalamus (LH). Object-Value Categories resolve the value of competing perceptual stimuli in medial (MORB) and lateral (ORB) orbitofrontal cortex. The Reward Expectation Filter detects the omission or delivery of rewards using a circuit that spans ventral striatum (VS), ventral pallidum (VP), striosomal delay (SD) cells in the ventral striatum, the pedunculopontine nucleus (PPTN) and midbrain dopaminergic neurons of the substantia nigra pars compacta/ventral tegmental area (SNc/VTA). The circuit that processes CS-related visual information (ITA, AMYG, ORB) operates in parallel with a circuit that processes US-related visual and gustatory information (RHIN, AMYG, MORB). (b) Reciprocal adaptive connections between hypothalamus and amygdala enable amygdala cells to become learned value categories. The bottom region represents hypothalamic cells, which receive converging taste and metabolite inputs whereby they become taste-drive cells. Bottom-up signals from activity patterns across these cells activate competing value category, or US Value Representations, in the amygdala.
A winning value category learns to respond selectively to specific combinations of taste-drive activity patterns and sends adaptive top-down priming signals back to the taste-drive cells that activated it. CS-activated conditioned reinforcer signals are also associatively linked to value categories. Adaptive connections end in (approximately) hemidiscs. See the text for details.
    ||
  • image p600fig16.36 The entorhinal-hippocampal system has properties of an ART spatial category learning system, with hippocampal place cells as the spatial categories. See the text for details.
    || Entorhinal-hippocampal interactions as an ART system. Hippocampal place cells as spatial categories. Angular head velocity-> head direction cells-> stripe cells- small scale 1D periodic code (ECIII) SOM-> grid cells- small scale 2D periodic code (ECII) SOM-> place cells- larger scale spatial map (DG/CA3)-> place cells (CA1)-> conjunctive-coding cells (EC V/VI)-> top-down feedback back to stripe cells- small scale 1D periodic code (ECIII). stripe cells- small scale 1D periodic code (ECIII)-> place cells (CA1).
  • image p613fig16.44 The main target position vector (TPV), difference vector (DV), and volitional GO computations in SOVEREIGN that bring together reactive and planned signals to control decision-making and action. See the text for details.
    || Reactive visual TPV (RVT), NETs (NETs), S-MV mismatch (SMVM), NETmv (NETmv), reactive visual TPV storage (RVTS), reactive DV1 (RD1), NET (NET), motivated what and where decisions (MWWD), Planned DV1 (PD1), tonic (Tonic), top-down readout mismatch (TDRM), Parvo gate (tonic) (PG), Orienting GOp offset (OGpO). RVT-> [NETs, RVTS], NETs-> [SMVM, NET], SMVM-> NET, NETmv-> SMVM, RVTS-> [NETs, RD1], NET-> [RD1, PD1, TDRM], MWWD-> PD1, PD1-> Tonic-> TDRMPG-> NETs, OGpO-> [NETmv, PD1].
  • Grossberg 2021 p229c2h0.60 SMART computer simulations demonstrate that a good enough match of a top-down expectation with a bottom-up feature pattern generates an attentive resonance during which the spikes of active cells synchronize in the gamma frequency range of 20-70 Hz (Figure 5.40). Many labs have reported a link between attention and gamma oscillations in the brain, including two articles published in 2001, one from the laboratory of Robert Desimone when he was at the National Institute of Mental Health in Bethesda (Fries, Reynolds, Rorie, Desimone 2001), and the other from the laboratory of Wolf Singer in Frankfurt (Engel, Fries, Singer 2001). You'll note that Pascal Fries participated in both studies, and is an acknowledged leader in neurobiological studies of gamma oscillations; eg (Fries 2009). ..."
  • Grossberg 2021 p081c2h0.66 sub-section: How does one evolve a computational brain?
    The above discussion illustrates that no single step of theoretical derivation can derive a whole brain. One needs a method for deriving a brain in stages, or cycles, much as evolution has incrementally discovered ever more complex brains over many thousands of years. The following theoretical method has been successfully applied many times since I first used it in 1957. It embodies a kind of conceptual evolutionary process for deriving a brain.

    Because "brain evolution needs to achieve behavioural success", we need to start with data that embody indices of behavioral success. That is why, as illustrated in Figure 2.37 Modelling method and cycle, one starts with Behavioral Data from scores or hundreds of psychological experiments. These data are analyzed as the result of an individual adapting autonomously in real time to a changing world. This is the Art of Modeling. It requires that one be able to infer from static data curves the dynamical processes that control individual behaviors occurring in real time. One of the hardest things that I teach my students is "how to think in real time" in order to carry out this speculative leap.

    Properly carried out, this analysis leads to the discovery of new Design Principles that are embodied by these behavioral processes. The Design Principles highlight the functional meaning of the data, and clarify how individual behaviors occurring in real time give rise to these static data curves.

    These principles are then converted into the simplest Mathematical Model using a method of minimal anatomies, which is a form of Occam's Razor, or principle of parsimony. Such a mathematical model embodies the psychological principles using the simplest possible differential equations. By "simplest" I mean that, if any part of the derived model is removed, then a significant fraction of the targeted data could no longer be explained. One then analyzes the model mathematically and simulates it on the computer, showing along the way how variations on the minimal anatomy can realize the design principles in different individuals or species.

    This analysis has always provided functional explanations and Behavioral Predictions for much larger behavioral data bases than those used to discover the Design Principles. The most remarkable fact is, however, that the behaviorally derived model always looks like part of a brain, thereby explaining a body of challenging Neural Data and making novel Brain Predictions.

    The derivation hereby links mind to brain via psychological organizational principles and their mechanistic realization as a mathematically defined neural network. This startling fact is what I first experienced as a college Freshman taking Introductory Psychology, and it changed my life forever.

    I conclude from having had this experience scores of times since 1957 that brains look the way they do because they embody a natural computational realization for controlling autonomous adaptation in real-time to a changing world. Moreover, the Behavior -> Principles -> Model -> Neural derivation predicts new functional roles for both known and unknown brain mechanisms by linking the brain data to how it helps to ensure behavioral success. As I noted above, the power of this method is illustrated by the fact that scores of these predictions about brain and behavior have been supported by experimental data 5-30 years after they were first published.

    Having made the link from behavior to brain, one can then "burn the candle from both ends" by pressing both top-down from Behavioral Data and bottom-up from Brain Data to clarify what the model can and cannot explain at its current stage of derivation. No model can explain everything. At each stage of development, the model can cope with certain environmental challenges but not others. An important part of the mathematical and computational analysis is to characterize the boundary between the known and unknown; that is, which challenges the model can cope with and which it cannot. The shape of this boundary between the known and unknown helps to direct the theorist's attention to new design principles that have been omitted from previous analysis.

    The next step is to show how these new design principles can be incorporated into the evolved model in a self-consistent way, without undermining its previous mechanisms, thereby leading to a progressively more realistic model, one that can explain and predict ever more behavioral and neural data. In this way, the model undergoes a type of evolutionary development, as it becomes able to cope behaviorally with environmental constraints of ever increasing subtlety and complexity. The Method of Minimal Anatomies may hereby be viewed as a way to functionally understand how increasingly demanding combinations of environmental pressures were incorporated into brains during the evolutionary process.

    If such an Embedding Principle cannot be carried out - that is, if the model cannot be unlumped or refined in a self-consistent way - then the previous model was, put simply, wrong, and one needs to figure out which parts must be discarded. Such a model is, as it were, an evolutionary dead end. Fortunately, this has not happened to me since I began my work in 1957 because the theoretical method is so conservative. No theoretical addition is made unless it is supported by multiple experiments that cannot be explained in its absence. Where multiple mechanistic instantiations of some Design Principles were possible, they were all developed in models to better understand their explanatory implications. Not all of these instantiations could survive the pressure of the evolutionary method, but some always could. As a happy result, all earlier models have been capable of incremental refinement and expansion.

    The cycle of model evolution has been carried out many times since 1957, leading today to increasing numbers of models that individually can explain and predict psychological, neurophysiological, anatomical, biophysical, and even biochemical data. In this specific sense, the classical mind-body problem is being incrementally solved.

    Howell: bold added for emphasis.
    (keys : Principles-Principia, behavior-mind-brain link, brain evolution, cycle of model evolution)
    see also quotes: Charles William Lucas "Universal Force" and others (not retyped yet).
  • p190 Howell: [neural microcircuits, modal architectures] used in ART -
    [bottom-up filter | top-down expectation | purpose]
    instar learning | outstar learning | p200fig05.13 Expectations focus attention: [in, out]star often used to learn the adaptive weights. Top-down expectations select, amplify, and synchronize expected patterns of critical features, while suppressing unexpected features.
    LGN Lateral Geniculate Nucleus | V1 cortical area | p192fig05.06 focus attention on expected critical features
    EC-III entorhinal stripe cells | CA1 hippocampal place cells | p600fig16.36 entorhinal-hippocampal system has properties of an ART spatial category learning system, with hippocampal place cells as the spatial categories
    auditory orienting arousal | auditory category | p215fig05.28 How a mismatch between bottom-up and top-down patterns can trigger activation of the orienting system A and, with it, a burst of nonspecific arousal to the category level. ART Matching Rule: TD mismatch can suppress a part of the F1 STM pattern; F2 is reset if degree of match < vigilance
    auditory stream with/without [silence, noise] gaps | perceived sound continues? | p419fig12.17 The auditory continuity illusion: Backwards in time - How does a future sound let past sound continue through noise? Resonance!
    visual perception, learning and recognition of visual object categories | motion perception, spatial representation and target tracking | p520fig14.02 pART predictive Adaptive Resonance Theory. Many adaptive synapses are bidirectional, thereby supporting synchronous resonant dynamics among multiple cortical regions. The output signals from the basal ganglia that regulate reinforcement learning and gating of multiple cortical areas are not shown. (red - cognitive-emotional dynamics; green - working memory dynamics; black - see [bottom-up, top-down] lists)
    EEG with mismatch, arousal, and STM reset events | expected [P120, N200, P300] event-related potentials (ERPs) | p211fig05.21 Sequences of P120, N200, and P300 event-related potentials (ERPs) occur during oddball learning EEG experiments under conditions that ART predicted should occur during sequences of mismatch, arousal, and STM reset events, respectively.
    Cognitive | Emotional | p541fig15.02 nSTART neurotrophic Spectrally Timed Adaptive Resonance Theory: Hippocampus can sustain a Cognitive-Emotional resonance that can support "the feeling of what happens" and knowing what event caused that feeling. Hippocampus enables adaptively timed learning that can bridge a trace conditioning gap, or other temporal gap, between Conditioned Stimulus (CS) and Unconditioned Stimulus (US).

    background colours in the table signify :
    white | general microcircuit : a possible component of ART architecture
    lime green | sensory perception [attention, expectation, learn]. Table includes [see, hear, !!*must add touch example*!!], no Grossberg [smell, taste] yet?
    light blue | post-perceptual cognition?
    pink | "the feeling of what happens" and knowing what event caused that feeling
  • image p009fig01.06 Primacy gradient of activity stored in working memory within a recurrent shunting on-center off-surround network. Rehearsal is controlled by a nonspecific rehearsal wave and self-inhibitory feedback of the item that is currently being rehearsed. Green = excitatory, red = inhibitory
    || inputs? -> item and order WM storage -> competitive selection-> rehearsal wave -> outputs
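    Howell: a minimal sketch of the storage-and-readout cycle in this caption. The gradient ratio is an assumed parameter, and "self-inhibition" is shorthand here for simply zeroing the rehearsed item out of STM.

```python
# Primacy gradient + rehearsal wave + self-inhibitory feedback, as in the
# caption above. Earlier items are stored with larger activities; readout
# repeatedly performs the most active item and then self-inhibits it
# (inhibition of return), so the list is recalled in correct temporal order.

def store_primacy_gradient(items, ratio=0.8):
    """Store a primacy gradient: x_1 > x_2 > ... (ratio is illustrative)."""
    return {item: ratio ** k for k, item in enumerate(items)}

def rehearse(wm):
    """Nonspecific rehearsal wave: output winner, self-inhibit, repeat."""
    wm, out = dict(wm), []
    while wm:
        winner = max(wm, key=wm.get)   # competitive selection of max activity
        out.append(winner)
        del wm[winner]                 # self-inhibitory feedback after rehearsal
    return out

print(rehearse(store_primacy_gradient(["A", "B", "C", "D"])))
# → ['A', 'B', 'C', 'D']
```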
  • image p077fig02.27 A schematic of the on-center off-surround network that occurs in the mudpuppy retina, including three main cell types: receptors, horizontal cells, and bipolar cells.
    || Mechanism: cooperative-competitive dynamics.
    On-center off-surround (Kuffler 1953) cat retina
    Subtractive lateral inhibition (Hartline, Ratcliff 1956/7+) limulus retina.
    R receptor -> H horizontal -> B bipolar (Werblin, Dowling, etal 1969+) mudpuppy retina.
  • image p100fig03.15 A fuzzy band of possible initial grouping orientations allows grouping to get started. Cooperative-competitive feedback via a hierarchical resolution of uncertainty chooses a sharp final grouping that has the most evidence to support it.
    || before choice: transient; after choice: equilibrium
  • image p108fig03.28 The watercolor illusion of Baingio Pinna 1987 can be explained using spatial competition between like-oriented boundary signals. This occurs at what I have called the First Competitive Stage. This is one stage in the brain's computation of hypercomplex cells, which are also called endstopped complex cells. Why the blue regions seem to bulge in depth may be explained using multiple-scale, depth-selective boundary webs. See the text for details.
    || Baingio Pinna. Watercolor illusion 1987. Filled-in regions bulge in depth. Multiple-scale, depth-selective boundary web!
  • image p146fig04.25 Networks of simple, complex, and hypercomplex cells can create end cuts as an example of hierarchical resolution of uncertainty. See the text for details.
    || How are end cuts created? (Grossberg 1984) Two stages of short-range competition. 1st stage: Simple cells -> complex cells -> hypercomplex - endstopped complex. First competitive stage- across position, same orientation; Second competitive stage- same position, across orientation. -> cooperation.
  • image p148fig04.26 End cuts are formed during neon color spreading in the same way that they are formed at line ends.
    || End cut during neon color spreading.
    FIRST competitive stage: within orientation, across position
    SECOND competitive stage: across orientation, within position
    to generate end cuts.
  • image p149fig04.27 Bipole cells can form boundaries that interpolate end cuts, and use their cooperative-competitive interactions to choose the boundary groupings that have the most support from them.
    || Bipole cells: boundary completion. long-range cooperation & short-range inhibition: complete winning boundary groupings and suppress weaker boundaries.
  • image p161fig04.37 Kanizsa squares that form either collinearly to their inducers (left panel) or perpendicular to them (right panel) confirm predictions of the BCS boundary completion model.
    || Analog-sensitive boundary completion. contour strength vs Kanizsa square image. Increases with "support ratio" (Shipley, Kellman 1992). Inverted-U (Lesher, Mingolla 1993; cf Soriano, Spillmann, Bach 1994)(shifted gratings). p370h0.6 BCS = Boundary Contour System, FCS = Feature Contour System. p161c1h0.85 "... As predicted by the BCS, they found an Inverted-U in contour strength as a function of line density. ... This effect may be explained by the action of the short-range competition that occurs before the stage of long-range cooperative grouping by bipole cells (Figure 4.32). It is thus another example of the balance between cooperative and competitive mechanisms. ..."
  • image p198fig05.10 A competitive learning circuit learns to transform distributed feature patterns into selective responses of recognition categories.
    || Competitive learning and Self-Organized Maps (SOMs). input patterns -> feature level (F1) -> adaptive filter (T=ZS) ->
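    Howell: a toy sketch of the competitive learning circuit in this caption, assuming winner-take-all category choice and instar learning at the winner only. The patterns, learning rate, and seed are my own illustrative choices, not Grossberg's equations.

```python
import random

# Competitive learning sketch: an adaptive filter T_j = dot(z_j, s) feeds a
# winner-take-all category layer; only the winning category's weight vector
# moves toward the input pattern (instar learning).

def train_competitive(patterns, n_categories=2, lr=0.2, epochs=20, seed=0):
    rng = random.Random(seed)
    dim = len(patterns[0])
    Z = [[rng.random() for _ in range(dim)] for _ in range(n_categories)]
    for _ in range(epochs):
        for s in patterns:
            T = [sum(z * x for z, x in zip(zj, s)) for zj in Z]  # adaptive filter
            j = T.index(max(T))                                  # competitive choice
            Z[j] = [z + lr * (x - z) for z, x in zip(Z[j], s)]   # instar rule
    return Z

patterns = [[1, 0, 0], [0.9, 0.1, 0], [0, 0, 1], [0, 0.1, 0.9]]
Z = train_competitive(patterns)
# Each category's weight vector converges toward one input cluster, so the
# two clusters come to activate different recognition categories.
```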
  • image p205fig05.18 How catastrophic forgetting can occur in a competitive learning or self-organizing map model due to basic properties of competition and associative learning.
    || Learning from pattern sequences, practicing a sequence of spatial patterns can recode all of them! When is learning stable? Input patterns cannot be too dense relative to the number of categories; Either: not too many distributed inputs relative to the number of categories, or not too many input clusters
  • image p226fig05.35 I had shown in 1976 how a competitive learning or self-organizing map model could undergo catastrophic forgetting if the input environment was sufficiently dense and nonstationary, as illustrated by Figure 5.18. Later work with Gail Carpenter showed how, if the ART Matching Rule was shut off, repeating just four input patterns in the correct order could also cause catastrophic forgetting by causing superset recoding, as illustrated in Figure 5.36.
    || Code instability input sequences. D C A; B A; B C = ; |D|<|B|<|C|; where |E| is the number of features in the set E. Any set of input vectors that satisfy the above conditions will lead to unstable coding if they are periodically presented in the order ABCAD and the top-down ART Matching Rule is shut off.
  • image p287fig07.07 Persistence increases with distance between a target and a masking stimulus due to weakening of the spatial competition in the first competitive stage of hypercomplex cells.
    || Persistence data and simulations. Persistence increases with distance between a target and a masking stimulus (Farrell, Pavel, Sperling 1990). There is less spatial competition from the masker to the target when they are more distant, hence the target is more persistent.
  • image p364fig10.13 The bottom-up adaptive filter, intracortical grouping circuit, and intercortical top-down attentional circuit all use the same competitive decision circuit between layers 6 and 4, called the attention-preattention interface, with which to select the featural patterns that will be processed.
    || Bottom-up filters and intracortical grouping feedback use the same 6-to-4 decision circuit, LGN-> Vx[6,4,2/3]. competitive decision circuit, modulatory on-center off-surround network. Top-down intercortical attention also uses the same 6-to-4 decision circuit!
  • image p437fig12.33 Item and Order working memory models explain free recall data, as well as many other psychological and neurobiological data, by simulating how temporal series of events are stored as evolving spatial patterns of activity at content-addressable item categories. The categories with the largest activities are rehearsed first, and self-inhibit their activity as they do so in order to prevent them from being rehearsed perseveratively. The laws whereby the items are stored in working memory obey basic design principles concerning how list categories, or chunks, of sequences of stored items can be stably remembered.
    || Working memory models: item and order, or competitive queuing (Grossberg 1978; Houghton 1990; Page, Norris 1998). Event sequence in time stored as an evolving spatial pattern of activity. Primacy gradient of working memory activation stores correct temporal order at content-addressable cells. The maximally activated cell population is performed next when a rehearsal wave is turned on. Output signal from chosen cell population inhibits its own activity to prevent perseveration: inhibition of return. Iterate until entire sequence is performed.
  • image p488fig13.12 (left column) How incentive motivational feedback amplifies activity of a sensory cortical cell population. (right column) A sensory cortical cell population whose activity is amplified by incentive motivational feedback can suppress the activities of less activated populations via self-normalizing recurrent competitive interactions.
    || Motivational feedback and blocking. (left) sensory input CS, STM activity without motivational feedback, STM activity with motivational feedback. (right) STM suppressed by competition, STM amplified by (+) feedback.
  • image p510fig13.39 Shunting competition and informational noise suppression in affective gated dipoles, plus back-propagating action potentials for teaching signals, enable the net normalized adaptive weights to be learned. They never saturate!
    || Learn net dipole output pattern. Opponent "decision" controls learning. Cf. competitive learning. Learning signal, opponent extinction.
  • p001 Chapter 1 Overview - From Complementary Computing and Adaptive Resonance to conscious awareness
  • p250 Chapter 6 Conscious seeing and invariant recognition - Complementary cortical streams coordinate attention for seeing and recognition /home/bill/web/Neural nets/Grossberg/Grossbergs [core, fun, strange] concepts.HtmWeb.html:530:
  • p289 Chapter 8 How we see and recognize object motion - Visual form and motion perception obey complementary laws /home/bill/web/Neural nets/Grossberg/Grossbergs [core, fun, strange] concepts.HtmWeb.html:531:
  • p337 Chapter 9 Target tracking, navigation, and decision-making - Visual tracking and navigation obey complementary laws /home/bill/web/Neural nets/Grossberg/Grossbergs [core, fun, strange] concepts.HtmWeb.html:54:
  • image p029tbl01.01 Some pairs of complementary processing streams.
    ||
    visual boundary: interblob stream V1-V2-V4 <-> visual surface: blob stream V1-V2-V4
    visual boundary: interblob stream V1-V2-V4 <-> visual motion: magno stream V1-MT-MST
    WHAT stream <-> WHERE stream
    perception & recognition: inferotemporal & prefrontal areas <-> space & action: parietal & prefrontal areas
    object tracking: MT interbands & MSTv <-> optic flow navigation: MT+ bands & MSTd
    motor target position: motor & parietal cortex <-> volitional speed: basal ganglia
  • image p030tbl01.02 The What and Where cortical processing streams obey complementary laws. These laws enable the What stream to rapidly and stably learn invariant object categories without experiencing catastrophic forgetting, while the Where stream learns labile spatial and action representations to control actions that are aimed towards these objects.
    ||
    WHAT <-> WHERE
    spatially-invariant object learning and recognition <-> spatially-variant reaching and movement
    fast learning without catastrophic forgetting <-> continually update sensory-motor maps and gains
    IT InferoTemporal Cortex <-> PPC Posterior Parietal Cortex
    matching: excitatory <-> inhibitory
    learning: match <-> mismatch
  • image p087fig03.01 A macrocircuit of key visual processes (in green) and the cortical areas in which they primarily occur (in red), from the retina to the Prefrontal Cortex (PFC), including both the What and Where cortical streams. The [bottom-up, horizontal, and top-down] interactions help each of these processes to overcome computationally complementary processing deficiencies that they would experience without them, and also to read-out top-down expectations that help to stabilize learning while they focus attention on salient objects and positions.
    || Emerging unified theory of visual intelligence. [What, Where] streams. Bottom-up and top-down interactions overcome COMPLEMENTARY processing deficiencies.
  • image p094fig03.07 The processes of boundary completion and surface filling-in are computationally complementary.
    ||
    Boundary completion <-> Surface filling-in
    outward <-> inward
    oriented <-> unoriented
    insensitive to direction-of-contrast <-> sensitive to direction-of-contrast
  • image p174fig04.51 The same feedback circuit that ensures complementary consistency between boundaries and surfaces also, automatically, initiates figure-ground separation! See the text for details.
    || before feedback: [V1 -> V2 pale stripe -> V2 thin stripe, "attention pointers" (Cavanagh etal 2010)]; after feedback: [V1 + V2 thin stripe] -> V2 pale stripe via contrast sensitive [excitation, inhibition] for depths [1, 2] -> object recognition
  • image p176fig04.53 The on-center off-surround network within position and across depth helps to explain why brighter Kanizsa squares look closer.
    || inhibition vs. depth. p176c1h0.25 "... to qualitatively understand how this example of proximity-luminance covariance works. It follows directly from the boundary pruning by surface contour feedback signals (Figure 4.51) that achieves complementary consistency and initiates figure-ground perception. ...". p176c1h0.45 "... these inhibitory signals are part of an off-surround network whose strength decreases as the depth difference increases between the surface that generates the signal and its recipient boundaries. ...". p176c1h0.8 "... Within FACADE theory, the perceived depth of a surface is controlled by the boundaries that act as its filling-in generators and barriers (Figure 3.22), since these boundaries select the depth-selective FIDOs within which filling-in can occur, and thereby achieve surface capture. These boundaries, in turn, are themselves strengthened after surface-to-boundary contour feedback eliminates redundant boundaries that cannot support successful filling-in (Figure 4.51). These surface contour feedback signals have precisely the properties that are needed to explain why brighter Kanizsa squares look closer! ..."
  • image p211fig05.20 The PN and N200 event-related potentials are computationally complementary events that are computed within the attentional and orienting systems.
    || PN and N200 are complementary waves. PN [top-down, conditionable, specific] match; N200 [bottom-up, unconditionable, nonspecific] mismatch
  • image p267fig06.14 Feedback from object surfaces to object boundaries uses surface contours. This feedback assures complementary consistency and enables figure-ground separation. A corollary discharge of the surface contours can be used to compute salient object feature positions.
    || Perceptual consistency and figure-ground separation.
  • image p314fig08.34 The VISTARS model for visually-based spatial navigation. It uses the Motion BCS as a front end and feeds its output signals into two computationally complementary cortical processing streams for computing optic flow and target tracking information.
    || VISTARS navigation model (Browning, Grossberg, Mingolla 2009). Use FORMOTION model as front end for higher level navigational circuits: input natural image sequences -> estimate heading (MT+)-MSTd -> additive processing -> estimate object position (MT-)-MSTv direction and speed subtractive processing -> Complementary Computing. [optic flow navigation, object tracking]
  • image p523fig14.03 (a) The MOTIVATOR neural model generalizes CogEM by also including the basal ganglia. It can hereby explain and simulate complementary functions of the amygdala and basal ganglia (SNc) during conditioning and learned performance. The basal ganglia generate Now Print signals in response to unexpected rewards. These signals modulate learning of new associations in many brain regions. The amygdala supports motivated attention to trigger actions that are expected to occur in response to conditioned or unconditioned stimuli. Object Categories represent visual or gustatory inputs in anterior inferotemporal (ITA) and rhinal (RHIN) cortices, respectively. Value Categories represent the value of anticipated outcomes on the basis of hunger and satiety inputs, in amygdala (AMYG) and lateral hypothalamus (LH). Object-Value Categories resolve the value of competing perceptual stimuli in medial (MORB) and lateral (ORB) orbitofrontal cortex. The Reward Expectation Filter detects the omission or delivery of rewards using a circuit that spans ventral striatum (VS), ventral pallidum (VP), striosomal delay (SD) cells in the ventral striatum, the pedunculopontine nucleus (PPTN) and midbrain dopaminergic neurons of the substantia nigra pars compacta/ventral tegmental area (SNc/VTA). The circuit that processes CS-related visual information (ITA, AMYG, ORB) operates in parallel with a circuit that processes US-related visual and gustatory information (RHIN, AMYG, MORB). (b) Reciprocal adaptive connections between hypothalamus and amygdala enable amygdala cells to become learned value categories. The bottom region represents hypothalamic cells, which receive converging taste and metabolite inputs whereby they become taste-drive cells. Bottom-up signals from activity patterns across these cells activate competing value categories, or US Value Representations, in the amygdala. A winning value category learns to respond selectively to specific combinations of taste-drive activity patterns and sends adaptive top-down priming signals back to the taste-drive cells that activated it. CS-activated conditioned reinforcer signals are also associatively linked to value categories. Adaptive connections end in (approximately) hemidiscs. See the text for details.
    ||
  • image p548fig15.16 Homologous recognition learning and reinforcement learning macrocircuits enable adaptively timed conditioning in the reinforcement learning circuit to increase inhibition of the orienting system at times when a mismatch in the recognition system would have reduced inhibition of it.
    || Homolog between ART and CogEM model, complementary systems. [Recognition, Reinforcement] learning vs [Attentional, Orienting] system. Reinforcement: timing, drive representation.
  • p353 Chapter 10 Laminar computing by cerebral cortex - Towards a unified theory of biological and artificial intelligence
  • image p030fig01.20 A schematic cross-section of a slice of laminar neocortex whose cells are organized in a characteristic way in six layers, which themselves may be organized into distinct sublaminae. The computational paradigm of Laminar Computing attempts to show how different parts of neocortex can represent and control very different kinds of behavior - including vision, speech, and cognition - using specializations of the same canonical laminar cortical design.
    || Projection fibres: Cortico[spinal, bulbar, pontine, striate, reticular, etc]; Thalamocortical fibres; Diffuse cortical afferent fibres: [nonspecific thalamocortical, Cholinergic, Monoaminergic]; Corticocortical efferents; Projection [cell, fibre]; Corticocortical efferent terminals.
  • image p141fig04.19 A laminar cortical circuit for computing binocular disparities in layer 3B of V1 at binocular simple cells. These cells add positionally disparate inputs from like-polarity monocular simple cells (layer 4 of V1). Binocular simple cells at each position that are sensitive to opposite polarities then add their outputs at complex cells in layer 2/3. Chapter 10 will explain how these laminar circuits work in greater detail.
    || Laminar cortical circuit for complex cells. [left, right] eye.
    V1 layer: description
    2/3A: complex cells
    3B: binocular simple cells
    4: monocular simple cells
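A hypothetical toy version of the layer 4 -> 3B -> 2/3 pipeline just described, for 1D signals; the function names, the rectified-contrast simple cells, and the pooling details are illustrative assumptions, not the model's equations:

```python
def complex_cell_responses(left, right, disparity):
    """Toy 1D sketch of the V1 disparity circuit of Figure 4.19.

    Layer 4: monocular simple cells give half-wave rectified responses
    to signed contrast in one eye. Layer 3B: binocular simple cells add
    like-polarity left- and right-eye responses at a positional offset
    (the tested disparity). Layer 2/3: complex cells pool the outputs
    of the opposite-polarity binocular simple cells at each position.
    """
    def simple_cell(signal, i, polarity):
        # half-wave rectified signed contrast at position i
        return max(polarity * (signal[i + 1] - signal[i]), 0.0)

    responses = []
    for i in range(len(left) - 1 - disparity):
        dark_to_light = simple_cell(left, i, +1) + simple_cell(right, i + disparity, +1)
        light_to_dark = simple_cell(left, i, -1) + simple_cell(right, i + disparity, -1)
        responses.append(dark_to_light + light_to_dark)  # polarity-pooled complex cell
    return responses

# A bar shifted by one pixel between the eyes drives the disparity-1
# pathway more strongly than the disparity-0 pathway:
left  = [0.0, 0.0, 1.0, 1.0, 0.0, 0.0]
right = [0.0, 0.0, 0.0, 1.0, 1.0, 0.0]
print(max(complex_cell_responses(left, right, 1)))  # -> 2.0
print(max(complex_cell_responses(left, right, 0)))  # -> 1.0
```

The point of the sketch: binocular cells tuned to the correct disparity receive converging like-polarity input from both eyes and so respond about twice as strongly as cells tuned to the wrong disparity.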
  • image p163fig04.39 A schematic of the LAMINART model that explains key aspects of laminar visual cortical anatomy and dynamics. LGN -> V1 [6, 4, 2/3] -> V2 [6, 4, 2/3]
    || p163c1h0.6 "... The first article about laminar computing ... proposed how the laminar cortical model could process 2D pictures using bottom-up filtering and horizontal bipole grouping interactions (Grossberg, Mingolla, Ross 1997). In 1999, I was able to extend the model to also include top-down circuits for expectation and attention (Grossberg 1999)(right panel). Such a synthesis of laminar bottom-up, horizontal, and top-down circuits is characteristic of the cerebral cortex (left panel). I called it LAMINART because it began to show how properties of Adaptive Resonance Theory, or ART, notably the ART prediction about how top-down expectations and attention work, are realized by identified cortical cells and circuits. You can immediately see from the schematic laminar circuit diagram ... (right panel) that circuits in V2 seem to repeat circuits in V1, albeit with a larger spatial scale, despite the fact that V1 and V2 carry out different functions. How this anatomical similarity can coexist with functional diversity will be clarified in subsequent sections and chapters. It enables different kinds of biological intelligence to communicate seamlessly while carrying out their different psychological functions. ..."
  • image p174fig04.52 An example of how the 3D LAMINART model can transform the two monocular images of the random dot stereogram in the top row into the three depth-separated surface representations in the bottom row.
    || Stereogram surface percepts: surface lightnesses are segregated in depth (Fang and Grossberg 2009). [left, right] inputs, [far, fixation, near] planes. Contrast algorithms that just compute disparity matches and let computer code build the surface eg (Marr, Poggio, etal 1974).
  • image p182fig04.58 LAMINART model processing stages that are sufficient to explain many percepts of transparency, including those summarized in Figure 4.57.
    || [left, right] eye, [LGN, V1 [6, 4, 3B, 2/3 A], V2 [4, 2/3]], [mo, bi]nocular cart [simple, complex] cells, [excita, inhibi]tory cart [connection, cell]s.
  • image p230fig05.38 The Synchronous Matching ART, or SMART, model includes spiking neurons in a laminar cortical hierarchy. I developed SMART with my PhD student Massimiliano Versace. By unlumping LAMINART to include spiking neurons, finer details of neurodynamics, such as the existence of faster gamma oscillations during good enough matches, and slower beta oscillations during bad enough mismatches, could be shown as emergent properties of network interactions.
    || Second order thalamus -> specific thalamic nucleus -> Thalamic reticular nucleus -> neocortical laminar circuit [6ll, 6l, 5, 2/3, 1] -> Higher order cortex. Similar for First order thalamus -> First order cortex, with interconnection to Second order, nonspecific thalamic nucleus
  • image p231fig05.39 The SMART hypothesis testing and learning cycle predicts that vigilance increases when a mismatch in subcortical regions like the nonspecific thalamus activates the nucleus basalis of Meynert which, in turn, broadcasts a burst of the neurotransmitter acetylcholine, or ACh, to deeper cortical layers. Due to the way in which LAMINART proposes that cortical matching and mismatching occurs, this ACh burst can increase vigilance and thereby trigger a memory search. See the text for details.
    || [BU input, [, non]specific thalamic nucleus, thalamic reticular nucleus, neocortical laminar circuit] cart [Arousal, Reset, Search, Vigilance]
  • image p232fig05.41 (a)-(c). The sequence of interlaminar events that SMART predicts during a mismatch reset. (d) Some of the compatible neurophysiological data.
    || Mismatch causes layer 5 dendritic spikes that trigger reset. (a) Arousal causes increase in nonspecific thalamic nuclei firing rate and layer 5 dendritic and later somatic spikes (Larkum and Zhu 2002, Williams and Stuart 1999) (b) Layer 5 spikes reach layer 4 via layer 6i and inhibitory neurons (Lund and Boothe 1975, Gilbert and Wiesel 1979) (c) habituative neurotransmitters in layer 6i shift the balance of active cells in layer 4 (Grossberg 1972, 1976) (d) Dendritic stimulation fires layer 5 (Larkum and Zhu 2002); stimulation of apical dendrites by nonspecific thalamus
  • image p246fig05.48 Microcircuits of the LAMINART model that I developed with Rajeev Raizada. See the text for details of how they integrate bottom-up adaptive filtering, horizontal bipole grouping, and top-down attentional matching that satisfies the ART Matching Rule.
    ||
  • image p248fig05.49 This circuit of the LAMINART model helps to explain properties of Up and Down states during slow wave sleep, and how disturbances in ACh dynamics can disrupt them.
    ||
  • image p356fig10.03 Laminar computing achieves at least three basic properties of visual processing that have analogs in all biologically intelligent behaviors. These properties may be found in all cortical circuits in specialized form.
    || What does Laminar Computing achieve? 1. Self-stabilizing development and learning; 2. Seamless fusion of a) pre-attentive automatic bottom-up processing, b) attentive task-selective top-down processing; 3. Analog coherence: Solution of Binding Problem for perceptual grouping without loss of analog sensitivity. Even the earliest visual cortical stages carry out active adaptive information processing: [learn, group, attention]ing
  • image p357fig10.04 Laminar Computing achieves its properties by computing in a new way that synthesizes the best properties of feedforward and feedback interactions, analog and digital computations, and preattentive and attentive learning. The property of analog coherence enables coherent groupings and decisions to form without losing sensitivity to the amount of evidence that supports them.
    || Laminar Computing: a new way to compute. 1. Feedforward and feedback: a) Fast feedforward processing when data are unambiguous (eg Thorpe etal), b) slower feedback chooses among ambiguous alternatives [self-normalizing property, real-time probability theory], c) A self-organizing system that trades certainty against speed: Goes beyond Bayesian models! 2. Analog and Digital: Analog Coherence combines the stability of digital with the sensitivity of analog. 3. Preattentive and Attentive Learning: Reconciles the differences of (eg) Helmholtz and Kanizsa, "A preattentive grouping is its own 'attentional' prime"
  • image p362fig10.11 Feedback between layer 2/3 to the layer 6-to-4-to-2/3 feedback loop chooses the strongest grouping in cases where there is more than one. If only one grouping exists, then the circuit can function very quickly in a feedforward manner. When multiple groupings exist, the cortex "runs as fast as it can" to select the one with the most evidence to support it using the self-normalizing inhibition in the layer 6-to-4 off-surround.
    || How is the final grouping selected? Folded feedback LGN-> 6-> 4-> 2/3. 1. Layer 2/3 groupings feed back into 6-to-4 on-center off-surround: a) direct layer 2/3 -to-6 path; b) can also go via layer 5 (Blasdel etal 1985; Kisvarday etal 1989). 2. Strongest grouping enhanced by its on-center. 3. Inputs to weaker groupings suppressed by off-surround. 4. Interlaminar feedback creates functional columns. Activities of conflicting groupings are reduced by self-normalizing inhibition, slowing processing; intracortical feedback selects and contrast-enhances the winning grouping, speeding processing.
  • image p363fig10.12 The same laminar circuit design repeats in V1 and V2, albeit with specializations that include longer horizontal grouping axons and figure-ground separation interactions.
    || V2 repeats V1 circuitry at larger spatial scale, LGN-> V1[6,4,2/3]-> V2[6,4,2/3]. V2 layer 2/3 horizontal axons longer-range than in V1 (Amir etal 1993). Therefore, longer-range groupings can form in V2 (Von der Heydt etal 1984)
  • image p375fig11.07 The 3D LAMINART model uses both monocular and binocular simple cells to binocularly fuse like image contrasts. The remainder of the model generates 3D boundary and surface representations of multiple kinds of experiments as well as of natural scenes.
    || 3D LAMINART model (Cao, Grossberg 2005). [left, right] eye cart [LGN, V1 blob [4->3B->2/3A], V2 thin stripe [4->2/3A], V4]. V1 blob [V1-4 monocular, V1 interior binocular] simple cells. [complex, simple, inhibitory] cells, on-center off-surround
  • image p376fig11.10 The 3D LAMINART model shows how the disparity filter can be integrated into the circuit that completes 3D boundary representations using bipole grouping cells. It also explains how surface contours can strengthen boundaries that succeed in generating closed filling-in domains.
    || 3D LAMINART model (Cao, Grossberg 2005). [left, right] eye cart [LGN, V1 blob [4->3B->2/3A] surface contour, V2 thin stripe (monocular surface) [4->2/3A], V2 interior [disynaptic inhibitory interneurons, bipole grouping cells, disparity filter, V4 binocular surface]. [complex, simple, inhibitory] cells, on-center off-surround
  • image p378fig11.12 How monocular and binocular information are combined in V1 and V2 in the 3D LAMINART model.
    || Model utilizes monocular information. [left, right] eye cart V1-[4 monocular simple, 3B binocular simple, complex2/3A [mo,bi]nocular] cells, V2-4 binocular complex cells. black = monocular cells, blue = binocular cells. In V2, monocular inputs add to binocular inputs along the line of sight and contribute to depth perception.
  • image p379fig11.13 How the 3D LAMINART model explains DaVinci stereopsis. All the stages of boundary and surface formation are color coded to clarify their explanation. Although each mechanism is very simple, when all of them act together, the correct depthful surface representation is generated. See the text for details.
    || DaVinci stereopsis (Nakayama, Shimojo 1990). An emergent property of the previous simple mechanisms working together. [V1 binocular boundaries -> V2 initial boundaries -> V2 final boundaries -> V4 surface] pair [Binocular match: boundaries of thick bar -> Add monocular boundaries along lines-of-sight -> Line-of-sight inhibition kills weaker vertical boundaries -> 3D surface percept not just a disparity match!] pair [Binocular match: right edge of thin and thick bars -> Strongest boundaries: binocular and monocular boundaries add -> Vertical boundaries from monocular left edge of thin bar survive -> Filling-in contained by connected boundaries]. cart [very near, near, fixation plane, far, very far]
  • image p380fig11.14 The model explanation of DaVinci stereopsis when the input stimuli have opposite contrast polarities.
    || Polarity-reversed Da Vinci stereopsis (Nakayama, Shimojo 1990). Same explanation! (... as Figure 11.13 ...) [V1 binocular boundaries -> V2 initial boundaries -> V2 final boundaries -> V4 surface] cart [very near, near, fixation plane, far, very far]
  • image p395fig11.34 A comparison of the properties of other rivalry models with those of the 3D LAMINART model (surrounded by red border). Significantly, only 3D LAMINART explains both stable vision and rivalry (green border).
    || Comparison of rivalry models
  • image p401fig11.41 The 3D LAMINART model proposes how angle cells and disparity-gradient cells interact through learning to generate 3D representations of slanted objects.
    || 3D LAMINART model. [LGN, V1, V2, V4] Four key additions: 1. Angle cells - tuned to various angles; 2. Disparity-gradient cells - tuned to disparity gradients in the image; 3. Weights from [angle to disparity-gradient] cells - learned while viewing 3D image; 4. Collinear grouping between [angle to disparity-gradient] cells - disambiguates ambiguous groupings.
  • image p402fig11.44 3D scenic reconstruction of the image in Figure 11.43 by the 3D LAMINART model.
    || Disparity [5, 6, 8, 10, 11, 14]: images of objects in common depth planes
  • image p436fig12.30 The conscious ARTWORD, or cARTWORD, laminar cortical speech model simulates how future context can disambiguate noisy past speech sounds in such a way that the completed percept is consciously heard to proceed from past to future as a feature-item-list resonant wave propagates through time.
    || cARTWORD: Laminar cortical model macrocircuit (Grossberg, Kazerounian 2011). Simulates PHONEMIC RESTORATION: Cognitive Working Memory (processed item sequences) - [Excitatory-> inhibitory-> habituative-> adaptive filter-> adaptive filter-> adaptive filter with depletable synapse-> Acoustic [item, feature]]
  • image p444fig12.42 The LIST PARSE laminar cortical model of working memory and list chunking that I published with Lance Pearson in 2008 simulated the Averbeck etal data in Figure 12.41, as in the left column of the figure. It also simulated cognitive data about working memory storage by human subjects. See the text for details.
    || LIST PARSE: Laminar cortical model of working memory and list chunking (Grossberg, Pearson 2008). Simulates data about: [immediate, delayed, continuous] distractor free recall; immediate serial recall; and variable-speed sequential performance of motor acts. [velocity, acceleration] vs time (ms) from recall cue.
  • image p445fig12.43 The LIST PARSE laminar cortical Cognitive Working Memory circuit, which is proposed to occur in ventrolateral prefrontal cortex, is homologous to the LAMINART circuit that models aspects of how visual cortex sees. The Motor Working Memory, VITE Trajectory Generator, and Variable-Rate Volitional Control circuits model how other brain regions, including dorsolateral prefrontal cortex, motor cortex, cerebellum, and basal ganglia, interact with the Cognitive Working Memory to control working memory storage and variable-rate performance of item sequences.
    || LIST PARSE circuit diagram. Connectivity convention. sequence chunks [<- BU filter, TD expectation ->] working memory. Working memory and sequence chunking circuit is homologous to visual LAMINART circuit!
  • image p498fig13.22 (left column, top row) Adaptive filtering and conditioned arousal are both needed to regulate what cues can learn to activate particular space-time patterns. These developments lead inexorably to basic cognitive abilities, as embodied in the 3D LAMINART models for 3D vision and figure-ground perception (Chapter 11) and the 3D ARTSCAN SEARCH model for invariant object learning, recognition, and 3D search (Chapter 6). (right column, top row) Conditioned arousal enables only emotionally important cues to activate a motivationally relevant space-time pattern. (bottom row) Conditioned arousal and drive representations arise naturally from the unlumping of avalanche circuits to make them selective to motivationally important cues. The MOTIVATOR model is a natural outcome of this unlumping process (this chapter).
    || (top) Adaptive filtering and Conditioned arousal. Towards Cognition: need to filter inputs to the command cell. Towards Emotion: important signals turn arousal ON and OFF. (bottom) Conditioned arousal and Drive representations. Competition between conditioned arousal sources at drive representations, eg amygdala.
  • image p038fig01.25 The ART Matching Rule stabilizes real time learning using a [top-down, modulatory on-center, off-surround] network. Object attention is realized by such a network. See text for additional discussion.
    || ART Matching Rule [volition, categories, features]. [one, two] against one.
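A minimal sketch of the matching logic the ART Matching Rule figure describes ("two against one" fires, "one against one" does not); the function name and gain values are illustrative assumptions, not the model's equations:

```python
def art_match(bottom_up, top_down, volition=False):
    """Toy ART Matching Rule: a top-down, modulatory on-center
    off-surround matching network.

    - No active expectation: bottom-up features pass through.
    - Expectation active: features supported by both bottom-up input
      and the top-down on-center ("two against one") are amplified;
      features with only one source of support ("one against one")
      are suppressed by the off-surround.
    - Top-down priming alone stays subthreshold, unless a volitional
      signal converts the modulation into suprathreshold activity
      (as in visual imagery).
    """
    if not any(top_down):
        return list(bottom_up)  # no expectation: pass through
    out = []
    for b, t in zip(bottom_up, top_down):
        if b > 0 and t > 0:
            out.append(b * (1.0 + t))           # matched: amplified
        elif b > 0:
            out.append(0.0)                     # unexpected: inhibited
        else:
            out.append(t if volition else 0.0)  # prime fires only with volition
    return out

print(art_match([1.0, 1.0, 0.0], [0.5, 0.0, 0.5]))  # -> [1.5, 0.0, 0.0]
```

This is why such matching stabilizes learning: only features confirmed by both the input and the learned expectation remain active to drive further adaptation.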
  • image p163fig04.39 A schematic of the LAMINART model that explains key aspects of laminar visual cortical anatomy and dynamics. LGN -> V1 [6, 4, 2/3] -> V2 [6, 4, 2/3]
    || p163c1h0.6 "... The first article about laminar computing ... proposed how the laminar cortical model could process 2D pictures using bottom-up filtering and horizontal bipole grouping interactions (Grossberg, Mingolla, Ross 1997). In 1999, I was able to extend the model to also include top-down circuits for expectation and attention (Grossberg 1999)(right panel). Such a synthesis of laminar bottom-up, horizontal, and top-down circuits is characteristic of the cerebral cortex (left panel). I called it LAMINART because it began to show how properties of Adaptive Resonance Theory, or ART, notably the ART prediction about how top-down expectations and attention work, are realized by identified cortical cells and circuits. You can immediately see from the schematic laminar circuit diagram ... (right panel) that circuits in V2 seem to repeat circuits in V1, albeit with a larger spatial scale, despite the fact that V1 and V2 carry out different functions. How this anatomical similarity can coexist with functional diversity will be clarified in subsequent sections and chapters. It enables different kinds of biological intelligence to communicate seamlessly while carrying out their different psychological functions. ..."
  • image p192fig05.06 Bottom-up and top-down circuits between the LGN and cortical area V1. The top-down circuits obey the ART Matching Rule for matching with bottom-up input patterns and focussing attention on expected critical features.
    || Model V1-LGN circuits, version [1, 2]. retina -> LGN relay cells -> interneurons -> cortex [simple, endstopped] cells -> cortex complex cells
  • image p200fig05.13 Instar and outstar learning are often used to learn the adaptive weights in the bottom-up filters and top-down expectations that occur in ART. The ART Matching Rule for object attention enables top-down expectations to select, amplify, and synchronize expected patterns of critical features, while suppressing unexpected features.
    || Expectations focus attention: feature pattern (STM), Bottom-Up adaptive filter (LTM), Category (STM), competition, Top-Down expectation (LTM); ART Matching Rule: STM before top-down matching, STM after top-down matching (attention!)
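    The matching behaviour summarized above can be sketched numerically. This is my own toy formalization (fuzzy-AND matching, as in Fuzzy ART), not equations from the book; the vectors are illustrative:

```python
import numpy as np

def art_match(I, w):
    """Matched STM pattern: componentwise min (fuzzy AND) of the
    bottom-up feature pattern I and the top-down expectation w.
    Top-down input alone cannot activate features (min(0, w) = 0),
    which is the modulatory on-center; features outside the
    expectation are suppressed, which is the off-surround."""
    return np.minimum(I, w)

I = np.array([1.0, 1.0, 0.0, 1.0])   # bottom-up feature pattern (STM)
w = np.array([1.0, 0.0, 1.0, 1.0])   # top-down expectation (LTM)
matched = art_match(I, w)            # -> [1, 0, 0, 1]
```

    The shared features survive and are attended; the expected-but-absent third feature is not created out of nothing, and the unexpected second feature is suppressed.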
  • image p207fig05.19 The ART hypothesis testing and learning cycle. See the text for details about how the attentional system and orienting system interact in order to incorporate learning of novel categories into the corpus of already learned categories without causing catastrophic forgetting.
    ||
  • image p211fig05.21 Sequences of P120, N200, and P300 event-related potentials occur during oddball learning EEG experiments under conditions that ART predicted should occur during sequences of mismatch, arousal, and STM reset events, respectively.
    || ERP support for mismatch-mediated reset: event-related potentials: human scalp potentials. ART predicted correlated sequences of P120-N200-P300 Event Related Potentials during oddball learning. P120 mismatch; N200 arousal/novelty; P300 STM reset. Confirmed in (Banquet and Grossberg 1987)
  • image p215fig05.28 How a mismatch between bottom-up and top-down input patterns can trigger activation of the orienting system A and, with it, a burst of nonspecific arousal to the category level.
    || Mismatch -> inhibition -> arousal -> reset. BU input orienting arousal, BU+TD mismatch arousal and reset. ART Matching Rule: TD mismatch can suppress a part of F1 STM pattern, F2 is reset if degree of match < vigilance
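    The reset condition above ("F2 is reset if degree of match < vigilance") can be sketched as a toy computation. My own illustrative encoding, with the matched pattern taken as a fuzzy AND and made-up numbers:

```python
import numpy as np

def degree_of_match(I, w):
    # Fraction of the bottom-up pattern I that survives top-down
    # matching against expectation w: |min(I, w)| / |I|.
    return np.minimum(I, w).sum() / I.sum()

def f2_reset(I, w, vigilance):
    # True -> mismatch is too big: the orienting system A fires,
    # sending a burst of nonspecific arousal that resets the active
    # F2 category and triggers memory search.
    return degree_of_match(I, w) < vigilance

I = np.array([1.0, 1.0, 1.0, 1.0])
w = np.array([1.0, 1.0, 0.0, 0.0])
reset = f2_reset(I, w, vigilance=0.75)   # match 0.5 < 0.75 -> reset
```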
  • image p221fig05.31 A system like Fuzzy ARTMAP can learn to associate learned categories in one ART network with learned categories in a second ART network. Because both bottom-up and top-down interactions occur in both networks, a bottom-up input pattern to the first ART network can learn to generate a top-down output pattern from the second ART network.
    || Fuzzy ARTMAP. Match tracking realizes minimax learning principle: vigilance increases to just above the match ratio of prototype / exemplar, thereby triggering search
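    Match tracking, as summarized above, can be sketched in a few lines. A hedged toy version (the fuzzy-AND match ratio and the epsilon constant are my illustrative choices, not values from the book):

```python
import numpy as np

def match_track(I, w, epsilon=0.001):
    # After a predictive error, vigilance is raised to just above the
    # current match ratio |min(I, w)| / |I|, so the active category now
    # fails the vigilance test and a search for a finer category begins.
    match_ratio = np.minimum(I, w).sum() / I.sum()
    return match_ratio + epsilon

I = np.array([1.0, 1.0, 1.0, 0.0])   # current exemplar
w = np.array([1.0, 1.0, 0.0, 0.0])   # prototype of the erring category
new_vigilance = match_track(I, w)    # just above 2/3 -> triggers search
```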
  • image p226fig05.35 I had shown in 1976 how a competitive learning or self-organizing map model could undergo catastrophic forgetting if the input environment was sufficiently dense and nonstationary, as illustrated by Figure 5.18. Later work with Gail Carpenter showed how, if the ART Matching Rule was shut off, repeating just four input patterns in the correct order could also cause catastrophic forgetting by causing superset recoding, as illustrated in Figure 5.36.
    || Code instability input sequences. D C A; B A; B C = ; |D|<|B|<|C|; where |E| is the number of features in the set E. Any set of input vectors that satisfy the above conditions will lead to unstable coding if they are periodically presented in the order ABCAD and the top-down ART Matching Rule is shut off.
  • image p226fig05.36 Column (a) shows catastrophic forgetting when the ART Matching Rule is not operative. It is due to superset recoding. Column (b) shows how category learning quickly stabilizes when the ART Matching Rule is restored.
    || Stable and unstable learning, superset recoding
  • image p228fig05.37 A macrocircuit of the neurotrophic Spectrally Timed ART, or nSTART, model. I developed nSTART with my PhD student Daniel Franklin. It proposes how adaptively timed learning in the hippocampus, bolstered by Brain Derived Neurotrophic Factor, or BDNF, helps to ensure normal memory consolidation.
    || habituative gates, CS, US, Thalamus (sensory cortex, category learning, conditioned reinforcer learning, adaptively timed learning and BDNF), Amygdala (incentive motivation learning), Hippocampus (BDNF), Prefrontal Cortex (attention), Pontine nuclei, Cerebellum (adaptively timed motor learning)
  • image p230fig05.38 The Synchronous Matching ART, or SMART, model includes spiking neurons in a laminar cortical hierarchy. I developed SMART with my PhD student Massimiliano Versace. By unlumping LAMINART to include spiking neurons, finer details of neurodynamics, such as the existence of faster gamma oscillations during good enough matches, and slower beta oscillations during bad enough mismatches, could be shown as emergent properties of network interactions.
    || Second order thalamus -> specific thalamic nucleus -> Thalamic reticular nucleus -> neocortical laminar circuit [6ll, 6l, 5, 2/3, 1] -> Higher order cortex. Similar for First order thalamus -> First order cortex, with interconnection to Second order, nonspecific thalamic nucleus
  • image p240fig05.44 When an algebraic exemplar model is realized using only local computations, it starts looking like an ART prototype model.
    || How does the model know which exemplars are in category A? BU-TD learning. How does a NOVEL test item access category A?
  • image p241fig05.45 The 5-4 category structure is one example of how an ART network learns the same kinds of categories as human learners. See the text for details.
    || 5-4 Category structure. A1-A5: closer to the (1 1 1 1) prototype; B1-B4: closer to the (0 0 0 0) prototype
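    A toy sketch of how fast learning refines category prototypes on 4-bit exemplars like those of the 5-4 structure. This assumes the fuzzy-AND prototype update; complement coding and the full search cycle are omitted, and the exemplar vectors are illustrative, not the actual 5-4 stimuli:

```python
import numpy as np

def learn(prototype, exemplar):
    # Fast fuzzy ART learning: the prototype shrinks to the fuzzy AND of
    # itself and each exemplar it codes, keeping only shared features.
    return np.minimum(prototype, exemplar)

w_A = np.ones(4)                      # uncommitted category near (1 1 1 1)
for x in [np.array([1.0, 1.0, 1.0, 0.0]),
          np.array([1.0, 1.0, 0.0, 1.0])]:
    w_A = learn(w_A, x)
# w_A now codes only the features its exemplars share: [1, 1, 0, 0]
```

    A-like exemplars near (1 1 1 1) therefore settle on a prototype distinct from B-like exemplars near (0 0 0 0), as in the figure.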
  • image p246fig05.48 Microcircuits of the LAMINART model that I developed with Rajeev Raizada. See the text for details of how they integrate bottom-up adaptive filtering, horizontal bipole grouping, and top-down attentional matching that satisfies the ART Matching Rule.
    ||
  • image p254fig06.03 These interactions of the ARTSCAN Search model enable it to learn to recognize and name invariant object categories. Interactions between spatial attention in the Where cortical stream, via surface-shroud resonances, and object attention in the What cortical stream, that obeys the ART Matching Rule, coordinate these learning, recognition, and naming processes.
    || Retinal image -> On & OFF cell contrast normalization (retina/LGN) -> polarity [, in]sensitive contrast enhancement (V1) -> object [boundary (V2), surface (V2/V4)] -> surface contour (V2); What stream categories: volition control (BG). object boundary (V2) <-> view (ITp) <-> view integrator (ITp) <-> object (ITa) <-> [object-value (ORB), value (Amyg)] <-> name (PFC)
  • image p315fig08.35 The output signals from the directional grouping network obey the ART Matching Rule. They thereby select consistent motion directional signals while suppressing inconsistent ones, and do not distort the speed estimates that the spared cells code. The aperture problem is hereby solved by the same mechanism that dynamically stabilizes the learning of directional grouping cells.
    || How to select correct direction and preserve speed estimates? Prediction: Feedback from MSTv to MT obeys ART Matching Rule; Top-down, modulatory on-center, off-surround network (Grossberg 1976, 1980; Carpenter, Grossberg 1987, 1991); Explains how directional grouping network can stably develop and how top-down directional attention can work. (Cavanagh 1992; Goner etal 1986; Sekuler, Ball 1977; Stelmach etal 1994). Directional grouping network (MSTv) <-> Directional long-range filter (MT). Modulatory on-center selects chosen direction and preserves speed. Off-surround inhibits incompatible directions.
  • image p316fig08.36 How the directional grouping network, notably properties of the ART Matching Rule, enables a small set of amplified feature tracking signals at the ends of a line to select consistent directions in the line interior, while suppressing inconsistent directions.
    || Motion capture by directional grouping feedback. Directional grouping network (MSTv) <-> Directional long-range filter (MT). It takes longer to capture ambiguous motion signals in the line interior as the length of the line increases cf (Castet etal 1993)
  • image p354fig10.01 The laminar cortical circuit that realizes how we pay attention to an object sends signals from layer 6 of a higher cortical level to layer 6 of a lower cortical level and then back up to layer 4. This "folded feedback" circuit realizes a top-down, modulatory on-center, off-surround network that implements the ART Matching Rule.
    || Top-down attention and folded feedback. Attentional signals also feed back into 6-to-4 on-center off-surround. 1-to-5-to-6 feedback path: Macaque (Lund, Booth 1975) cat (Gilbert, Wiesel 1979). V2-to-V1 feedback is on-center off-surround and affects layer 6 of V1 the most (Bullier etal 1996; Sandell, Schiller 1982). Attended stimuli enhanced, ignored stimuli suppressed. This circuit supports the predicted ART Matching Rule! [LGN, V[1,2][6->1]]
  • image p360fig10.08 The bottom-up on-center off-surround from LGN-to-6-to-4 has a modulatory on-center because of its role in realizing the ART Matching Rule and, with it, the ability of the cortex to dynamically stabilize its learned memories.
    || Modulation of priming by 6-to-4 on-center (Stratford etal 1996; Callaway 1998). On-center 6-to-4 excitation is inhibited down to being modulatory (priming, subthreshold). On-center 6-to-4 excitation cannot activate layer 4 on its own. Clarifies need for direct path. Prediction: plays key role in stable grouping, development and learning. ART Matching Rule!
  • image p419fig12.17 The auditory continuity illusion illustrates the ART Matching Rule at the level of auditory streaming. Its "backwards in time" effect of future context on past conscious perception is a signature of resonance.
    || Auditory continuity illusion. input, percept. Backwards in time - How does a future sound let past sound continue through noise? Resonance! - It takes a while to kick in. After it starts, a future tone can maintain it much more quickly. Why does this not happen if there is no noise? - ART Matching Rule! TD harmonic filter is modulatory without BU input. It cannot create something out of nothing.
  • image p420fig12.18 The ARTSTREAM model explains and simulates the auditory continuity illusion as an example of a spectral-pitch resonance. Interactions of ART Matching Rule and asymmetric competition mechanisms in cortical strip maps explain how the tone selects the consistent frequency from the noise in its own stream while separating the rest of the noise into another stream.
    || ARTSTREAM model (Grossberg 1999; Grossberg, Govindarajan, Wyse, Cohen 2004). SPINET. Frequency and pitch strips. Bottom Up (BU) harmonic sieve. Top Down (TD) harmonic ART matching. Exclusive allocation. Learn pitch categories based on early harmonic processing. A stream is a Spectral-Pitch Resonance!
  • image p473fig12.69 Error rate and mean reaction time (RT) data from the lexical decision experiments of (Schvaneveldt, McDonald 1981). ART Matching Rule properties explain these data in (Grossberg, Stone 1986).
    || (left) Error rate vs type of prime [R, N, U], [non,] word. (right) Mean RT (msec) vs type of prime [R, N, U], [non,] word.
  • image p483fig13.02 The object-value categories in the orbitofrontal cortex require converging specific inputs from the sensory cortex and nonspecific incentive motivational inputs from the amygdala in order to fire. When the orbitofrontal cortex fires, it can deliver top-down ART Matching Rule priming signals to the sensory cortical area by which it was activated, thereby helping to choose the active recognition categories there that have the most emotional support, while suppressing others, leading to attentional blocking of irrelevant cues.
    || Cognitive-Emotional-Motor (CogEM) model. Drive-> amygdala incentive motivational learning-> orbitofrontal cortex- need converging cue and incentive inputs to fire <-> sensory cortex- conditioned reinforcer learning-> amygdala. CS-> sensory cortex. Motivated attention closes the cognitive-emotional feedback loop, focuses on relevant cues, and causes blocking of irrelevant cues.
  • image p548fig15.16 Homologous recognition learning and reinforcement learning macrocircuits enable adaptively timed conditioning in the reinforcement learning circuit to increase inhibition of the orienting system at times when a mismatch in the recognition system would have reduced inhibition of it.
    || Homolog between ART and CogEM model, complementary systems. [Recognition, Reinforcement] learning vs [Attentional, Orienting] system. Reinforcement: timing, drive representation.
  • image p600fig16.36 The entorhinal-hippocampal system has properties of an ART spatial category learning system, with hippocampal place cells as the spatial categories. See the text for details.
    || Entorhinal-hippocampal interactions as an ART system. Hippocampal place cells as spatial categories. Angular head velocity-> head direction cells-> stripe cells- small scale 1D periodic code (ECIII) SOM-> grid cells- small scale 2D periodic code (ECII) SOM-> place cells- larger scale spatial map (DG/CA3)-> place cells (CA1)-> conjunctive-coding cells (EC V/VI)-> top-down feedback back to stripe cells- small scale 1D periodic code (ECIII). stripe cells- small scale 1D periodic code (ECIII)-> place cells (CA1).
  • image p613fig16.43 The main visual form and motion processing stream mechanisms of SOVEREIGN, many of them described at length in previous chapters.
    || Render 3-D scene (R3DS), figure-ground separation (FGS), log-polar transform (LPT), Gaussian coarse-coding (GCC), Invariant visual target map (IVTM), What Fuzzy ART (WhatFuzz), body spatial coordinates (BSC), where reactive visual TPV storage (WRVTS), Directional transient cell network (DTCN), Motion direction hemifield map (MDHM), Hemifield left/right scoring (HLRS), reactive visual control signal (RVCS), Parvo/Magno/Erg competition (PMEC), Approach and Orient GOp (AOGp), GOm (GOm). R3DS [parvo-> FGS, magno-> DTCN], FGS-> [LPT, WRVTS], LPT-> GCC-> IVTM-> WhatFuzz, BSC-> [RVTS, PMEC], PMEC-> [gateRVTS-> RVTS, gateRVCS-> RVCS], DTCN-> MDHM-> HLRS, HLRS-> [PMEC, RVCS], AOGp-> gateRVTS, GOm-> gateRVCS.
  • p190 Howell: [neural microcircuits, modal architectures] used in ART -
    bottom-up filters | top-down expectations | purpose
    instar learning | outstar learning | p200fig05.13 Expectations focus attention: [in, out]star often used to learn the adaptive weights. top-down expectations to select, amplify, and synchronize expected patterns of critical features, while suppressing unexpected features
    LGN Lateral Geniculate Nucleus | V1 cortical area | p192fig05.06 focus attention on expected critical features
    EC-III entorhinal stripe cells | CA1 hippocampal place cells | p600fig16.36 entorhinal-hippocampal system has properties of an ART spatial category learning system, with hippocampal place cells as the spatial categories
    auditory orienting arousal | auditory category | p215fig05.28 How a mismatch between bottom-up and top-down patterns can trigger activation of the orienting system A and, with it, a burst of nonspecific arousal to the category level. ART Matching Rule: TD mismatch can suppress a part of F1 STM pattern, F2 is reset if degree of match < vigilance
    auditory stream with/without [silence, noise] gaps | perceived sound continues? | p419fig12.17 The auditory continuity illusion: Backwards in time - How does a future sound let past sound continue through noise? Resonance!
    visual perception, learning and recognition of visual object categories | motion perception, spatial representation and target tracking | p520fig14.02 pART predictive Adaptive Resonance Theory. Many adaptive synapses are bidirectional, thereby supporting synchronous resonant dynamics among multiple cortical regions. The output signals from the basal ganglia that regulate reinforcement learning and gating of multiple cortical areas are not shown.
    red - cognitive-emotional dynamics
    green - working memory dynamics
    black - see [bottom-up, top-down] lists
    EEG with mismatch, arousal, and STM reset events | expected [P120, N200, P300] event-related potentials (ERPs) | p211fig05.21 Sequences of P120, N200, and P300 event-related potentials (ERPs) occur during oddball learning EEG experiments under conditions that ART predicted should occur during sequences of mismatch, arousal, and STM reset events, respectively.
    Cognitive | Emotional | p541fig15.02 nSTART neurotrophic Spectrally Timed Adaptive Resonance Theory: Hippocampus can sustain a Cognitive-Emotional resonance that can support "the feeling of what happens" and knowing what event caused that feeling. Hippocampus enables adaptively timed learning that can bridge a trace conditioning gap, or other temporal gap between Conditioned Stimulus (CS) and Unconditioned Stimulus (US).

    background colours in the table signify:
    white | general microcircuit : a possible component of ART architecture
    lime green | sensory perception [attention, expectation, learn]. Table includes [see, hear, !!*must add touch example*!!], no Grossberg [smell, taste] yet?
    light blue | post-perceptual cognition?
    pink | "the feeling of what happens" and knowing what event caused that feeling
  • image p163fig04.39 A schematic of the LAMINART model that explains key aspects of laminar visual cortical anatomy and dynamics. LGN -> V1 [6, 4, 2/3] -> V2 [6, 4, 2/3]
    || p163c1h0.6 "... The first article about laminar computing ... proposed how the laminar cortical model could process 2D pictures using bottom-up filtering and horizontal bipole grouping interactions (Grossberg, Mingolla, Ross 1997). In 1999, I was able to extend the model to also include top-down circuits for expectation and attention (Grossberg 1999)(right panel). Such a synthesis of laminar bottom-up, horizontal, and top-down circuits is characteristic of the cerebral cortex (left panel). I called it LAMINART because it began to show how properties of Adaptive Resonance Theory, or ART, notably the ART prediction about how top-down expectations and attention work, are realized by identical cortical cells and circuits. You can immediately see from the schematic laminar circuit diagram ... (right panel) that circuits in V2 seem to repeat circuits in V1, albeit with a larger spatial scale, despite the fact that V1 and V2 carry out different functions. How this anatomical similarity can coexist with functional diversity will be clarified in subsequent sections and chapters. It enables different kinds of biological intelligence to communicate seamlessly while carrying out their different psychological functions. ..."
  • image p174fig04.52 An example of how the 3D LAMINART model can transform the two monocular images of the random dot stereogram in the top row into the three depth-separated surface representations in the bottom row.
    || Stereogram surface percepts: surface lightnesses are segregated in depth (Fang and Grossberg 2009). [left, right] inputs, [far, fixation, near] planes. Contrast with algorithms that just compute disparity matches and let computer code build the surface, eg (Marr, Poggio, etal 1974).
  • image p182fig04.58 LAMINART model processing stages that are sufficient to explain many percepts of transparency, including those summarized in Figure 4.57.
    || [left, right] eye, [LGN, V1 [6, 4, 3B, 2/3 A], V2 [4, 2/3]], [mo, bi]nocular cart [simple, complex] cells, [excita, inhibi]tory cart [connection, cell]s.
  • image p230fig05.38 The Synchronous Matching ART, or SMART, model includes spiking neurons in a laminar cortical hierarchy. I developed SMART with my PhD student Massimiliano Versace. By unlumping LAMINART to include spiking neurons, finer details of neurodynamics, such as the existence of faster gamma oscillations during good enough matches, and slower beta oscillations during bad enough mismatches, could be shown as emergent properties of network interactions.
    || Second order thalamus -> specific thalamic nucleus -> Thalamic reticular nucleus -> neocortical laminar circuit [6ll, 6l, 5, 2/3, 1] -> Higher order cortex. Similar for First order thalamus -> First order cortex, with interconnection to Second order, nonspecific thalamic nucleus
  • image p231fig05.39 The SMART hypothesis testing and learning cycle predicts that vigilance increases when a mismatch in subcortical regions like the nonspecific thalamus activates the nucleus basalis of Meynert which, in turn, broadcasts a burst of the neurotransmitter acetylcholine, or ACh, to deeper cortical layers. Due to the way in which LAMINART proposes that cortical matching and mismatching occurs, this ACh burst can increase vigilance and thereby trigger a memory search. See the text for details.
    || [BU input, [, non]specific thalamic nucleus, thalamic reticular nucleus, neocortical laminar circuit] cart [Arousal, Reset, Search, Vigilance]
  • image p246fig05.48 Microcircuits of the LAMINART model that I developed with Rajeev Raizada. See the text for details of how they integrate bottom-up adaptive filtering, horizontal bipole grouping, and top-down attentional matching that satisfies the ART Matching Rule.
    ||
  • image p248fig05.49 This circuit of the LAMINART model helps to explain properties of Up and Down states during slow wave sleep, and how disturbances in ACh dynamics can disrupt them.
    ||
  • image p375fig11.07 The 3D LAMINART model uses both monocular and binocular simple cells to binocularly fuse like image contrasts. The remainder of the model generates 3D boundary and surface representations of multiple kinds of experiments as well as of natural scenes.
    || 3D LAMINART model (Cao, Grossberg 2005). [left, right] eye cart [LGN, V1 blob [4->3B->2/3A], V2 thin stripe [4->2/3A], V4]. V1 blob [V1-4 monocular, V1 interior binocular] simple cells. [complex, simple, inhibitory] cells, on-center off-surround
  • image p376fig11.10 The 3D LAMINART model shows how the disparity filter can be integrated into the circuit that completes 3D boundary representations using bipole grouping cells. It also explains how surface contours can strengthen boundaries that succeed in generating closed filling-in domains.
    || 3D LAMINART model (Cao, Grossberg 2005). [left, right] eye cart [LGN, V1 blob [4->3B->2/3A] surface contour, V2 thin stripe (monocular surface) [4->2/3A], V2 interior [disynaptic inhibitory interneurons, bipole grouping cells, disparity filter, V4 binocular surface]. [complex, simple, inhibitory] cells, on-center off-surround
  • image p378fig11.12 How monocular and binocular information are combined in V1 and V2 in the 3D LAMINART model.
    || Model utilizes monocular information. [left, right] eye cart V1-[4 monocular simple, 3B binocular simple, 2/3A [mo, bi]nocular complex] cells, V2-4 binocular complex cells. black = monocular cells, blue = binocular cells. In V2, monocular inputs add to binocular inputs along the line of sight and contribute to depth perception.
  • image p379fig11.13 How the 3D LAMINART model explains DaVinci stereopsis. All the stages of boundary and surface formation are color coded to clarify their explanation. Although each mechanism is very simple, when all of them act together, the correct depthful surface representation is generated. See the text for details.
    || DaVinci stereopsis (Nakayama, Shimojo 1990). An emergent property of the previous simple mechanisms working together. [V1 binocular boundaries -> V2 initial boundaries -> V2 final boundaries -> V4 surface] pair [Binocular match: boundaries of thick bar -> Add monocular boundaries along lines-of-sight -> Line-of-sight inhibition kills weaker vertical boundaries -> 3D surface percept not just a disparity match!] pair [Binocular match: right edge of thin and thick bars -> Strongest boundaries: binocular and monocular boundaries add -> Vertical boundaries from monocular left edge of thin bar survive -> Filling-in contained by connected boundaries]. cart [very near, near, fixation plane, far, very far]
  • image p380fig11.14 The model explanation of DaVinci stereopsis when the input stimuli have opposite contrast polarities.
    || Polarity-reversed Da Vinci stereopsis (Nakayama, Shimojo 1990). Same explanation! (... as Figure 11.13 ...) [V1 binocular boundaries -> V2 initial boundaries -> V2 final boundaries -> V4 surface] cart [very near, near, fixation plane, far, very far]
  • image p395fig11.34 A comparison of the properties of other rivalry models with those of the 3D LAMINART model (surrounded by red border). Significantly, only 3D LAMINART explains both stable vision and rivalry (green border).
    || Comparison of rivalry models
  • image p401fig11.41 The 3D LAMINART model proposes how angle cells and disparity-gradient cells interact through learning to generate 3D representations of slanted objects.
    || 3D LAMINART model. [LGN, V1, V2, V4] Four key additions: 1. Angle cells - tuned to various angles; 2. Disparity-gradient cells - tuned to disparity gradients in the image; 3. weights from [angle to disparity-gradient] cells - learned while viewing 3D image; 4. Collinear grouping between [angle to disparity-gradient] cells - disambiguates ambiguous groupings.
  • image p402fig11.44 3D scenic reconstruction of the image in Figure 11.43 by the 3D LAMINART model.
    || Disparity [5, 6, 8, 10, 11, 14]: images of objects in common depth planes
  • image p445fig12.43 The LIST PARSE laminar cortical Cognitive Working Memory circuit, that is proposed to occur in ventrolateral prefrontal cortex, is homologous to the LAMINART circuit that models aspects of how visual cortex sees. The Motor Working Memory, VITE Trajectory Generator, and Variable-Rate Volitional Control circuits model how other brain regions, including dorsolateral prefrontal cortex, motor cortex, cerebellum, and basal ganglia, interact with the Cognitive Working Memory to control working memory storage and variable-rate performance of item sequences.
    || List parse circuit diagram. Connectivity convention. sequence chunks [<- BU filter, TD expectation ->] working memory. Working memory and sequence chunking circuit is homologous to visual LAMINART circuit!
  • image p498fig13.22 (left column, top row) Adaptive filtering and conditioned arousal are both needed to regulate what cues can learn to activate particular space-time patterns. These developments lead inexorably to basic cognitive abilities, as embodied in the 3D LAMINART models for 3D vision and figure-ground perception (Chapter 11) and the 3D ARTSCAN SEARCH model for invariant object learning, recognition, and 3D search (Chapter 6). (right column, top row) Conditioned arousal enables only emotionally important cues to activate a motivationally relevant space-time pattern. (bottom row) Conditioned arousal and drive representations arise naturally from the unlumping of avalanche circuits to make them selective to motivationally important cues. The MOTIVATOR model is a natural outcome of this unlumping process (this chapter).
    || (top) Adaptive filtering and Conditioned arousal. Towards Cognition: need to filter inputs to the command cell. Towards Emotion: important signals turn arousal ON and OFF. (bottom) Conditioned arousal and Drive representations. Competition between conditioned arousal sources at drive representations, eg amygdala.
  • image p228fig05.37 A macrocircuit of the neurotrophic Spectrally Timed ART, or nSTART, model. I developed nSTART with my PhD student Daniel Franklin. It proposes how adaptively timed learning in the hippocampus, bolstered by Brain Derived Neurotrophic Factor, or BDNF, helps to ensure normal memory consolidation.
    || habituative gates, CS, US, Thalamus (sensory cortex, category learning, conditioned reinforcer learning, adaptively timed learning and BDNF), Amygdala (incentive motivation learning), Hippocampus (BDNF), Prefrontal Cortex (attention), Pontine nuclei, Cerebellum (adaptively timed motor learning)
  • image p541fig15.02 The neurotrophic Spectrally Timed Adaptive Resonance Theory, or nSTART, model of (Franklin, Grossberg 2017) includes hippocampus to enable adaptively timed learning that can bridge a trace conditioning gap, or other temporal gap between CS and US.
    || Hippocampus can sustain a Cognitive-Emotional resonance: that can support "the feeling of what happens" and knowing what event caused that feeling. [CS, US] -> Sensory Cortex (SC) <- motivational attention <-> category learning -> Prefrontal Cortex (PFC). SC conditioned reinforcement learning-> Amygdala (cannot bridge the temporal gap) incentive motivational learning-> PFC. SC adaptively timed learning and BDNF-> Hippocampus (can bridge the temporal gap) BDNF-> PFC. PFC adaptively timed motor learning-> cerebellum.
  • p190 Howell: [neural microcircuits, modal architectures] used in ART -
    bottom-up filters | top-down expectations | purpose
    instar learning | outstar learning | p200fig05.13 Expectations focus attention: [in, out]star often used to learn the adaptive weights. Top-down expectations select, amplify, and synchronize expected patterns of critical features, while suppressing unexpected features.
    LGN Lateral Geniculate Nucleus | V1 cortical area | p192fig05.06 focus attention on expected critical features
    EC-III entorhinal stripe cells | CA1 hippocampal place cells | p600fig16.36 entorhinal-hippocampal system has properties of an ART spatial category learning system, with hippocampal place cells as the spatial categories
    auditory orienting arousal | auditory category | p215fig05.28 How a mismatch between bottom-up and top-down patterns can trigger activation of the orienting system A and, with it, a burst of nonspecific arousal to the category level. ART Matching Rule: TD mismatch can suppress part of the F1 STM pattern; F2 is reset if degree of match < vigilance.
    auditory stream with/without [silence, noise] gaps | perceived sound continues? | p419fig12.17 The auditory continuity illusion: backwards in time - how does a future sound let a past sound continue through noise? Resonance!
    visual perception, learning and recognition of visual object categories | motion perception, spatial representation and target tracking | p520fig14.02 pART predictive Adaptive Resonance Theory. Many adaptive synapses are bidirectional, thereby supporting synchronous resonant dynamics among multiple cortical regions. The output signals from the basal ganglia that regulate reinforcement learning and gating of multiple cortical areas are not shown. (red - cognitive-emotional dynamics; green - working memory dynamics; black - see [bottom-up, top-down] lists)
    EEG with mismatch, arousal, and STM reset events | expected [P120, N200, P300] event-related potentials (ERPs) | p211fig05.21 Sequences of P120, N200, and P300 event-related potentials (ERPs) occur during oddball learning EEG experiments under conditions that ART predicted should occur during sequences of mismatch, arousal, and STM reset events, respectively.
    Cognitive | Emotional | p541fig15.02 nSTART neurotrophic Spectrally Timed Adaptive Resonance Theory: hippocampus can sustain a Cognitive-Emotional resonance that can support "the feeling of what happens" and knowing what event caused that feeling. Hippocampus enables adaptively timed learning that can bridge a trace conditioning gap, or other temporal gap between Conditioned Stimulus (CS) and Unconditioned Stimulus (US).

    background colours in the table signify:
    white | general microcircuit: a possible component of ART architecture
    lime green | sensory perception [attention, expectation, learn]. Table includes [see, hear, !!*must add touch example*!!], no Grossberg [smell, taste] yet?
    light blue | post-perceptual cognition?
    pink | "the feeling of what happens" and knowing what event caused that feeling
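The ART Matching Rule and vigilance test that recur throughout the table above can be summarized in a minimal sketch. This is a hypothetical ART-1-style hypothesis-testing cycle, not code from Grossberg's book; the function name, binary feature patterns, and the 0.75 vigilance default are all illustrative assumptions.

```python
import numpy as np

def art_match_cycle(input_pattern, prototypes, vigilance=0.75):
    """Minimal ART-1-style cycle (a sketch): the bottom-up input is
    filtered against stored category prototypes; the winning category
    reads out a top-down expectation (its prototype); if the matched
    feature pattern is too small a fraction of the input (match ratio
    below vigilance), the category is reset and search continues."""
    # Bottom-up adaptive filter: order categories by input-prototype overlap.
    order = np.argsort([-(input_pattern @ p) for p in prototypes])
    for j in order:
        expectation = prototypes[j]
        # ART Matching Rule: keep only features the expectation confirms.
        matched = np.minimum(input_pattern, expectation)
        match_ratio = matched.sum() / input_pattern.sum()
        if match_ratio >= vigilance:
            prototypes[j] = matched  # resonance: prototype learns matched features
            return j
        # else: mismatch -> nonspecific arousal resets this category; search next.
    return None  # no category matched; a fuller model would recruit a new one
```

A perfectly matching input resonates with its category, while an input that only half-overlaps every prototype fails the vigilance test everywhere and triggers an exhaustive (unsuccessful) search.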
  • image p221fig05.31 A system like Fuzzy ARTMAP can learn to associate learned categories in one ART network with learned categories in a second ART network. Because both bottom-up and top-down interactions occur in both networks, a bottom-up input pattern to the first ART network can learn to generate a top-down output pattern from the second ART network.
    || Fuzzy ARTMAP. Match tracking realizes minimax learning principle: vigilance increases to just above the match ratio of prototype / exemplar, thereby triggering search
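The match-tracking rule described here can be sketched in a few lines. This is a hypothetical helper, not the published Fuzzy ARTMAP code; the function name and the small epsilon increment are assumed implementation details.

```python
def artmap_learn_step(match_ratio, predicted_label, correct_label,
                      vigilance, epsilon=1e-3):
    """One supervised step of the match-tracking (minimax) rule, as a
    sketch. Returns (accept, new_vigilance): if the active category's
    prediction is wrong, vigilance is raised to just above the current
    prototype/exemplar match ratio, so that category fails the vigilance
    test and bottom-up search selects a different (often finer) category."""
    if predicted_label == correct_label:
        return True, vigilance          # resonance: learn in place
    # Predictive error: track vigilance up just enough to force a reset.
    return False, match_ratio + epsilon
```

On a correct prediction vigilance stays at its baseline; on an error it jumps just above the match ratio, which is what triggers the memory search mentioned in the caption.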
  • image p225fig05.33 Some early ARTMAP benchmark studies. These successes led to the use of ARTMAP, and many variants that we and other groups have developed, in many large-scale applications in engineering and technology, a use that has not abated even today.
    || see Early ARTMAP benchmark studies
  • image p225fig05.34 ARTMAP was successfully used to learn maps of natural terrains with many advantages over those of mapping projects that used AI expert systems. The advantages are so great that many mapping projects started to use this technology.
    || AI expert system - 1 year: field identification of natural regions; derivation of ad hoc rules for each region by expert geographers; correct 80,000 of 250,000 site labels; 230m (site-level) scale. ARTMAP system - 1 day: rapid, automatic, no natural regions or rules; confidence map; 30m (pixel-level) scale can see roads; equal accuracy at test sites
  • image p242fig05.46 Computer simulations of how two variants of Distributed ARTMAP incrementally learn the 5-4 category structure. See the text for details.
    || Distributed ARTMAP with [self-supervised learning, post-training LTM noise]
  • image p436fig12.30 The conscious ARTWORD, or cARTWORD, laminar cortical speech model simulates how future context can disambiguate noisy past speech sounds in such a way that the completed percept is consciously heard to proceed from past to future as a feature-item-list resonant wave propagates through time.
    || cARTWORD: Laminar cortical model macrocircuit (Grossberg, Kazerounian 2011) Simulates PHONEMIC RESTORATION: Cognitive Working Memory (processed item sequences) - [Excitatory-> inhibitory-> habituative-> adaptive filter-> adaptive filter-> adaptive filter with depletable synapse-> Acoustic [item, feature]]
  • image p453fig12.49 The ARTWORD model that I published in 2000 with my PhD student Christopher Myers simulates data such as the (Repp etal 1978) data in Figure 12.68. See the text for details.
    || ARTWORD model (Grossberg, Myers 2000). Input phonetic features-> Phonemic item working memory-> Masking Field unitized lists-> Automatic gain control-> Phonemic item working memory. [habituative gate, adaptive filter]s.
  • image p453fig12.50 The ARTWORD perception cycle shows how sequences of items activate possible list chunks, which compete among each other and begin to send their top-down expectations back to the item working memory. An item-list resonance develops through time as a result.
    || ARTWORD perception cycle. (a) bottom-up activation (b) list chunk competition (c) item-list resonance (d) chunk reset due to habituative collapse.
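The habituative collapse in panel (d) can be illustrated with one standard form of Grossberg's habituative transmitter gate, dz/dt = eps*(1 - z) - lam*S*z. This is a toy sketch, not the ARTWORD simulation code; the parameter values are illustrative assumptions.

```python
def habituative_gate(signal, eps=0.01, lam=0.1, z0=1.0, dt=1.0, steps=400):
    """Simulate a habituative transmitter gate z driven by a sustained
    input S (a sketch of the chunk-reset mechanism): the gated signal
    S*z starts strong, then collapses as the transmitter z depletes
    toward its equilibrium eps / (eps + lam*S), allowing the next
    list chunk to win the competition."""
    z, gated = z0, []
    for _ in range(steps):
        gated.append(signal * z)                      # gated output S*z
        z += dt * (eps * (1.0 - z) - lam * signal * z)  # transmitter dynamics
    return gated
```

With a constant unit input, the gated signal decays from 1.0 toward eps/(eps + lam) ≈ 0.09, which is the "habituative collapse" that resets the active chunk.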
  • image p455fig12.52 Simulation of cARTWORD dynamics in response to the complete list /1/-/2/-/3/. The relevant responses are surrounded by a red box.
    || Presentation of a normal sequence: input /1/-/2/-/3/. |c(i,1)-5| vs time (msec). List chunks select most predictive code. Order stored in WM layers. Resonant activity of /1/-/2/-/3/ in item and feature layers corresponds to conscious speech percept.
  • image p456fig12.53 Simulation of cARTWORD dynamics in response to the partial list /1/-silence-/3/ with /2/ replaced by silence. Only the representations of these items can be seen in the red box.
    || Presentation with silence duration: input /1/-silence-/3/. |c(i,1)-5| vs time (msec). List chunks select most predictive code. Order stored in WM layers. Gap in resonant activity of /1/-silence-/3/ in item and feature layers corresponds to perceived silence.
  • image p456fig12.54 Item /2/ is restored in the correct list position in response to the list /1/-noise-/3/.
    || Presentation with noise: input /1/-noise-/3/. |c(i,1)-5| vs time (msec). List chunks select the most predictive code. Order restored in WM layers. Resonant activity of /1/-/2/-/3/ in item and feature layers corresponds to restoration of item /2/ replaced by noise in input.
  • image p457fig12.55 Item /4/ is restored in the correct list position in response to the list /1/-noise-/5/. This and the previous figure show how future context can disambiguate past noisy sequences that are otherwise identical.
    || Presentation with noise: input /1/-noise-/5/. |c(i,1)-5| vs time (msec). List chunks select the most predictive code. Order restored in WM layers. Resonant activity of /1/-/4/-/5/ in item and feature layers corresponds to restoration of item /4/ replaced by noise in input.
  • image p444fig12.42 The LIST PARSE laminar cortical model of working memory and list chunking that I published with Lance Pearson in 2008 simulated the Averbeck etal data in Figure 12.41, as in the left column of the figure. It also simulated cognitive data about working memory storage by human subjects. See the text for details.
    || LIST PARSE: Laminar cortical model of working memory and list chunking (Grossberg, Pearson 2008). Simulates data about: [immediate, delayed, continuous] distractor free recall; immediate serial recall; and variable-speed sequential performance of motor acts. [velocity, acceleration] vs time (ms) from recall cue.
  • image p446fig12.44 (left column, top row) LIST PARSE can model linguistic data from human subjects. In this figure, model parameters are fixed to enable a close fit to data about error-type distributions in immediate free recall experiments, notably transposition errors. (right column, top row) Simulation and data showing bowing of the serial position curve, including an extended primacy gradient. (left column, bottom row) The simulation curve overlays data about list length effects, notably the increasing recall difficulty of longer lists during immediate serial recall (ISR). (right column, bottom row) Simulation (bottom image) and data (top image) of the limited temporal extent for recall.
    || (1. TL) Error-type distributions in immediate serial recall (Hanson etal 1996). % occurrence vs serial position. Graph convention: Data- dashed lines; Simulations- solid lines. Six letter visual ISR. Order errors- transpositions of neighboring items are the most common. Model explanation: Noisy activation levels change relative order in primacy gradient. Similar activation of neighboring items most susceptible to noise. Model parameters fitted on these data. (2. TR) Bowing of serial position curve (Cowan etal 1999). % correct vs serial position. Auditory ISR with various list lengths (graphs shifted rightward): For [, sub-]span lists- extended primacy, with one (or two) item recency; Auditory presentation- enhanced performance for last items. LIST PARSE: End effects- first and last items half as many members; Echoic memory- last presented item retained in separate store. (3. BL) List length effects, circles (Crannell, Parrish 1968), squares (Baddeley, Hitch 1975), solid line- simulation. % list correct vs list length. Variable list length ISR: longer lists are more difficult to recall. LIST PARSE: More items- closer activation levels and lower absolute activity level with enough inputs; Noise is more likely to produce order errors; Activity levels more likely to drop below threshold. (4. BR) Limited temporal extent for recall (Murdoch 1961). % recalled vs retention interval (s). ISR task with distractor-filled retention intervals (to prevent rehearsal): Increasing retention interval- decreases probability of recalling list correctly; Load dependence- longer lists more affected by delays; Performance plateau- subjects reach apparent asymptote. LIST PARSE: Increased convergence of activities with time; loss of order information.
  • image p447fig12.45 (left column) LIST PARSE simulations of the proportion of order errors as a function of serial position for 6 item lists with (a) an extended pause of 7 time units between the third and fourth items, and (b) pauses of 5 time units (solid curve) and 10 time units (dashed curve) between all items. (right column) Simulations (solid curves) and data (dashed curves) illustrating close model fits in various immediate free recall tasks.
    || (Left) Temporal grouping and presentation variability. Temporal grouping: Inserting an extended pause leads to inter-group bowing; Significantly different times of integration and activity levels across pause, fewer interchanges. (Right) Immediate free recall, and [delayed, continuous] distractor-free recall. Overt rehearsal IFR task with super-span (ie 20 item) lists: Extended recency- even more extended with shorter ISIs; Increased probability of recall with diminished time from last rehearsal; Early items in list rehearsed most. LIST PARSE (unique) for long lists: Incoming items form recency gradient; Rehearsal (re-presentation) based upon level of activity.
  • image p254fig06.03 These interactions of the ARTSCAN Search model enable it to learn to recognize and name invariant object categories. Interactions between spatial attention in the Where cortical stream, via surface-shroud resonances, and object attention in the What cortical stream, that obeys the ART Matching Rule, coordinate these learning, recognition, and naming processes.
    || Retinal image -> On & OFF cell contrast normalization (retina/LGN) -> polarity [, in]sensitive contrast enhancement (V1) -> object [boundary (V2), surface (V2/V4)] -> surface contour (V2); What stream categories: volition control (BG). object boundary (V2) <-> view (ITp) <-> view integrator (ITp) <-> object (ITa) <-> [object-value (ORB), value (Amyg)] <-> name (PFC)
  • image p255fig06.04 The ARTSCAN Search model can also search for a desired target object in a scene, thereby clarifying how our brains solve the Where's Waldo problem.
    || similar illustration as Figure 06.03, with some changes to arrows
  • image p259fig06.08 The distributed ARTSCAN, or dARTSCAN, model includes spatial attention in both PPC and PFC, and both fast-acting attention, triggered by transient cells in Where cortical areas such as MT, and slower-acting surface-shroud resonances in What cortical areas such as V4 and PPC. See the text for details.
    || dARTSCAN spatial attention hierarchy, Fast (Where stream) Slow (What stream) (Foley, Grossberg, and Mingolla 2012). [transient cells (MT) ->, object surfaces (V4) <->] [object shrouds (PPC) <-> spatial shrouds (PPC/PFC)]
  • image p271fig06.17 Persistent activity in IT cells is just what is needed to enable view-invariant object category learning by ARTSCAN to be generalized to [view, position, size]-invariant category learning by positional ARTSCAN, or pARTSCAN. See the text for details.
    || Persistent activity in IT. Physiological data show that persistent activity exists in IT (Fuster and Jervey 1981, Miyashita and Chang 1988, Tomita etal 1999). Adapted from (Tomita etal 1999 Nature)
  • image p272fig06.18 The pARTSCAN model can learn [view, position, size]-invariant categories by adding view category integrator cells that have the properties of persistent neurons in IT. These integrator cells get reset with the invariant object category, not the view category.
    || pARTSCAN: positionally-invariant object learning. (Cao, Grossberg, Markowitz 2011). IT cells with persistent activities are modeled by view category integrators in ITp. View-specific category cells are RESET as the eyes move within the object. View category integrator cells are NOT RESET when the view-specific category is reset. They are RESET along with invariant object category cells when a spatial attention shift occurs.
  • image p273fig06.20 pARTSCAN can simulate the IT cell recoding that Li and DiCarlo reported in their swapping experiments because the swapping procedure happens without causing a parietal reset burst to occur. Thus the originally activated invariant category remains activated and can get associated with the swapped object features.
    || Simulation of Li and DiCarlo swapping data. data (Li and DiCarlo 2008), model (Cao, Grossberg, Markowitz 2011). normalized response vs. exposure (swaps and/or hours)
  • image p274fig06.21 pARTSCAN can also simulate the trade-off in IT cell responses between position invariance and selectivity that was reported by Zoccolan etal 2007. This trade-off limits the amount of position invariance that can be learned by a cortical area like V1 that is constrained by the cortical magnification factor.
    || Trade-off in IT cell response properties. Inferotemporal cortex cells with greater position invariance respond less selectively to natural objects. invariance-tolerance, selectivity-sparseness. data (Zoccolan etal 2007) model (Grossberg, Markowitz, Cao 2011). position tolerance (PT, degrees) vs sparseness (S)
  • image p274fig06.22 pARTSCAN can simulate how IT cortex processes image morphs, when it learns with high vigilance. See the text for details.
    || Akrami etal simulation: a case of high vigilance. tested on morphs between image pairs
  • image p531fig14.06 Classification of scenic properties as texture categories by the ARTSCENE model. See the text for details.
    || Image-> Feature extraction (texture principal component rankings)-> Learning feature-to-scene mapping (texture category principal component rankings)<- scene class. Large-to-small attentional shrouds as principal component higher.
  • image p531fig14.07 Voting in the ARTSCENE model achieves even better prediction of scene type. See the text for details.
    || Image-> Feature extraction (texture principal component rankings)-> Learning feature-to-scene mapping (texture category principal component rankings)-> evidence accumulation (sum)-> scene class winner-take-all inference. Large-to-small attentional shrouds as principal component higher.
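The evidence accumulation (sum) followed by winner-take-all inference described here can be sketched as follows. This is a toy illustration, not the published ARTSCENE code; the dict-of-scores representation of each attended patch's vote is an assumption.

```python
from collections import Counter

def vote_scene(patch_predictions):
    """Voting stage sketch: each attended texture patch casts graded
    votes for scene classes (a dict of class -> score); evidence is
    accumulated by summation, and the class with the most total
    evidence wins (winner-take-all inference)."""
    evidence = Counter()
    for class_scores in patch_predictions:
        evidence.update(class_scores)        # evidence accumulation (sum)
    return evidence.most_common(1)[0][0]     # winner-take-all
```

Even when individual patches disagree, summing graded evidence across patches lets the consensus class win, which is why voting improves prediction over any single shroud's classification.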
  • image p532fig14.08 Macrocircuit of the ARTSCENE Search neural model for learning to search for desired objects by using the sequences of already experienced objects and their locations to predict what and where the desired object is. V1 = First visual area or primary visual cortex; V2 = Second visual area; V4 = Fourth visual area; PPC = Posterior Parietal Cortex; ITp = posterior InferoTemporal cortex; ITa = anterior InferoTemporal cortex; MTL = Medial Temporal Lobe; PHC = ParaHippoCampal cortex; PRC = PeriRhinal Cortex; PFC = PreFrontal Cortex; DLPFC = DorsoLateral PreFrontal Cortex; VPFC = Ventral PFC; SC = Superior Colliculus.
    ||
  • image p533fig14.09 Search data and ARTSCENE Search simulations of them in each pair of images from (A) to (F). See the text for details.
    || 6*[data vs simulation], [Response time (ms) versus epoch].
  • image p214fig05.26 When a big enough mismatch occurs, the orienting system is activated and sends a burst of nonspecific arousal to the category level. This Mismatch Detector has properties of the N200 ERP.
    || Mismatch triggers nonspecific arousal. Mismatch at F1 elicits a nonspecific event at F2. Call this event nonspecific arousal. N200 ERP Näätänen etal: 1. bottom-up, 2. unconditionable, 3. nonspecific, 4. mismatch
  • image p230fig05.38 The Synchronous Matching ART, or SMART, model includes spiking neurons in a laminar cortical hierarchy. I developed SMART with my PhD student Massimiliano Versace. By unlumping LAMINART to include spiking neurons, finer details of neurodynamics, such as the existence of faster gamma oscillations during good enough matches, and slower beta oscillations during bad enough mismatches, could be shown as emergent properties of network interactions.
    || Second order thalamus -> specific thalamic nucleus -> Thalamic reticular nucleus -> neocortical laminar circuit [6ll, 6l, 5, 2/3, 1] -> Higher order cortex. Similar for First order thalamus -> First order cortex, with interconnection to Second order, nonspecific thalamic nucleus
  • image p231fig05.39 The SMART hypothesis testing and learning cycle predicts that vigilance increases when a mismatch in subcortical regions like the nonspecific thalamus activates the nucleus basalis of Meynert which, in turn, broadcasts a burst of the neurotransmitter acetylcholine, or ACh, to deeper cortical layers. Due to the way in which LAMINART proposes that cortical matching and mismatching occurs, this ACh burst can increase vigilance and thereby trigger a memory search. See the text for details.
    || [BU input, [, non]specific thalamic nucleus, thalamic reticular nucleus, neocortical laminar circuit] cart [Arousal, Reset, Search, Vigilance]
  • image p232fig05.40 Computer simulation of how the SMART model generates (a) gamma oscillations if a good enough match occurs, or (c) beta oscillations if a bad enough match occurs. See the text for details.
    || Brain oscillations during match/mismatch, data, simulation. (a) TD corticothalamic feedback increases synchrony (Sillito etal 1994) (b) Match increases γ oscillations (c) Mismatch increases θ,β oscillations
  • image p232fig05.41 (a)-(c). The sequence of interlaminar events that SMART predicts during a mismatch reset. (d) Some of the compatible neurophysiological data.
    || Mismatch causes layer 5 dendritic spikes that trigger reset. (a) Arousal causes increase in nonspecific thalamic nuclei firing rate and layer 5 dendritic and later somatic spikes (Larkum and Zhu 2002, Williams and Stuart 1999) (b) Layer 5 spikes reach layer 4 via layer 6i and inhibitory neurons (Lund and Boothe 1975, Gilbert and Wiesel 1979) (c) habituative neurotransmitters in layer 6i shift the balance of active cells in layer 4 (Grossberg 1972, 1976) (d) Dendritic stimulation fires layer 5 (Larkum and Zhu 2002) stimulation apical dendrites of nonspecific thalamus
  • Grossberg 2021 p229c2h0.60 SMART computer simulations demonstrate that a good enough match of a top-down expectation with a bottom-up feature pattern generates an attentive resonance during which the spikes of active cells synchronize in the gamma frequency range of 20-70 Hz (Figure 5.40). Many labs have reported a link between attention and gamma oscillations in the brain, including two articles published in 2001, one from the laboratory of Robert Desimone when he was at the National Institute of Mental Health in Bethesda (Fries, Reynolds, Rorie, Desimone 2001), and the other from the laboratory of Wolf Singer in Frankfurt (Engel, Fries, Singer 2001). You'll note that Pascal Fries participated in both studies, and is an acknowledged leader in neurobiological studies of gamma oscillations; eg (Fries 2009).
  • image p468fig12.65 Linguistic properties of the PHONET model and some of the data that it simulates. The upper left image summarizes the asymmetric transient-to-sustained gain control that helps to create invariant intraword ratios during variable-rate speech. The lower left image summarizes the rate-dependent gain control of the ARTPHONE model that creates rate-invariant working memory representations in response to sequences of variable-rate speech. The right image summarizes the kind of paradoxical VC-CV category boundary data of (Repp 1980) that ARTPHONE simulates. See the text for details.
    || (left upper) [transient, sustained] [working memory, filter, category]. (left lower) phone inputs-> [input rate estimate, features], Features w <- habituative transmitter gates -> categories-> rate invariant phonetic output, input rate estimate-> gain control-> [features, categories] rate-dependent integration of categories and features. (right) % 2-stop vs VC-CV silent interval (msec): [ib-ga, ib-ba, iga, iba].
  • image p420fig12.18 The ARTSTREAM model explains and simulates the auditory continuity illusion as an example of a spectral-pitch resonance. Interactions of ART Matching Rule and asymmetric competition mechanisms in cortical strip maps explain how the tone selects the consistent frequency from the noise in its own stream while separating the rest of the noise into another stream.
    || ARTSTREAM model (Grossberg 1999; Grossberg, Govindarajan, Wyse, Cohen 2004). SPINET. Frequency and pitch strips. Bottom Up (BU) harmonic sieve. Top Down (TD) harmonic ART matching. Exclusive allocation. Learn pitch categories based on early harmonic processing. A stream is a Spectral-Pitch Resonance!
  • image p422fig12.19 The ARTSTREAM model includes mechanisms for deriving streams both from pitch and from source direction. See the text for details.
    || [left, right] cart Peripheral processing = [input signal-> outer & middle ear preemphasis-> basilar membrane gammatone filterbank-> energy measure]. Spectral stream layer-> spectral summation layer-> delays-> [f-, tau] plane-> pitch stream layer-> pitch summation layer.
  • image p423fig12.20 The Spatial Pitch Network, or SPINET, model shows how a log polar spatial representation of the sound frequency spectrum can be derived from auditory signals occurring in time. The spatial representation allows the ARTSTREAM model to compute spatially distinct auditory streams.
    || SPINET model (Spatial Pitch Network) (Cohen, Grossberg, Wyse 1995). 1. input sound 2. Gamma-tone filter bank 3. Short-term average energy spectrum 4. MAP transfer function 5. On-center off-surround and rectification 6. Harmonic weighting 7. Harmonic summation and competition -> PITCH
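Steps 6-7 above (harmonic weighting, then harmonic summation and competition) can be sketched as follows. This is a toy illustration, not the SPINET implementation: the function name, the relative-bandwidth `tol`, the candidate-pitch list, and the max-energy scoring are all assumptions.

```python
import numpy as np

def harmonic_sum_pitch(freqs, energies, candidates, n_harmonics=8, tol=0.03):
    """Harmonic summation sketch: each candidate pitch f0 accumulates the
    spectral energy lying near its first few harmonics k*f0; the candidate
    with the largest summed evidence wins the pitch competition."""
    freqs = np.asarray(freqs, float)
    energies = np.asarray(energies, float)
    best, best_score = None, -np.inf
    for f0 in candidates:
        score = 0.0
        for k in range(1, n_harmonics + 1):
            # Channels within a relative bandwidth of the k-th harmonic.
            near = np.abs(freqs - k * f0) <= tol * k * f0
            if near.any():
                score += energies[near].max()   # harmonic weighting
        if score > best_score:                  # competition: winner takes all
            best, best_score = f0, score
    return best
```

A spectrum containing only 400, 600, and 800 Hz components illustrates the "missing fundamental": 200 Hz wins because three of its harmonics carry energy, even though no energy sits at 200 Hz itself.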
  • image p424fig12.21 One of the many types of data about pitch processing that are simulated by the SPINET model. See the text for details.
    || Pitch shifts with component shifts (Patterson, Wightman 1976; Schouten 1962). Pitch vs lowest harmonic number.
  • image p425fig12.23 ARTSTREAM simulations of the auditory continuity illusion and other streaming properties. (Left column, top row) When two tones are separated by silence (Input), a percept of silence also separates them in a spectral-pitch resonance. (Left column, bottom row) When two tones are separated by broadband noise, the percept of the tone continues through the noise in one stream (stream 1) while the remainder of the noise occurs in a different stream (stream 2). (Right column) Some of the other streaming properties that have been simulated by the ARTSTREAM model.
    || Auditory continuity does not occur without noise. Auditory continuity in noise. Other simulated streaming data.
  • image p431fig12.27 The strip maps that occur in ARTSTREAM and NormNet are variants of a cortical design that also creates ocular dominance columns in the visual cortex.
    || Adult organization of V1 (Grinvald etal http://www.weizmann.ac.il/brain/images/cubes.html). (1) Ocular dominance columns (ODCs): Alternating strips of cortex respond preferentially to visual inputs of each eye (R/L corresponds to Right and Left eye inputs in the figure); Orientation columns: A smooth pattern of changing orientation preference within each ODC. Organized in a pinwheel like fashion.
    p370 Chapter 11 means (Grossberg 2021) page 370, Chapter 11
    p002sec Illusion and reality means (Grossberg 2021) page 2, section Illusion and reality
    p013fig01.09 means (Grossberg 2021) page 13, Figure 1.09 (1.9 as in book)
    p030tbl01.02 means (Grossberg 2021) page 30, Table 1.02 (1.2 as in book)
    p111c2h0.5 means (Grossberg 2021) page 111, column 2, height from top as fraction of page height
    || text... are notes in addition to [figure, table] captions, mostly composed of text within the image, but also including quotes of text in the book. Rarely, it includes comments by Howell preceded by "Howell". The latter are distinct from "readers notes" (see, for example: reader Howell notes).
    p044 Howell: grepStr 'conscious' means a comment by reader Howell, extracted using the grep string shown, referring to page 44 in (Grossberg 2021)
  • p001 Chapter 1 Overview - From Complementary Computing and Adaptive Resonance to conscious awareness
  • p250 Chapter 6 Conscious seeing and invariant recognition - Complementary cortical streams coordinate attention for seeing and recognition
  • p539 Chapter 15 Adaptively timed learning - How timed motivation regulates conscious learning and memory consolidation
  • image p039tbl01.03 The link between consciousness and movement
    ||
    VISUAL | seeing, knowing, and reaching
    AUDITORY | hearing, knowing, and speaking
    EMOTIONAL | feeling, knowing, and acting
  • image p042tbl01.04 The six main kinds of resonances which support different kinds of conscious awareness that will be explained and discussed in this book.
    ||
    type of resonance | type of consciousness
    surface-shroud | see visual object or scene
    feature-category | recognize visual object or scene
    stream-shroud | hear auditory object or stream
    spectral-pitch-and-timbre | recognize auditory object or stream
    item-list | recognize speech and language
    cognitive-emotional | feel emotion and know its source
  • image p270fig06.16 The same target position signal that can command the next saccade also updates a gain field that predictively maintains the attentional shroud in head-centered coordinates, even before the eye movement is complete. This process keeps the shroud invariant under eye movements, so that it can continue to inhibit reset of an emerging invariant category as it is associated with multiple object views, even while the conscious surface representation shifts with each eye movement in retinotopic coordinates. This updating process is often called predictive remapping.
    || Predictive remapping of eye movements! From V3A to LIP. [spatial attention, object attention, figure-ground separation, eye movement remapping, visual search]. (Beauvillain etal 2005, Carlson-Radvansky 1999, Cavanaugh etal 2001, Fecteau & Munoz 2003, Henderson & Hollingworth 2003, Irwin 1991)
  • image p278fig06.27 A surface-shroud resonance through the Where stream enables us to consciously see an object while a feature-category resonance into the What stream enables us to recognize it. Both kinds of resonances can synchronize via visual cortex so that we can know what an object is when we see it.
    || What kinds of resonances support knowing vs seeing? What stream [knowing, feature-prototype resonance], Where stream [seeing, surface-shroud resonance]
  • image p278fig06.28 If the feature-category resonances cannot form, say due to a lesion in IT, then a surface-shroud resonance can still support conscious seeing of an attended object, and looking at or reaching for it, even if the individual doing so knows nothing about the object, as occurs during visual agnosia. The surface-shroud resonance supports both spatial attention and releases commands that embody the intention to move towards the attended object.
    || What kinds of resonances support knowing vs seeing? visual agnosia: reaching without knowing Patient DF (Goodale etal 1991). Attention and intention both parietal cortical functions (Andersen, Essick, Siegel 1985; Gnadt, Andersen 1988; Snyder, Batista, Andersen 1997, 1998)
  • image p355fig10.02 Distinguishing processes of seeing vs knowing has been difficult because they interact so strongly.
    || Seeing vs. Knowing. Seeing and knowing [operate at different levels of the brain, use specialized circuits], but they [interact via feedback, use similar cortical designs, feedback is needed for conscious perception]. Cerebral Cortex: Seeing [V1-V4, MT-MST], Knowing [IT, PFC].
  • image p369fig10.19 Data from (Watanabe etal 2001) showing perceptual learning of the coherent motion direction, despite the lack of extra-foveal attention and awareness of the moving stimuli.
    || Unconscious perceptual learning of motion direction, % correct for two tests, compared to chance level results.
  • image p396fig11.35 Three properties of bipole boundary grouping in V2 can explain how boundaries oscillate in response to rivalry-inducing stimuli. Because all boundaries are invisible, however, these properties are not sufficient to generate a conscious percept of rivalrous surfaces.
    || 3 V2 boundary properties cause binocular rivalry. 1. Bipole grouping, 2. Orientational competition, 3. Activity-dependent habituation
  • image p419fig12.17 The auditory continuity illusion illustrates the ART Matching Rule at the level of auditory streaming. Its "backwards in time" effect of future context on past conscious perception is a signature of resonance.
    || Auditory continuity illusion. input, percept. Backwards in time - How does a future sound let past sound continue through noise? Resonance! - It takes a while to kick in. After it starts, a future tone can maintain it much more quickly. Why does this not happen if there is no noise? - ART Matching Rule! TD harmonic filter is modulatory without BU input. It cannot create something out of nothing.
  • image p436fig12.30 The conscious ARTWORD, or cARTWORD, laminar cortical speech model simulates how future context can disambiguate noisy past speech sounds in such a way that the completed percept is consciously heard to proceed from past to future as a feature-item-list resonant wave propagates through time.
    || cARTWORD: Laminar cortical model macrocircuit (Grossberg, Kazerounian 2011) Simulates PHONEMIC RESTORATION: Cognitive Working Memory (processed item sequences) - [Excitatory-> inhibitory-> habituative-> adaptive filter-> adaptive filter-> adaptive filter with depletable synapse-> Acoustic [item, feature]
  • image p455fig12.52 Simulation of cARTWORD dynamics in response to the complete list /1/-/2/-/3/. The relevant responses are surrounded by a red box.
    || Presentation of a normal sequence: input /1/-/2/-/3/. |c(i,1)-5| vs time (msec). List chunks select most predictive code. Order stored in WM layers. Resonant activity of /1/-/2/-/3/ in item and feature layers corresponds to conscious speech percept.
  • image p484fig13.04 The top-down feedback from the orbitofrontal cortex closes a feedback loop that supports a cognitive-emotional resonance. If this resonance can be sustained long enough, it enables us to have feelings at the same time that we experience the categories that caused them.
    || Cognitive-Emotional resonance. Basis of "core consciousness" and "the feeling of what happens". (Damasio 1999) derives heuristic version of CogEM model from his clinical data. Drive-> amygdala-> prefrontal cortex-> sensory cortex, resonance around the latter 3. How is this resonance maintained long enough to become conscious?
  • p289 Chapter 8 How we see and recognize object motion - Visual form and motion perception obey complementary laws
  • p337 Chapter 9 Target tracking, navigation, and decision-making - Visual tracking and navigation obey complementary laws
  • p353 Chapter 10 Laminar computing by cerebral cortex - Towards a unified theory of biological and artificial intelligence
  • p370 Chapter 11 How we see the world in depth - From 3D vision to how 2D pictures induce 3D percepts
  • p404 Chapter 12 From seeing and reaching to hearing and speaking - Circular reaction, streaming, working memory, chunking, and number
  • p480 Chapter 13 From knowing to feeling - How emotion regulates motivation, attention, decision, and action
  • p517 Chapter 14 How prefrontal cortex works - Cognitive working memory, planning, and emotion conjointly achieve valued goals
  • p539 Chapter 15 Adaptively timed learning - How timed motivation regulates conscious learning and memory consolidation
  • p572 Chapter 16 Learning maps to navigate space - From grid, place, and time cells to autonomous mobile agents
  • p618 Chapter 17 A universal development code - Mental measurements embody universal laws of cell biology and physics
  • p00I Preface - Biological intelligence in sickness, health, and technology
  • p001 Chapter 1 Overview - From Complementary Computing and Adaptive Resonance to conscious awareness
  • p050 Chapter 2 How a brain makes a mind - Physics and psychology split as brain theories were born
  • p086 Chapter 3 How a brain sees: Constructing reality - Visual reality as illusions that explain how we see art
  • p122 Chapter 4 How a brain sees: Neural mechanisms - From boundary completion and surface filling-in to figure-ground perception
  • p184 Chapter 5 Learning to attend, recognize, and predict the world -
  • p250 Chapter 6 Conscious seeing and invariant recognition - Complementary cortical streams coordinate attention for seeing and recognition
  • p280 Chapter 7 How do we see a changing world? - How vision regulates object and scene persistence
  • image pxvifig00.01 Macrocircuit of the visual system
  • image p002fig01.01 The difference between seeing and recognizing.
    || (W. Epstein, R. Gregory, H. von Helmholtz, G. Kanizsa, P. Kellman, A. Michotte...) Seeing an object vs Knowing what it is. Seeing Ehrenstein illusion (See, recognize) vs Recognizing offset grating (Do not see, recognize). offset grating: some boundaries are invisible or amodal.
  • image p002fig01.02 Dalmatian in snow
    || p002c2h0.55 "...This image reminds us that invisible boundaries can sometimes be very useful in helping us to recognize visual objects in the world. ... When we first look at this picture, it may just look like an array of black splotches of different sizes, densities, and orientations across the picture. Gradually, however, we can recognize the Dalmatian in it as new boundaries form in our brain between the black splotches. ..."
  • image p003fig01.03 Amodal completion
    || p003c1h0.75 "... Figure 1.3 illustrates what I mean by the claim that percepts derived from pictures are often illusions. Figure 1.3 (left column) shows three rectangular shapes that abut one another. Our percept of this image irresistibly creates a different interpretation, however. We perceive a horizontal bar lying in front of a partially occluded vertical bar that is amodally completed behind it. ..."
  • image p004fig01.04 (top row) Kanizsa stratification; (bottom row) transparency images
    || [top row images] "... are called stratification percepts... This simple percept can ... be perceived either as a white cross in front of a white outline square, or as a white outline square in front of a white cross. The former percept usually occurs, but the percept can intermittently switch between these two interpretations. ...it is said to be a bistable percept. ..."
  • image p008fig01.05 Noise-saturation dilemma.
    || cell activity vs cell number; [minimum, equilibrium, current, maximal] activity
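    The noise-saturation dilemma in this figure is resolved in the book by shunting on-center off-surround networks. A minimal numerical sketch (not from the book; the function name and parameter values are illustrative assumptions) of the feedforward equilibrium, showing how such a network normalizes total activity while preserving input ratios at any overall input intensity:

    ```python
    def shunting_equilibrium(inputs, A=1.0, B=1.0):
        """Feedforward shunting on-center off-surround network at equilibrium:
        0 = -A*x_i + (B - x_i)*I_i - x_i*sum(I_j for j != i)
          => x_i = B*I_i / (A + sum_j I_j).
        A is the passive decay rate, B the excitatory saturation ceiling."""
        total = sum(inputs)
        return [B * I for I in inputs][0:0] or [B * I / (A + total) for I in inputs]

    # Scaling all inputs by 10 leaves the stored activity ratios intact
    # (no saturation), and total activity stays bounded below B (normalization).
    weak = shunting_equilibrium([1.0, 2.0, 1.0])
    strong = shunting_equilibrium([10.0, 20.0, 10.0])
    ```

    The key design point is the multiplicative (shunting) inhibition: each cell computes its input's share of the total, so sensitivity to relative pattern is retained even as total input grows without bound.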
  • image p009fig01.06 Primacy gradient of activity stored in working memory within a recurrent shunting on-center off-surround network. Rehearsal is controlled by a nonspecific rehearsal wave and self-inhibitory feedback of the item that is currently being rehearsed. Green = excitatory, red = inhibitory.
    || inputs -> item and order WM storage -> competitive selection -> rehearsal wave -> outputs
  • image p011fig01.07 The choice of signal function f determines how an initial activity pattern will be transformed and stored in short-term memory (STM). Among [same, slower, faster]-than-linear signal functions, only the last one can suppress noise. It does so as it chooses the population that receives the largest input for storage, while suppressing the activities of all other populations, thereby giving rise to a winner-take-all choice.
    || initial pattern: x_i(0) vs i
    X_i(∞) = x_i(∞) / Σ_j x_j(∞)
    linear | perfect storage of any pattern | amplifies noise (or no storage)
    slower-than-linear | saturates | amplifies noise
    faster-than-linear | chooses max [winner-take-all, Bayesian], categorical perception | suppresses noise, [normalizes, quantizes] total activity, finite state machine
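    The winner-take-all behavior of a faster-than-linear signal function can be checked numerically. Below is a minimal Euler-integration sketch of a recurrent shunting on-center off-surround network, with passive decay omitted for clarity; the function name and parameters are illustrative assumptions, not the book's:

    ```python
    import numpy as np

    def recurrent_competitive_field(x0, f, B=1.0, dt=0.01, steps=20000):
        """Euler-integrate dx_i/dt = (B - x_i)*f(x_i) - x_i*sum_{j!=i} f(x_j),
        a recurrent shunting on-center off-surround network (decay omitted).
        f is the feedback signal function; B is the activity ceiling."""
        x = np.array(x0, dtype=float)
        for _ in range(steps):
            fx = f(x)
            x += dt * ((B - x) * fx - x * (fx.sum() - fx))
        return x

    # Faster-than-linear signal f(w) = w^2: the initially largest activity
    # is stored near the ceiling B while all other activities are suppressed,
    # i.e. a winner-take-all choice that quenches noise.
    x = recurrent_competitive_field([0.30, 0.35, 0.25], f=lambda w: w ** 2)
    ```

    With a linear f the same network would store the initial ratios instead, which is why the signal-function choice in the table above matters.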
  • image p012fig01.08 A sigmoidal signal function is a hybrid signal that combines the best properties of [faster, same, slower]-than-linear signals. It can suppress noise and store a partially contrast-enhanced activity pattern.
    || Sigmoidal signal: a hybrid. (upper) saturates pattern - slower-than-linear; (middle) preserves pattern and normalizes - approximately linear; (lower) noise suppression and contrast enhancement - faster-than-linear.
  • image p013fig01.09 A sigmoid signal function generates a quenching threshold below which cell activities are treated like noise and suppressed. Activities that are larger than the quenching threshold are contrast enhanced and stored in short-term memory.
    || Quenching threshold: x_i(0) vs i.
    X_i(∞) = x_i(∞) / Σ_j x_j(∞)
    sigmoid | tunable filter
    stores infinitely many contrast-enhanced patterns
    suppresses noise
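    The quenching threshold can be demonstrated with the same recurrent network and a sigmoid signal. This is a sketch under illustrative assumptions (the sigmoid f(w) = w²/(0.25 + w²) and all parameters are my choices, not the book's): an activity that starts below the quenching threshold is suppressed as noise, while suprathreshold activity is contrast-enhanced and stored.

    ```python
    import numpy as np

    def store_pattern(x0, dt=0.01, steps=20000, B=1.0):
        """Recurrent shunting network
        dx_i/dt = (B - x_i)*f(x_i) - x_i*sum_{j!=i} f(x_j)
        with sigmoid signal f(w) = w^2/(0.25 + w^2): faster-than-linear for
        small w (noise quenching), saturating for large w."""
        f = lambda w: w ** 2 / (0.25 + w ** 2)
        x = np.array(x0, dtype=float)
        for _ in range(steps):
            fx = f(x)
            x += dt * ((B - x) * fx - x * (fx.sum() - fx))
        return x

    # The smallest activity falls below the quenching threshold and is
    # suppressed as noise; the largest activity is contrast-enhanced and
    # stored in short-term memory, bounded by the ceiling B.
    x = store_pattern([0.05, 0.30, 0.50])
    ```

    Raising or lowering the sigmoid's half-saturation constant (here 0.25) tunes the quenching threshold, which is what makes the sigmoid a "tunable filter" in the table above.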
  • image p016fig01.10 The blocking paradigm shows how sensory cues that are conditioned to predict specific consequences can attentionally block other cues that do not change those predictions. On the other hand, if the total cue context is changed by adding a cue that does not change the predicted consequences, then the new cues can be conditioned to the direction of that change. They can hereby learn, for example, to predict fear if the shock level unexpectedly increases, or relief if the shock level unexpectedly decreases.
    || Minimal adaptive prediction. blocking- CS2 is irrelevant, unblocking- CS2 predicts US change. Learn if CS2 predicts a different (novel) outcome than CS1. CS2 is not redundant.
  • image p016fig01.11 A sufficiently big mismatch between a bottom-up input pattern and a top-down expectation can activate the orienting system, which triggers a burst of nonspecific arousal that can reset the recognition category that read out the expectation. In this way, unexpected events can reset short-term memory and initiate a search for a category that better represents the current situation.
    || [category- top-down (TD) expectation; Bottom-up (BU) input pattern] -> Feature pattern -> BU-TD mismatch -> orienting system -> non-specific arousal -> category.
  • image p018fig01.12 Peak shift and behavioural contrast. When a negative generalization gradient (in red) is subtracted from a positive generalization gradient (in green), the net gradient (in purple) is shifted away from the negative gradient and has a width that is narrower than any of its triggering gradients. Because the total activity of the network tends to be normalized, the renormalized peak of the net gradient is higher than that of the rewarded gradient, thereby illustrating that we can prefer experiences that we have never previously experienced over those for which we have previously been rewarded.
    ||
  • image p019fig01.13 Affective circuits are organized into opponent channels, such as fear vs. relief, and hunger vs. frustration. On a larger scale of affective behaviours, exploration and consummation are also opponent types of behaviour. Exploration helps to discover novel sources of reward. Consummation enables expected rewards to be acted upon. Exploration must be inhibited to enable an animal to maintain attention long enough upon a stationary reward in order to consume it.
    || exploration vs consummation
  • image p023fig01.14 A gated dipole opponent process can generate a transient antagonistic rebound from its OFF channel in response to offset of an input J to its ON channel. Sustained on-response; transient off-response; opponent process; gates arousal: energy for rebound.
    ||
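    The antagonistic rebound in this figure follows from habituative transmitter gates in opponent channels. A minimal simulation sketch (function name, gate equation form, and all parameter values are illustrative assumptions): tonic arousal I feeds both channels, a phasic input J feeds the ON channel, and each channel's transmitter habituates in proportion to its gated signal.

    ```python
    def gated_dipole(J_dur=200, total=400, I=0.5, J=1.0, dt=0.05,
                     a=0.01, b=0.1):
        """Simulate a gated dipole. Transmitter gates habituate via
        dz/dt = a*(1 - z) - b*S*z, where S is each channel's input signal.
        Outputs are the rectified differences of the gated signals."""
        z_on = z_off = 1.0
        out = []
        for step in range(total):
            s_on = I + (J if step < J_dur else 0.0)   # phasic input J, then offset
            s_off = I                                  # arousal only
            z_on += dt * (a * (1 - z_on) - b * s_on * z_on)
            z_off += dt * (a * (1 - z_off) - b * s_off * z_off)
            on = max(0.0, s_on * z_on - s_off * z_off)
            off = max(0.0, s_off * z_off - s_on * z_on)
            out.append((on, off))
        return out

    # ON output is sustained while J is on (its gate habituates but the input
    # advantage persists); at J's offset the more-habituated ON gate loses to
    # the OFF channel, producing a transient OFF rebound that then decays.
    response = gated_dipole()
    ```

    The rebound is transient because, after offset, both gates relax toward the same equilibrium set by the shared arousal level, which is why the arousal input is described as the "energy" for the rebound.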
  • image p024fig01.15 A REcurrent Associative Dipole, or READ, circuit is a recurrent shunting on-center off-surround network with habituative transmitter gates. Sensory cues sample it with LTM traces and thereby become conditioned reinforcers.
    ||
  • image p025fig01.16 (left panel) The main processing stages of the Cognitive-Emotional-Motor (CogEM) model have anatomical interpretations in terms of sensory cortex, amygdala, and prefrontal cortex. Chapter 13 will describe in greater detail how CS cues activate invariant object categories in the sensory cortex, value categories in the amygdala, and object-value categories in the prefrontal cortex, notably the orbitofrontal cortex. The amygdala is also modulated by internal drive inputs like hunger and satiety. (right panel) Anatomical data support this circuit, as do many neurophysiological data.
    || drive -> amygdala -> prefrontal cortex <-> sensory cortex -> amygdala. [visual, somatosensory, auditory, gustatory, olfactory] cortex -> [amygdala, Orbital Prefrontal Cortex]. amygdala -> Lateral Prefrontal Cortex
  • image p025fig01.17 Sensory-drive heterarchy vs. drive hierarchy. How cues and drives interact to choose the drive and motivation that will control behavioral choices.
    || [drive inputs, sensory cue [before, after] cross-over] -> incentive motivation [eat, sex].
  • image p026fig01.18 Inverted U as a function of arousal. A Golden Mean at intermediate levels of arousal generates a combination of behavioral threshold, sensitivity, and activation that can support typical behaviors. Both underarousal and overarousal lead to symptoms that are found in mental disorders.
    || Behavior vs arousal.
    depression | under-aroused | over-aroused
    threshold | elevated | low
    excitable above threshold | Hyper | Hypo
    "UPPER" brings excitability "DOWN".
  • image p027fig01.19 The ventral What stream is devoted to perception and categorization. The dorsal Where stream is devoted to spatial representation and action. The Where stream is also often called the Where/How stream because of its role in the control of action.
    ||
    Spatial representation of action | Perception, categorization
    WHERE dorsal | WHAT ventral
    Parietal pathway "where" | Temporal pathway "what"
    Posterior Parietal Cortex (PPC) | Inferior Temporal Cortex (IT)
    Lateral Prefrontal Cortex (LPFC) | Lateral Prefrontal Cortex (LPFC)
  • image p029tbl01.01 Some pairs of complementary processing streams.
    ||
    visual boundary: interblob stream V1-V2-V4 | visual surface: blob stream V1-V2-V4
    visual boundary: interblob stream V1-V2-V4 | visual motion: magno stream V1-MT-MST
    WHAT stream | WHERE stream
    perception & recognition: inferotemporal & prefrontal areas | space & action: parietal & prefrontal areas
    object tracking: MT interbands & MSTv | optic flow navigation: MT+ bands & MSTd
    motor target position: motor & parietal cortex | volitional speed: basal ganglia
  • image p030tbl01.02 The What and Where cortical processing streams obey complementary laws. These laws enable the What stream to rapidly and stably learn invariant object categories without experiencing catastrophic forgetting, while the Where stream learns labile spatial and action representations to control actions that are aimed towards these objects.
    ||
    WHAT | WHERE
    spatially-invariant object learning and recognition | spatially-variant reaching and movement
    fast learning without catastrophic forgetting | continually update sensory-motor maps and gains
    IT InferoTemporal Cortex | PPC Posterior Parietal Cortex
    What | Where
    matching: excitatory | inhibitory
    learning: match | mismatch
  • image p030fig01.20 A schematic cross-section of a slice of laminar neocortex whose cells are organized in a characteristic way in six layers, which themselves may be organized into distinct sublaminae. The computational paradigm of Laminar Computing attempts to show how different parts of neocortex can represent and control very different kinds of behavior - including vision, speech, and cognition - using specializations of the same canonical laminar cortical design.
    || Projection fibres: Cortico[spinal, bulbar, pontine, striate, reticular, etc]; Thalamocortical fibres; Diffuse cortical afferent fibres: [nonspecific thalamocortical, Cholinergic, Monoaminergic]; Corticocortical efferents; Projection [cell, fibre]; Corticocortical efferent terminals.
  • image p032fig01.21 At least three parallel visual cortical streams respond to visual inputs that reach the retina. Two parvocellular streams process visual surfaces (blob stream) and visual boundaries (interblob stream). The magnocellular stream processes visual motion.
    || [Retina, LGNs, V[1,2,3,4], MT] to [What- inferotemporal areas, Where- parietal areas]: visual parallel streams [2x blob, 1x bound]
  • image p035fig01.22 A classical example of phonemic restoration. The spectrogram of the word "legislatures" is either excised, leaving a silent interval, or filled with broad-band noise. A percept of the restored phoneme is heard when it is replaced by noise, but not by silence.
    || [normal, silence, noise replaced] presentations. frequency (Hz) vs time (sec).
  • image p036fig01.23 As more items are stored in working memory through time, they can select larger chunks with which to represent the longer list of stored items.
    || [x, y, z] -> [xy, xyz]
  • image p037fig01.24 Only three processing stages are needed to learn how to store and categorize sentences with repeated words in working memory. See the text for more discussion.
    || IOR working memory (item chunk-> sequences) <-> IOR masking field: [item->list]<->[list->list] chunks. (<-> signifies <- expectation/attention, adaptive filter ->)
  • image p038fig01.25 The ART Matching Rule stabilizes real time learning using a [top-down, modulatory on-center, off-surround] network. Object attention is realized by such a network. See text for additional discussion.
    || ART Matching Rule [volition, categories, features]. [one, two] against one.
  • image p039tbl01.03 The link between consciousness and movement
    ||
    VISUAL | seeing, knowing, and reaching
    AUDITORY | hearing, knowing, and speaking
    EMOTIONAL | feeling, knowing, and acting
  • image p042tbl01.04 The six main kinds of resonances which support different kinds of conscious awareness that will be explained and discussed in this book.
    ||
    type of resonance | type of consciousness
    surface-shroud | see visual object or scene
    feature-category | recognize visual object or scene
    stream-shroud | hear auditory object or stream
    spectral-pitch-and-timbre | recognize auditory object or stream
    item-list | recognize speech and language
    cognitive-emotional | feel emotion and know its source
  • image p051fig02.01 Along the boundaries between adjacent shades of gray, lateral inhibition makes the darker area appear even darker, and the lighter areas appear even lighter. (Ernst Mach bands)
    ||
  • image p052fig02.02 Feature-category resonances enable us to rapidly learn how to recognize objects without experiencing catastrophic forgetting. Attentive matching between bottom-up feature pattern inputs and top-down expectations prevents catastrophic forgetting by focussing object attention upon expected patterns of features, while suppressing outlier features that might otherwise have caused catastrophic forgetting if they were also learned.
    || Adaptive Resonance. Attended feature clusters reactivate bottom-up pathways. Activated categories reactivate their top-down pathways. Categories STM, Feature patterns STM. Feature-Category resonance [synchronize, amplify, prolong]s system response. Resonance triggers learning in bottom-up and top-down adaptive weights: adaptive resonance!
  • image p057fig02.03 Some basic anatomical and physiological properties of individual neurons. See the text for additional discussion.
    ||
    physiology | cell body potential | axonal signal | chemical transmitter
    anatomy | nerve cell body | axon | synaptic knob, synapse
  • image p058fig02.04 Serial learning paradigm: Learning the temporal order of events by practicing them in the order that they occur in time.
    || Learning a global arrow in time. How do we learn to encode the temporal order of events in LTM? serial learning. [w=intra, W=inter]trial interval. "... data about serial verbal learning (Figure 2.4) seemed to suggest that events can go "backwards in time". ..."
  • image p059fig02.05 Bowed serial position curve. This kind of data emphasizes the importance of modelling how our brains give rise to our minds using nonlinear systems of differential equations.
    || Effects of [inter, intra]trial intervals (Hovland 1938). # of errors vs list position. [w (sec), W (sec)] = (2 6) (4 6) (2 126) (4 126). Nonoccurrence of future items reduces the number of errors in response to past items. These data require a real-time theory for their explanation! that is, DIFFERENTIAL equations.
  • image p059fig02.06 The bowed serial position curve illustrates the sense in which "events can go backwards in time" during serial learning.
    || Bow due to backward effect in time. If the past influenced the future, but not conversely: # of errors vs list position; Data (Hovland, Hull, Underwood, etc).
  • image p060fig02.07 Position-specific forward and backward error gradients illustrate how associations can form in both the forward and backward directions in time before the list is completely learned.
    || Error gradients: depend on list position. # of responses vs list position:
    list beginning | anticipatory errors | forward in time
    list middle | anticipatory and perseverative errors | forward and backward in time
    list end | perseverative errors | backward in time
  • image p061fig02.08 The existence of forward and backward associations, such as from A to B and from B to A is naturally explained by a network of neurons with their own activities or STM traces, and bidirectional connections between them with their own adaptive weights or LTM traces.
    || How these results led to neural networks (Grossberg 1957). Networks can learn forward and backward associations! Practice A->B, also learn B<-A. Because learning AB is not the same as learning BA, you need STM traces, or activations, xi, at the nodes, or cells, and LTM traces, or adaptive weights, zij, for learning at the synapses.
  • image p063fig02.09 The Additive Model describes how multiple effects add up to influence the activities, or STM traces, of neurons.
    || STM: Additive Model (Grossberg, PNAS 1967, 1968).
    Short-term memory (STM) trace: activation xi(t) at cell i; signal fi(xi(t)); adaptive weight (Long-term memory, LTM, trace) zij(t).
    d[dt: xi(t)] = -Ai*xi + sum[j=1 to n: fj(xj(t))*Bji*zji] - sum[j=1 to n: gj(xj)*Cji*Zji] + Ii
    Term labels: learning rate (?), passive decay, positive feedback, negative feedback, input.
    Special case: d[dt: xi(t)] = -Ai*xi + sum[j=1 to n: fj(xj(t))*zji] + Ii
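The Additive Model above can be sketched numerically. A minimal Euler-integration example in Python; the parameter values, signal function, and frozen weights are illustrative assumptions, not taken from the book, and the inhibitory term is omitted for brevity:

```python
import numpy as np

# Additive Model (Grossberg 1967, 1968), Euler-integrated sketch.
# d[dt: xi] = -Ai*xi + sum_j fj(xj)*Bji*zji + Ii   (inhibitory term omitted)
# All parameter values and the signal function are illustrative.

def f(x):
    return x**2 / (1.0 + x**2)   # bounded sigmoid signal function

n = 3
A = 1.0                          # passive decay rate
Bz = np.full((n, n), 0.5)        # path strengths * adaptive weights, frozen
I = np.array([1.0, 0.5, 0.0])    # external inputs
x = np.zeros(n)                  # STM traces

dt = 0.01
for _ in range(5000):            # integrate to near-equilibrium
    feedback = f(x) @ Bz         # sum_j fj(xj)*Bji*zji
    x = x + dt * (-A * x + feedback + I)
```

With a bounded signal function and positive decay, the activities settle near equilibrium, and their ordering follows the ordering of the inputs.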
  • image p064fig02.10 The Shunting Model includes upper and lower bounds on neuronal activities. These bounds have the effect of multiplying additive terms by excitatory and inhibitory automatic gain terms that enable such models to preserve their sensitivity to inputs whose size may vary greatly through time, while also approximately normalizing their total activities.
    || STM: Shunting Model (Grossberg, PNAS 1967, 1968). Mass action in membrane equations. Bi/Ci -> xi(t) -> O -> -Fi/Ei. Bounded activations, automatic gain control. d[dt: xi(t)] = - Ai*xi + (Bi - Ci*xi)*(sum[j=1 to n: fj(xj(t))*Dji*yji*zji] + Ii) - (Ei*xi + Fi)*(sum[j=1 to n: gj(xj)*Gji*Yji*Zji] + Ji). Includes the Additive Model.
  • image p064fig02.11 Medium-Term Memory (MTM) and Long-Term Memory (LTM) equations complement the Additive and Shunting Models of STM. MTM is typically defined by a chemical transmitter that is released from the synaptic knobs of a neuron (Figure 2.03). Its release or inactivation in an activity-dependent way is also called habituation. LTM defines how associative learning occurs between a pair of neurons whose activities are approximately correlated through time. See the text for details.
    || Medium and Long Term memory.
    MTM | habituative transmitter gate | d[dt: yki(t)] = H*(K - yki) - L*fk(xk)*yki
    LTM | gated steepest descent learning | d[dt: zki(t)] = Mk*fk(xk)*(hi(xi) - zki)
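The MTM and LTM laws in the table can be sketched the same way. A minimal Euler sketch with illustrative constants (not from the book), holding the pre- and postsynaptic activities fixed so the equilibria are visible:

```python
# MTM (habituative transmitter gate) and LTM (gated steepest descent),
# Euler-integrated with illustrative constants; pre- and postsynaptic
# activities are held constant to expose the equilibria.
# d[dt: y] = H*(K - y) - L*f(x_pre)*y      transmitter habituates with use
# d[dt: z] = M*f(x_pre)*(h(x_post) - z)    learning gated by presynaptic signal

H, K, L, M = 1.0, 1.0, 2.0, 0.5
f = h = lambda v: max(v, 0.0)     # threshold-linear signal functions

y, z = K, 0.0                     # full transmitter, naive weight
x_pre, x_post = 1.0, 0.8
dt = 0.01
for _ in range(3000):
    y += dt * (H * (K - y) - L * f(x_pre) * y)
    z += dt * (M * f(x_pre) * (h(x_post) - z))
# y approaches H*K/(H + L*f(x_pre)); z tracks h(x_post) while the gate is open
```

The transmitter habituates to an activity-dependent equilibrium, while the adaptive weight performs steepest descent toward the postsynaptic signal only when the presynaptic sampling signal is positive.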
  • image p065fig02.12 Three sources of neural network research: [binary, linear, continuous nonlinear]. My own research has contributed primarily to the third.
    || Three sources of neural network research.
    Binary | Linear | Continuous and nonlinear
    neural network signal processing | systems theory | neurophysiology and psychology
    Binary: McCulloch-Pitts 1943, xi(t+1) = sgn{sum[j: Aij*xj(t)] - Bi}; Von Neumann 1945; Caianiello 1961 -> digital computer
    Linear: Rosenblatt 1962; Widrow 1962; Anderson 1968; Kohonen 1971 -> Y = A*X, cross-correlate, steepest descent
    Continuous and nonlinear: Hodgkin, Huxley 1952; Hartline, Ratliff 1957; Grossberg 1967; von der Malsburg 1973
  • image p068fig02.13 Hartline's lab developed a model to describe signal processing by the retina of the horseshoe crab.
    || Neurophysiology (network): lateral inhibition in limulus retina of horseshoe crab (Hartline, Ratliff, Miller 1963, Nobel Prize)
    hi = ei - sum[j=1 to n: {∫[0 to t: e^(-A*(t-v))*hj(v) dv] - Γj}(+) * Bji]
    ei = spiking frequency without inhibition
    hi = spiking frequency with inhibition
    Precursor of ADDITIVE network model.
  • image p068fig02.14 Hodgkin and Huxley developed a model to explain how spikes travel down the squid giant axon.
    || Neurophysiology (single cell): spike potentials in squid giant axon (Hodgkin, Huxley 1952, Nobel Prize). time -> (dendrites -> cell body -> axon).
    C*dp[dt: V] = α*dp^2[dX^2: V] + (V(+) - V)*g(+) + (V(-) - V)*g(-) + (V^p - V)*g^p
    g(+) = G(+)(m,h), g(-) = G(-)(n), G^p = const, [m, h, n] - ionic processes, V - voltage
    Precursor of Shunting network model (Rall 1962). (Howell: see p075fig02.24 Membrane equations of neurophysiology, Shunting equation.)
  • image p071fig02.15 The noise saturation dilemma: How do neurons retain their sensitivity to the relative sizes of input patterns whose total sizes can change greatly through time?
    || Noise-Saturation Dilemma (Grossberg 1968-1973). Bounded activities from multiple input sources.
    If activities xi are sensitive to SMALL inputs, then why don't they saturate in response to large inputs?
    If xi are sensitive to LARGE inputs, then why don't small inputs get lost in system noise?
    The functional unit is a spatial activity pattern.
  • image p071fig02.16 To solve the noise-saturation dilemma, individual neurons in a network that is receiving a distributed spatial pattern of inputs need to remain sensitive to the ratio of the input to them divided by the sum of all the inputs in that spatial pattern. Although the inputs are delivered to a finite number of neurons, the input and activity patterns are drawn continuously across the cells for simplicity.
    || Noise-Saturation Dilemma. [Ii, xi] vs t. [Input, Activity] pattern [small -> noise, large -> saturation]. Problem: remain sensitive to input RATIOS θi = Ii / sum[j: Ij] as total input I = sum[j: Ij] -> ∞. Many kinds of data exhibit sensitivity to ratios of inputs.
  • image p072fig02.17 Brightness constancy.
    || Vision: brightness constancy, contrast normalization. Compute RATIOS of reflected light. Reflectance processing. p72c1h0.45 "... In other words, the perceived brightness of the gray disk is constant despite changes in the overall illumination. On the other hand, if only the gray disk were illuminated at increasing intensities, with the annulus illuminated at a constant intensity, then the gray disk would look progressively brighter. ..."
  • image p072fig02.18 Vision: brightness contrast. Conserve a total quantity: total activity normalization.
    ||
    LUCE | Ratio scales in choice behavior
    ZEILER | Adaptation level theory
  • image p073fig02.19 Computing with cells: infinity does not exist in biology!
    || Computing in a bounded activity domain, Gedanken experiment (Grossberg 1970). Vm sub-areas [xm, B - xm], I(all m)], m=[1, i, B].
    B | excitable sites
    xi(t) | excited sites (activity, potential)
    B - xi(t) | unexcited sites
  • image p073fig02.20 Shunting saturation occurs as inputs to non-interacting cells get larger.
    || Shunting saturation. [xi(t), B - xi(t)].
    d[dt: xi] = -A*xi + (B - xi)*Ii
    (a) Spontaneous decay of activity xi to equilibrium
    (b) Turn on unexcited sites B - xi by inputs Ii (mass action)
    Inadequate response to a SPATIAL PATTERN of inputs: Ii(t) = θi*I(t)
    θi | relative intensity (cf. reflectance)
    I(t) | total intensity (cf. luminance)
  • image p073fig02.21 How shunting saturation turns on all of a cell's excitable sites as input intensity increases.
    || Shunting saturation. At equilibrium:
    0 = d[dt: xi] = -A*xi + (B - xi)*Ii
    xi = B*Ii / (A + Ii) = B*θi*I / (A + θi*I) -> B as I -> ∞
    Ii = θi*I, I = sum[j: Ij]
    I small: lost in noise; I large: saturates
    Sensitivity loss to relative intensity as total intensity increases.
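The saturation result can be checked directly from the equilibrium formula. A small sketch with illustrative A and B and two relative intensities θi:

```python
# Equilibrium of a shunting cell WITHOUT an off-surround:
# xi = B*θi*I / (A + θi*I) -> B as total intensity I -> ∞,
# so different relative intensities θi become indistinguishable.
# A and B are illustrative values.
A, B = 1.0, 1.0

def x_eq(theta_i, I):
    return B * theta_i * I / (A + theta_i * I)

gap_small_I = abs(x_eq(0.6, 1.0) - x_eq(0.4, 1.0))       # ratios still visible
gap_large_I = abs(x_eq(0.6, 1000.0) - x_eq(0.4, 1000.0)) # ratios washed out
```

As the total intensity grows, both activities crowd toward the ceiling B and the gap between them collapses, which is exactly the sensitivity loss the figure describes.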
  • image p073fig02.22 An on-center off-surround network is capable of computing input ratios.
    || Computing with patterns.
    How to compute the pattern-sensitive variable: θi = Ii / sum[k=1 to n: Ik]?
    Needs interactions! What type? θi = Ii / (Ii + sum[k≠i: Ik])
    Ii↑ ⇒ θi↑ excitation; Ik↑ ⇒ θi↓, k ≠ i inhibition
    On-center off-surround network.
  • image p074fig02.23 The equations for a shunting on-center off-surround network. Shunting terms lead to many beautiful and important properties of these networks, which are found ubiquitously, in one form or another, in all cellular tissues.
    || Shunting on-center off-surround network.
    Mass action: d[dt: xi] = -A*xi +(B - xi)*Ii -xi*sum[k≠i: Ik]
    Turn on unexcited sites | Turn off excited sites
    At equilibrium:
    0 = d[dt: xi] = -(A + Ii + sum[k≠i: Ik])*xi + B*Ii = -(A + I)*xi + B*Ii
    xi = B*Ii/(A + I) = B*θi*I/(A + I) = θi * B*I/(A + I). No saturation!
    Infinite dynamical range
    Automatic gain control
    Compute ratio scale
    Weber law
    x = sum[k=1 to n: xk] = B*I/(A + I) ≤ B. Conserves total activity:
    NORMALIZATION
    Limited capacity
    Real-time probability
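The normalization property of the equilibrium xi = B*Ii/(A + I) is easy to verify numerically. A sketch with illustrative A and B:

```python
# Shunting on-center off-surround equilibrium: xi = B*Ii/(A + I).
# Ratios θi are preserved at any total intensity, and total activity
# is normalized to B*I/(A + I) <= B. A and B are illustrative values.
A, B = 1.0, 1.0

def equilibrium(inputs):
    I = sum(inputs)
    return [B * Ii / (A + I) for Ii in inputs]

small = equilibrium([0.6, 0.3, 0.1])        # total I = 1
large = equilibrium([600.0, 300.0, 100.0])  # total I = 1000, same ratios
# activity ratios are 6:3:1 in both cases; total activity never exceeds B
```

Unlike the non-interacting case, the divisive off-surround keeps the ratio scale intact over an unbounded range of total input, while the total activity stays bounded by B.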
  • image p075fig02.24 The membrane equations of neurophysiology describe how cell voltages change in response to excitatory, inhibitory, and passive input channels. Each channel is described by a potential difference multiplied by a conductance. With the special choices shown in the lower right-hand corner, this equation defines a feedforward shunting on-center off-surround network.
    || Membrane equations of neurophysiology.
    C*dp[dt: V] = (V(+) - V)*g(+) + (V(-) - V)*g(-) + (V(p) - V)*g(p)
    Shunting equation (not additive)
    V | voltage
    V(+), V(-), V(p) | saturating voltages
    g(+), g(-), g(p) | conductances
    V(+) = B, C = 1; V(-) = V(p) = 0; g(+) = Ii; g(-) = sum[k≠i: Ik];
    Activity is bounded above by V(+) and below by V(-) = V(p): silent inhibition. (Howell: see p068fig02.14, Grossberg's comment that the Hodgkin-Huxley model was a "... Precursor of Shunting network model (Rall 1962) ...").
  • image p076fig02.25 An on-center off-surround network can respond to increasing on-center excitatory inputs without a loss of sensitivity. Instead, as the off-surround input increases, the region of a cell's maximal sensitivity to an increasing on-center input shifts to a range of larger inputs. This is because the off-surround divides the effect of the on-center input, an effect that is often called a Weber law.
    || Weber law, adaptation, and shift property (Grossberg 1963).
    Convert to logarithmic coordinates:
    K = ln(Ii), Ii = e^K, J = sum[k≠i: Ik]
    xi(K,J) = B*Ii/(A + Ii + J) = B*e^K/(A + e^K + J)
    x(K + S, J1) = x(K, J2), S = ln((A + J1)/(A + J2)) size of SHIFT.
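The shift property is an exact algebraic identity, which a few lines of Python can confirm; the parameter values are illustrative:

```python
import math

# Shift property in log coordinates: with K = ln(Ii),
# x(K, J) = B*e^K / (A + e^K + J), and
# x(K + S, J1) = x(K, J2) exactly, where S = ln((A + J1)/(A + J2)).
# A, B, J1, J2 are illustrative values.
A, B = 1.0, 1.0

def x(K, J):
    return B * math.exp(K) / (A + math.exp(K) + J)

J1, J2 = 5.0, 0.0
S = math.log((A + J1) / (A + J2))   # size of the shift
for K in (-1.0, 0.0, 2.0):
    assert abs(x(K + S, J1) - x(K, J2)) < 1e-12
```

Increasing the off-surround J does not compress the response curve; it translates the curve along the log-intensity axis by S, which is the adaptation behavior seen in the mudpuppy retina data of the next figure.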
  • image p076fig02.26 The mudpuppy retina exhibits the shift property that occurs in the feedforward shunting on-center off-surround network in Figure 2.25. As a result, its sensitivity also shifts in response to different background off-surrounds, and therefore exhibits no compression (dashed purple lines).
    || Mudpuppy retina neurophysiology.
    I center, J background
    a) Relative figure-to-ground
    b) Weber-Fechner I*(A + J)^(-1)
    c) No hyperpolarization, SHUNT: silent inhibition
    d) Shift property (Werblin 1970): xi(K,J) vs K = ln(I)
    Adaptation: sensitivity shifts for different backgrounds. NO COMPRESSION.
  • image p077fig02.27 A schematic of the on-center off-surround network that occurs in the mudpuppy retina, including three main cell types: receptors, horizontal cells, and bipolar cells.
    || Mechanism: cooperative-competitive dynamics.
    On-center off-surround (Kuffler 1953) cat retina
    Subtractive lateral inhibition (Hartline, Ratliff 1956/7+) limulus retina.
    R receptor -> H horizontal -> B bipolar (Werblin, Dowling, et al 1969+) mudpuppy retina.
  • image p077fig02.28 Silent inhibition is replaced by hyperpolarization when the inhibitory saturating potential is smaller than the passive saturating potential. Then an adaptation level is created that determines how big input ratios need to be to activate their cells.
    || Weber Law and adaptation level.
    Hyperpolarization vs Silent inhibition
    d[dt: xi] = -A*xi +(B - xi)*Ii -(xi + C)*sum[k≠i: Ik]
    At equilibrium:
    0 = d[dt: xi] = -(A + Ii + sum[k≠i: Ik])*xi + B*Ii - C*sum[k≠i: Ik]
    = -(A + I)*xi + (B + C)*Ii - C*I
    = -(A + I)*xi + (B + C)*I*[θi - C/(B + C)]
    xi = (B + C)*I/(A + I) * [θi - C/(B + C)]
    Term labels: Weber law, reflectance, adaptation level.
    /home/bill/web/Neural nets/Grossberg/Grossbergs list of [figure, table]s.HtmWeb.html:205:
  • image p078fig02.29 How the adaptation level is chosen to enable sufficiently distinct inputs to activate their cells.
    || Weber Law and adaptation level.
    xi = (B + C)*I/(A + I) * [θi - C/(B + C)]
    Term labels: Weber law, reflectance, adaptation level.
    V(+) >> V(-) ⇒ B >> C ⇒ C/(B + C) << 1
    Adaptation level theory (Zeiler 1963).
  • image p078fig02.30 Choosing the adaptation level to achieve informational noise suppression.
    || Noise suppression. Attenuate zero spatial frequency patterns: no information. Ii vs i (flat line), xi vs i (flat line at zero).
    B >> C: Try B = (n - 1)*C or C/(B + C) = 1/n
    Choose a uniform input pattern (no distinctive features): All θi = 1/n
    xi = (B + C)*I/(A + I)*[θi -C/(B + C)] = 0 no matter how intense I is.
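The choice B = (n - 1)*C can be verified numerically. A sketch with illustrative parameters:

```python
# Noise suppression: with B = (n - 1)*C the adaptation level C/(B + C) = 1/n,
# so xi = ((B + C)*I/(A + I)) * (θi - C/(B + C)) vanishes for any uniform
# input pattern (all θi = 1/n), however intense. Parameters are illustrative.
n, A, C = 5, 1.0, 1.0
B = (n - 1) * C

def x(theta_i, I):
    return (B + C) * I / (A + I) * (theta_i - C / (B + C))

uniform = [x(1.0 / n, 100.0) for _ in range(n)]   # featureless pattern
contrast = x(0.5, 100.0)                          # above-average feature
# the uniform pattern is zeroed; only distinctive features excite their cells
```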
  • image p078fig02.31 How noise suppression enables matching of bottom-up and top-down input patterns.
    || Noise suppression -> pattern matching. mismatch (out of phase) suppressed, match (in phase) amplifies pattern.
  • image p079fig02.32 Matching amplifies the matched pattern due to automatic gain control. See terms I and J in the equation.
    || Substrate of resonance. Match (in phase) of BU and TD input patterns AMPLIFIES matched pattern due to automatic gain control by shunting terms. J = sum[i: Ji], I = sum[i: Ii], θi = (Ii + Ji)/(I + J)
    xi = (B + C)*(I + J)/(A + I + J)*[θi -C/(B + C)]
    Need top-down expectations to be MODULATORY.
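The gain-control amplification can be illustrated with the same equilibrium formula. A sketch with illustrative parameters, in which a doubled total input stands in for a perfectly matched top-down pattern (an assumption for illustration, not a full ART simulation):

```python
# Substrate of resonance: matched bottom-up and top-down inputs share the
# same θi but raise the total input from I to I + J, and the shunting gain
# term (I + J)/(A + I + J) then amplifies attended features.
# Parameters are illustrative; J = I models a perfectly matched expectation.
n, A, C = 5, 1.0, 1.0
B = (n - 1) * C

def x(theta_i, total):
    return (B + C) * total / (A + total) * (theta_i - C / (B + C))

theta_i = 0.5                     # a feature above the adaptation level
bottom_up_only = x(theta_i, 2.0)  # I = 2, no top-down input
matched = x(theta_i, 4.0)         # matched top-down input doubles the total
# matched > bottom_up_only: the matched pattern is amplified
```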
  • image p080fig02.33 An opposite-attracts rule during the development of intracellular connections can lead to a mature network that realizes informational noise suppression.
    || How do noise suppression parameters arise? Symmetry-breaking during morphogenesis? Opposites attract rule.
    Intracellular parameters C/B = 1/(n - 1) ↔ Intercellular parameters
    Predicts that:
    • Intracellular excitatory and inhibitory saturation points can control the growth during development of:
    • Intercellular excitatory and inhibitory connections.
  • image p080fig02.34 How to achieve informational noise suppression in a network with multiple parallel processing channels.
    || Symmetry-breaking: dynamics and anatomy.
    Dynamics:
    • excitatory range is amplified
    • inhibitory range is compressed
    Anatomy:
    • narrow on-center
    • broad off-surround
    Noise suppression: attenuates uniform patterns
    Contour direction: enhances pattern gradients
  • image p081fig02.35 The equilibrium activities of a shunting network with Gaussian on-center off-surround kernels are sensitive to the ratio-contrasts of the input patterns that they process. The terms in the denominator of the equilibrium activities accomplish this using the shunting on-center and off-surround terms.
    || Ratio-contrast detector. flat versus [Gaussian Cki, flattened Gaussian? Eki]
    d[dt: xi] = -A*xi + (B - xi)*sum[k=1 to n: Ik*Cki] - (xi + D)*sum[k=1 to n: Ik*Eki]
    Cki = C*e^(-μ*(k - i)^2), Eki = E*e^(-ν*(k - i)^2)
    At equilibrium: xi = I*sum[k=1 to n: θk*Fki] / (A + I*sum[k=1 to n: θk*Gki])
    Fki = B*Cki - D*Eki (weighted difference of Gaussians, DOG)
    Gki = Cki + Eki (sum of Gaussians, SOG)
    • Reflectance processing
    • Contrast normalization
    • Discount illuminant
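A numerical sketch of the ratio-contrast detector, using Gaussian kernels on a ring of cells to avoid edge artifacts. All parameters are illustrative assumptions (B = D = 1 so the book's Gki = Cki + Eki denominator applies), and the surround gain is rescaled to sit exactly on the noise-suppression boundary B*ΣCki = D*ΣEki:

```python
import numpy as np

# Ratio-contrast detector sketch: narrow Gaussian on-center, broad Gaussian
# off-surround, on a ring of n cells (circular distance avoids edge effects).
# The surround gain is rescaled so B*sum_k Cki = D*sum_k Eki, the
# noise-suppression boundary. All parameters are illustrative.
n = 21
A, B, D = 1.0, 1.0, 1.0
mu, nu = 1.0, 0.05                       # narrow center, broad surround
k = np.arange(n)
d = np.abs(k[None, :] - k[:, None])
dist2 = np.minimum(d, n - d) ** 2        # circular distance squared
Cker = np.exp(-mu * dist2)
Eker = np.exp(-nu * dist2)
Eker *= (B * Cker[0].sum()) / (D * Eker[0].sum())

def equilibrium(inputs):
    excit = B * (Cker @ inputs)          # on-center term sum_k Ik*Cki
    inhib = D * (Eker @ inputs)          # off-surround term sum_k Ik*Eki
    return (excit - inhib) / (A + excit + inhib)

uniform = equilibrium(np.full(n, 10.0))          # featureless pattern
step = equilibrium(np.where(k < 10, 1.0, 10.0))  # step contour at k = 10
# uniform responses vanish; cells at the step contour stand out
```

Uniform patterns are suppressed no matter how intense, while the cells just inside the high side of the step respond more strongly than cells deep in its interior, which is the contour-detection behavior described in the next figure.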
  • image p081fig02.36 Informational noise suppression in networks with Gaussian on-center and off-surround kernels makes them function as contour detectors that are sensitive to ratio-contrast.
    || Noise suppression and contour detection.
    If B*sum[k=1 to n: Cki] <= D*sum[k=1 to n: Eki] then:
    • uniform patterns are suppressed
    • contrasts are selectively enhanced
    • contours are detected
    Ii vs i, xi vs i
    Responses are selective to [REFLECTANCE, SPATIAL SCALE], eg color [feature, surface] contours.
  • image p082fig02.37 My models begin with behavioral data, since brains are designed to achieve behavioral success. The text explains how models evolve in stages, through a process of successive refinements, or unlumpings. These unlumpings together carry out a kind of conceptual evolution, leading to models that can explain and predict ever larger psychological and neurobiological databases.
    || Modelling method and cycle.
    Behavioral data -(art of modeling)-> Design principles <- Neural data <-(brain predictions)- Mathematical model and analysis -(behavioral predictions)-> Behavioural data
    Operationalizes "proper level of abstraction"
    Operationalizes that you cannot "derive a brain" in one step.
  • image p085fig02.38 Our models have been used in many large-scale applications to engineering and technology. Linking brain to behavior explains how brain mechanisms give rise to psychological functions, and do so autonomously. The combination of mechanism, function, and autonomy helps to explain their value in helping to solve outstanding problems in technology.
    || Modeling method and cycle.
    Behavioral data -(art of modeling)-> Design principles <- Neural data <-(brain predictions)- Mathematical model and analysis -(behavioral predictions)-> Behavioural data
    Technology: Mathematical model and analysis <-> Technological applications
    At every stage, spin off new model designs and mechanisms to technologists who need autonomous intelligent applications.
  • image p087fig03.01 A macrocircuit of key visual processes (in green) and the cortical areas in which they primarily occur (in red), from the retina to the Prefrontal Cortex (PFC), including both the What and Where cortical streams. The [bottom-up, horizontal, and top-down] interactions help each of these processes to overcome computationally complementary processing deficiencies that they would experience without them, and also to read out top-down expectations that help to stabilize learning while they focus attention on salient objects and positions.
    || Emerging unified theory of visual intelligence. [What, Where] streams. Bottom-up and top-down interactions overcome COMPLEMENTARY processing deficiencies.
  • image p089fig03.02 What do you think lies under the two grey disks? (on a checkers board)
    || p089c1h0.55 "... As your eye traverses the entire circular boundary (Howell: of a grey disk on a checkerboard), the contrast keeps flipping between light-to-dark and dark-to-light. Despite these contrast reversals, we perceive a single continuous boundary surrounding the gray disk. ...".
  • image p090fig03.03 Kanizsa square and reverse-contrast Kanizsa square percepts. The spatial arrangement of pac-men, lines, and relative contrasts determines the perceived brightness of the squares, and even whether they exhibit any brightness difference from their backgrounds at all, as in (b). These factors also determine whether pac-men will appear to be amodally completed behind the squares, and how far behind them.
    || p089c2h0.65 "...
    a) The percept of the square that abuts the pac-men is a visual illusion that is called the Kanizsa square. The enhanced brightness of the square is also an illusion.
    c) shows that these boundaries can be induced by either collinear edges or perpendicular line ends, and that both kinds of inducers cooperate to generate an even stronger boundary.
    d) if the perpendicular lines cross the positions of the illusory contours, then they can inhibit the strength of these contours. ..."
  • image p091fig03.04 A cross-section of the eye, and a top-down view of the retina, showing how the blind spot and retinal veins can occlude the registration of light signals at their positions on the retina.
    || Eye: [optic nerve, ciliary body, iris, lens, pupil, cornea, sclera, choroid, retina]. Human retina: [fovea, blind spot, optic nerve]. See also the cross-section of the retinal layers.
  • image p092fig03.05 A cross-section of the retinal layer. Note that light stimuli need to go through all retinal layers before they reach the photoreceptor layer at which the light signals are registered.
    || light stimuli ->
    retinal layers | cellular composition
    inner limiting membrane |
    retinal nerve fibre | ganglion nerve fibres
    ganglion cell | ganglion
    inner plexiform | amacrine
    inner nuclear | horizontal
    outer plexiform |
    outer limiting membrane |
    photoreceptor | rod, cone
    retinal pigment epithelium |
    <- signal transduction. http://brain.oxfordjournals.org/content/early/2011/01/20/brain.awq346
  • image p093fig03.06 Every line is an illusion because regions of the line that are occluded by the blind spot or retinal veins are completed at higher levels of brain processing by boundary completion and surface filling-in.
    || Every line is an illusion!
    Boundary completion | Which boundaries to connect?
    Surface filling-in | What color and brightness do we see?
  • image p094fig03.07 The processes of boundary completion and surface filling-in are computationally complementary.
    ||
    Boundary completion | Surface filling-in
    outward | inward
    oriented | unoriented
    insensitive to direction-of-contrast | sensitive to direction-of-contrast
  • image p095fig03.08 Computer simulation of a Kanizsa square percept. See the text for details.
    || p094c2h0.2 "...
    b) shows the feature contours that are induced just inside the pac-man boundaries.
    c) feature contours fill-in within the square boundary
    d) create a percept of enhanced brightness throughout the square surface ..."
  • image p095fig03.09 Simulation of a reverse-contrast Kanizsa square percept. See the text for details.
    || p094c2h0.5 "...
    b) whereas bright feature contours are induced just inside the boundaries of the two black pac-men at the bottom of the figure, dark feature contours are induced inside the boundaries of the two white pac-man at the top of the figure
    c) the square boundary is recognized
    d) Because these dark and bright feature contours are approximately balanced, the filled-in surface color is indistinguishable from the filled-in surface color outside of the square, ... but [the square boundary is] not seen ..."
  • image p096fig03.10 The visual illusion of neon color spreading. Neither the square nor the blue color that is perceived within it is in the image that defines a neon color display. The display consists only of black and blue arcs.
    ||
  • image p096fig03.11 Another example of neon color spreading. The image is composed of black and blue crosses. See the text for details.
    || Howell: note the appearance of illusory red squares
  • image p098fig03.12 In this picture of Einstein's face, [edges, texture, shading] are overlaid.
    ||
  • image p100fig03.13 The Ehrenstein percept in the left panel is significantly weakened as the orientations of the lines that induce it deviate from being perpendicular to the illusory circle.
    ||
  • image p100fig03.14 Boundaries are completed with the orientations that receive the largest total amount of evidence, or support. Some can form in the locally preferred orientations that are perpendicular to the inducing lines, while others can form through orientations that are not locally preferred, thus showing that there is initially a fuzzy band of almost perpendicular initial grouping orientations at the end of each line.
    || Perpendicular induction at line ends wrt [circular, square] boundaries:
    line ends | local | global
    perpendicular, crisp | preferred | preferred
    NOT perpendicular, fuzzy | unpreferred | preferred
  • image p100fig03.15 A fuzzy band of possible initial grouping orientations allows grouping to get started. Cooperative-competitive feedback via a hierarchical resolution of uncertainty chooses a sharp final grouping that has the most evidence to support it.
    || before choice: transient; after choice: equilibrium
  • image p102fig03.16 T's and L's group together based on shared orientations, not identities.
    ||
  • image p102fig03.17 The relative positions of the squares give rise to a percept of three regions. In the middle region, emergent diagonal groupings form, despite the fact that all the orientations in the image are verticals and horizontals.
    ||
  • image p103fig03.18 Computer simulations in [b, d, f, h] of groupings in response to different spatial arrangements in [a, c, e, g] of inducers that are composed of short vertical boundaries. Note the emergent horizontal groupings in [d, f, h] and the diagonal groupings in h, despite the fact that all its inducers have vertical orientations.
    ||
  • image p103fig03.19 As in Figure 3.18, emergent groupings can form whose orientations differ from those of the inducing stimuli.
    || That's how multiple orientations can induce boundary completion of an object. [diagonal, perpendicular, parallel]
  • image p104fig03.20 Sean Williams: how boundaries can form
    ||
  • image p104fig03.21 Four examples of how emergent boundaries can form in response to different kinds of images. These examples show how boundary webs can shape themselves to textures, as in (c), and shading, as in (d), in addition to lines, as in (a). In all these cases, the boundaries are invisible, but reveal themselves by supporting filling-in of surface brightness and color within their form-sensitive webs.
    ||
  • image p105fig03.22 Depth-selective boundary representations capture brightness and colors in surface filling-in domains. See the text for details.
    || 3D vision and figure-ground separation. Multiple-scale, depth-selective boundary webs. Refer to Figure 3.21(d).
    depth increasing ↓ | boundaries | surfaces
    BC input | surface capture!
    FC input
  • image p105fig03.23 The pointillist painting A Sunday on La Grande Jatte by Georges Seurat illustrates how we both group together large-scale coherence among the pixels of the painting and form small groupings around the individual dabs of color.
    ||
  • image p106fig03.24 In response to the Synthetic Aperture Radar image (upper left corner), a shunting on-center off-surround network "discounts the illuminant" and thereby normalizes cell activities to compute feature contours, without causing saturation (upper right corner). Multiple-scale boundaries form in response to spatially coherent activities in the feature contours (lower left corner) and create the webs, or containers, into which the feature contours fill-in the final surface representations (lower right corner).
    || Do these ideas work on hard problems? SAR!
    input image → feature contours → boundary contours → filled-in surface
    Synthetic Aperture Radar: sees through weather; 5 orders of magnitude of power in radar return. Discounting the illuminant:
    • normalizes the image: preserves RELATIVE activities without SATURATION
    • shows individual PIXELS
    boundaries complete between regions where normalized feature contrasts change; filling-in averages brightnesses within boundary compartments
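The normalization property claimed above (preserve relative activities, avoid saturation) falls out of the steady state of a shunting on-center off-surround network. A minimal sketch, assuming a uniform off-surround and illustrative constants A (decay) and B (upper bound); this is a toy steady-state formula, not Grossberg's full dynamical model:

```python
def shunting_steady_state(inputs, A=1.0, B=1.0):
    """Steady state of a shunting on-center off-surround network:
    x_i = B * I_i / (A + sum(I)).  Every activity stays below B (no
    saturation), while RELATIVE input sizes are preserved."""
    total = sum(inputs)
    return [B * I / (A + total) for I in inputs]

# The same scene under dim light and under 10x brighter light:
dim = shunting_steady_state([1.0, 2.0, 1.0])
lit = shunting_steady_state([10.0, 20.0, 10.0])
# In both cases the middle cell is twice as active as its neighbors,
# and no activity exceeds B = 1: the illuminant has been discounted.
```

This shows why five orders of magnitude of radar-return power can be compressed into a bounded activity range without losing the feature contrasts that the boundary system needs.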
  • image p107fig03.25 The Roofs of Collioure by Matisse. See the text for details.
    || p107c1h0.6 "... [Matisse] showed how patches of pure color, when laid down properly on a canvas, could be grouped by the brain into emergent boundaries, without the intervention of visible outlines. ... The trick was that these emergent boundaries, being invisible, or amodal, did not darken the colors in the surface representations. In this sense, Matisse intuitively realized that "all boundaries are invisible" through the masterful way in which he arranged his colors on canvas to generate boundaries that could support compelling surface representations. ..."
  • image p107fig03.26 How "drawing directly in color" leads to colored surface representations. Amodal boundary webs control the filling-in of color within these surface representations. See the text for details.
    || color patches on canvas -> [surface color and form, Amodal boundary web]. Amodal boundary web -> surface color and form.
  • image p108fig03.27 Matisse's painting Open Window, Collioure 1905 combines continuously colored surfaces with color patches that created surface representations using amodal boundaries, as in Figure 3.26. Both kinds of surfaces cooperate to form the final painterly percept.
    ||
  • image p108fig03.28 The watercolor illusion of Baingio Pinna 1987 can be explained using spatial competition between like-oriented boundary signals. This occurs at what I have called the First Competitive Stage. This is one stage in the brain's computation of hypercomplex cells, which are also called endstopped complex cells. Why the blue regions seem to bulge in depth may be explained using multiple-scale, depth-selective boundary webs. See the text for details.
    || Baingio Pinna. Watercolor illusion 1987. Filled-in regions bulge in depth. Multiple-scale, depth-selective boundary web!
  • image p109fig03.29 The 3D percepts that are generated by chiaroscuro and trompe l'oeil both exploit the same kind of multiple-scale, depth-selective boundary webs that create the impression of a 3D bulge of the blue regions in the watercolor percept in Figure 3.28.
    || Chiaroscuro - Rembrandt self-portrait. Trompe l'oeil - Graham Rust.
  • image p109fig03.30 The triptych by Jo Baer, called Primary Light Group: Red, Green, and Blue 1964-1965, generates watercolor illusion percepts which, when displayed side by side in a museum, create a striking impression.
  • image p110fig03.31 Henry Hensche's painting of The Bather is suffused with light.
    || p109c2h0.8 (Hawthorne 1938/60) wrote "... (pp 25-26) the outline and color of each spot of color against every other spot of color it touches, is the only kind of drawing you need to bother about ...Let color make form- do not make form and color it. ...". p110c1h0.6 (Robichaux 1997, p27) "... The untrained eye is fooled to think he sees forms by the model edges, not with color ... Fool the eye into seeing form without edges. (p33) Every form change must be a color change. ...".
  • image p110fig03.32 Claude Monet's painting of Poppies Near Argenteuil. See the text for details.
    || Claude Monet Poppies Near Argenteuil 1873. p110c2h0.35 "... the red poppies and the green field around them are painted to have almost the same luminescence; that is, they are almost equiluminant. As a result, the boundaries between the red and green regions are weak and positionally unstable, thereby facilitating an occasional impression of the poppies moving in a gentle breeze, especially as one's attention wanders over the scene. ...".
  • image p112fig03.33 Various ways that spatial gradients in boundary webs can cause self-luminous percepts. See the text for details.
    || Boundary web gradient can cause self luminosity. Similar to watercolor illusion. Gloss by attached highlight (Beck, Prazdny 1981), glare. (Bressan 2001) Double brilliant illusion, (Grossberg, Hong 2004) simulation. p111c2h0.5 "... This effect may be explained as the result of the boundary webs that are generated in response to the luminance gradients and how they control the filling-in of lightness within themselves and abutting regions. ... Due to the mutually inhibitory interactions across the boundaries that comprise these boundary webs, more lightness can spread into the central square as the steepness of the boundary gradients increases. ...".
  • image p112fig03.34 Examples of Ross Bleckner's self-luminous paintings.
    || Self-luminous paintings (Ross Bleckner). Galaxy painting (1993), Galaxy with Birds (1993). p112c2h0.15 "... Bleckner does this, not by painting large surface areas with high reflectances or bright colors, but rather creating compositions of small, star-like, circular regions that are perceived as self luminous ...".
  • image p113fig03.35 The Highest Luminance As White (HLAW) rule of Hans Wallach (1948) works in some cases (top row) but not others (bottom row).
  • image p113fig03.36 The Blurred Highest Luminance As White (BHLAW) rule that I developed with my PhD student, Simon Hong, works in cases where the rule of Hans Wallach fails, as can be seen by comparing the simulation in Figure 3.35 with the one in this figure.
    || Blurred Highest Luminance As White (BHLAW) rule (Grossberg, Hong 2004, 2006). Spatial integration (blurring) adds spatial context to lightness perception.
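The BHLAW anchoring idea can be sketched in one dimension: blur the luminance profile, anchor the highest blurred value to white, and let any raw luminance that overshoots that anchor read as self-luminous. The box blur, radius, and anchoring value below are illustrative assumptions, not the published Grossberg-Hong model:

```python
def bhlaw_lightness(lum, radius=1):
    """Sketch of the Blurred-Highest-Luminance-As-White rule: blur the
    luminance profile, anchor the highest BLURRED value to white (1.0),
    then rescale the raw luminances.  Values above 1.0 correspond to a
    self-luminous percept."""
    n = len(lum)
    blurred = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        blurred.append(sum(lum[lo:hi]) / (hi - lo))   # box blur
    anchor = max(blurred)
    return [v / anchor for v in lum]

# A narrow bright spike overshoots its blurred anchor (self-luminous),
# while a wide bright patch anchors to white:
narrow = bhlaw_lightness([0.2, 0.2, 0.9, 0.2, 0.2])
wide = bhlaw_lightness([0.2, 0.9, 0.9, 0.9, 0.2])
```

The blur is what adds the spatial context: the same peak luminance is "white" when it is spatially extended, but "glowing" when it is too small to dominate its blurred neighborhood.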
  • image p114fig03.37 How the Blurred Highest Luminance as White rule sometimes normalizes the highest luminance to white (left panel) but at other times normalizes it to be self-luminous (right panel). See the text for details.
    || perceived reflectance vs cross-section of visual field. [white level, anchored lightness, self-luminous*, BHLAW]. *self-luminous only when conditions are right.
  • image p114fig03.38 Four color-field spray paintings of Jules Olitski. The text explains why they generate surface percepts with such ambiguous depth.
    || Jules and his friends (1967), Lysander-1 (1970), Instant Loveland (1968), Comprehensive Dream (1965). p114c2h0.4 "... it is impossible to visually perceive discrete colored units within the boundary webs in Olitski's spray paintings. ... create a sense of ambiguous depth in the viewer, similar to staring into a space filled with colored fog, or into a sunset free of discrete clouds. Olitski intentionally created this effect. ...".
  • image p115fig03.39 Two of Gene Davis's paintings in full color (top row) and in monochromatic versions (bottom row). The text explains how they achieve their different percepts of grouping and relative depth.
    || Gene Davis [Black popcorn, Pink flamingo] in [full color, monochromatic]. p115c1h0.8 "... His paintings ... are built up from vertical stripes. They do not contain size differences, shading, or recognizable objects. ...". p115c2h0.15 "... For starters, color similarities and/or almost equal luminances between stripes can influence whether the viewer's eyes are drawn to individual stripes or groups of stripes. The achromatic versions of the two paintings more clearly show regions where the color assimilation is facilitated. ... Such form-sensitive spatial attention is called an attentional shroud. An attentional shroud, in turn, is created by a dynamical state in the brain that I call a surface-shroud resonance. ...".
  • image p116fig03.40 A combination of T-junctions and perspective cues can create a strong percept of depth in response to 2D images, with a famous example being Leonardo da Vinci's painting of the Mona Lisa.
    || p116c2h0.05 "... Many Renaissance artists learned how to use perspective cues ... Renaissance artists also understood how to use T-junctions like the ones that occur where the vertical and horizontal edges intersect in Figure 3.40 (left column, bottom row), or in the Kanizsa square percepts in Figure 3.3, or in the zebra image in Figure 3.21b. ..."
  • image p117fig03.41 End gaps, or small breaks or weakenings of boundaries, can form where a stronger boundary abuts a weaker, like-oriented, boundary, as occurs where black boundaries touch red boundaries in the neon color spreading image of Figure 3.11.
    || Boundary contours - lower contrast boundary signals are weakened. feature contours - no inhibition, feature signals survive and spread. MP -> [BCS, FCS]. BCS -> FCS.
  • image p117fig03.42 Two paintings by Frank Stella. See the text for details.
    || Firuzabad (top row) ... and Khurasan Gate (variation) (bottom row). p117c1h0.75 "... The luminance and color structure within a painting affects how it groups and stratifies the figures within it. These processes, in turn, affect the formation of attentional shrouds that organize how spatial attention is allocated as we view them. ..." "... Stella wrote Firuzabad is a good example of looking for stability and trying to create as much instability as possible. 'Cause those things are like bicycle wheels spinning around'."
  • image p120fig03.43 Four paintings by Monet of the Rouen cathedral under different lighting conditions (top row) and their monochromatic versions (bottom row). See the text for details.
    || p119c2h0.25 "... Monet uses nearby colors that are nearly equiluminant, and sharp, high-contrast luminance defined edges are sparse. He hereby creates weaker boundary signals within and between the parts of many forms, and stronger boundary signals between the forms. This combination facilitates color spreading within the forms and better separation of brightness and color differences between forms. ... The grayscale versions of these paintings demonstrate the near equiluminance of the brushstrokes within forms, and places in which brightness and color differences significantly influence the groupings that differentiate between forms, including the differentiation between the cathedral and the sky. ..."
  • image p120fig03.44 The Rouen cathedral at sunset generates very different boundary webs than it does in full sunlight, as illustrated by Figure 3.45.
    || Rouen Cathedral at sunset (Monet 1892-1894).
    • Lighting almost equiluminant
    • Most boundaries are thus caused by color differences, not luminance differences
    • Fine architectural details are obscured, leading to...
    • Coarser and more uniform boundary webs, so...
    • Less depth in the painting.
  • image p121fig03.45 The Rouen cathedral in full sunlight.
    || Rouen Cathedral full sunlight (Monet 1892-1894).
    • Lighting is strongly non-uniform across most of the painting
    • Strong boundaries due to both luminance and color differences
    • Fine architectural details are much clearer, leading to...
    • Finer and more non-uniform boundary webs, so...
    • Much more detail and depth
  • image p121fig03.46 The Rouen cathedral in full sunlight contains T-Junctions that are not salient in the painting of it at sunset. These are among the painting's features that give it a much more depthful appearance.
    || Rouen Cathedral full sunlight (Monet 1892-1894).
    • There are also more T-junctions where vertical boundaries occlude horizontal boundaries, or conversely...
    • Leading to more depth.
    p119c2h1.0 "... Such T-junction boundary occlusions ... can generate percepts of depth in the absence of any other visual clues. ...".
  • image p123fig04.01 A classical example of how boundaries are barriers to filling-in.
    || Combining stabilized images with filling-in (Krauskopf 1963, Yarbus 1967). Image: Stabilize these boundaries with suction cup attached to retina or electronic feedback circuit. Percept: A visible effect of an invisible cause!
  • image p124fig04.02 The vertical cusp of lesser and greater illuminance is the same in both images, but the one on the left prevents brightness from flowing around it by creating closed boundaries that tightly surround the cusp.
  • image p126fig04.03 A McCann Mondrian is an excellent display with which to illustrate how our brains discount the illuminant to compute the "real" colors of objects. See the text for details.
    || Color constancy: compute ratios. McCann Mondrian. Biological advantage: never see in bright light, eg tropical fish.
    Discount the illuminant | Compute lightness
    Different colors seen from the same spectrum | ... similar to those seen in white light
    Physical basis: reflectance RATIOS!
  • image p128fig04.04 When a gradient of light illuminates a McCann Mondrian, there is a jump in the total light that is reflected at nearby positions where the reflectances of the patches change.
    || Compute reflectance changes at contours. Fill-in illuminant-discounted surface colors.
    left | right
    illuminant: I + ε | I - ε
    luminance: A*(I + ε) | B*(I - ε)
    contrast: A*(I + ε) / (B*(I - ε)) - 1 ≈ A/B - 1
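The ratio argument is easy to check numerically. A toy sketch (the reflectances 0.8 and 0.4 and the illuminant values are made-up numbers): across a reflectance edge the illuminant is nearly equal on both sides, so its contribution cancels from the luminance ratio.

```python
def edge_ratio(refl_left, refl_right, illum_left, illum_right):
    """Luminance ratio across a reflectance edge.  When the illumination
    gradient is smooth (illum_left ~ illum_right), this approximates the
    REFLECTANCE ratio A/B, discounting the illuminant."""
    return (refl_left * illum_left) / (refl_right * illum_right)

# Same two patches (reflectances 0.8 and 0.4) under a dim and a 100x
# brighter smooth illumination gradient:
dim = edge_ratio(0.8, 0.4, illum_left=1.01, illum_right=0.99)
bright = edge_ratio(0.8, 0.4, illum_left=101.0, illum_right=99.0)
# Both ratios are ~2.04, close to the true reflectance ratio 0.8/0.4 = 2.
```

Because the ratio is nearly invariant to overall illumination, contours computed from it are "color contours" that already have the illuminant discounted, which is why filling-in from them restores surface colors.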
  • image p129fig04.05 Multiple-scale balanced competition chooses color contours where the reflectances of the patches change. These color contours discount the illuminant.
    || Compute reflectance changes at contours. Fill-in illuminant-discounted surface colors. Discount illuminant: compute color contours.
  • image p129fig04.06 Filling-in of color contours restores a surface percept with colors that substantially discount the illuminant.
    || Compute reflectance changes at contours. Fill-in illuminant-discounted surface colors. Fill-in surface color: hierarchical resolution of uncertainty.
  • image p130fig04.07 Simulation of brightness constancy under uniform illumination.
    || Simulation of brightness constancy (Grossberg & Todorovic 1988). Uniform illumination. [stimulus (S), feature (F), boundary (B), output]. B -> F -> S -> B: Veridical! Boundary peaks are spatially narrower than feature peaks.
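The filling-in step in simulations like this can be caricatured as diffusion that is blocked by boundaries: at steady state, each boundary-delimited compartment settles to the average of its feature-contour signals. A minimal 1D sketch (the data layout and the averaging shortcut are illustrative assumptions, not the Grossberg-Todorovic equations):

```python
def fill_in(features, boundaries):
    """Sketch of surface filling-in: feature-contour signals spread
    freely until they hit a boundary, so the steady state is the average
    feature value within each compartment.  boundaries[i] = True blocks
    spreading between cell i and cell i+1."""
    out, start = [0.0] * len(features), 0
    for i in range(len(features)):
        if i == len(features) - 1 or boundaries[i]:
            seg = features[start:i + 1]
            avg = sum(seg) / len(seg)
            for j in range(start, i + 1):
                out[j] = avg
            start = i + 1
    return out

# Two compartments separated by one boundary: each one fills in to the
# average of the feature signals trapped inside it.
out = fill_in([1.0, 0.0, 0.0, 3.0, 0.0, 0.0],
              [False, False, True, False, False])
```

This is why the boundary pattern (B) determines where brightness stays put, while the illuminant-discounted feature contours (F) determine how bright each compartment becomes.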
  • image p131fig04.08 Simulation of brightness constancy under an illumination gradient. Note that the feature contour pattern (F) is the same in both cases, as is the boundary contour (B) pattern that is derived from it, and the final filled-in surface.
    || Simulation of brightness constancy. Discount the illuminant. [stimulus (S), feature (F), boundary (B), output]. B -> F -> S -> B: not veridical, but useful! Ratio-sensitive feature contours (F).
  • image p131fig04.09 Simulation of brightness contrast.
    || Simulation of brightness contrast. [stimulus (S), feature (F), boundary (B), output].
  • image p132fig04.10 Simulation of brightness assimilation. Note how the equal steps on the left and right sides of the luminance profile are transformed into different brightness levels.
    || Simulation of brightness assimilation. [stimulus (S), feature (F), boundary (B), output].
  • image p132fig04.11 Simulations of a double step (left panel) and the Craik-O'Brien-Cornsweet Effect (COCE) (right panel). Note that discounting the illuminant creates similar feature contour patterns, from which the fact that the COCE looks like the double step follows immediately.
    || Simulations of double step and COCE. [stimulus (S), feature (F), boundary (B), output].
  • image p133fig04.12 Simulation of the 2D COCE.
    || (Todorovic, Grossberg 1988). p132c2h0.6 "... 2D Craik-O'Brien-Cornsweet Effect percepts that are generated by the stimulus in the left panel of Figure 4.2. ..."
  • image p134fig04.13 Contrast constancy shows how the relative luminances in a picture viewed under an illumination gradient can even be reversed to restore the correct reflectances, due to discounting the illuminant.
  • image p134fig04.14 The kinds of displays that Michael Paradiso and Ken Nakayama used to catch filling-in "in the act", and which Karl Arrington then simulated using the Grossberg and Todorovic 1988 model.
    || Experiments on filling-in. Catching "filling-in" in the act (Paradiso, Nakayama 1991). (Arrington 1994 Vision Research 34, 3371-3387) simulated these data using the model of Grossberg and Todorovic 1988.
  • image p138fig04.15 Simple cells are oriented contrast detectors, not edge detectors.
    || From oriented filtering to grouping and boundary completion (Hubel, Wiesel 1968). Oriented receptive fields: SIMPLE CELLS. Sensitive to: orientation, [amount, direction] of contrast, spatial scale. Oriented local contrast detectors, not edge detectors!
  • image p139fig04.16 The simplest way to realize an odd simple cell receptive field and firing threshold.
    || "Simplest" simple cell model. need more complexity for processing natural scenes. Difference-of-Gaussian or Gabor filter (J. Daugman, D. Pollen...). Output signal vs cell activity. Threshold linear signal, half-wave rectification. /home/bill/web/Neural nets/Grossberg/Grossbergs list of [figure, table]s.HtmWeb.html:291:
  • image p140fig04.17 Complex cells pool inputs from simple cells that are sensitive to opposite contrast polarities. Complex cells hereby become contrast invariant, and can respond to contrasts of either polarity.
    || Complex cells: pool signals from like-oriented simple cells of opposite contrast polarity at the same position. They are "insensitive to contrast polarity". Half-wave rectification of inputs from simple cells.
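The pooling step is simple enough to sketch. A toy model, assuming just one pair of opposite-polarity simple cells at a single position (real complex cells pool over more inputs):

```python
def halfwave(x):
    """Half-wave rectification: only positive activity is transmitted."""
    return max(0.0, x)

def complex_cell(contrast):
    """Sketch of a complex cell pooling two like-oriented simple cells of
    OPPOSITE contrast polarity at the same position.  Each simple cell
    half-wave rectifies its input; the complex cell sums them, so it
    responds to either polarity ("insensitive to contrast polarity")."""
    dark_to_light = halfwave(+contrast)   # simple cell preferring one polarity
    light_to_dark = halfwave(-contrast)   # simple cell preferring the other
    return dark_to_light + light_to_dark
```

The result behaves like an absolute value of local contrast, which is why complex-cell boundaries can pool signals across a reverse-contrast edge (as in the reverse-contrast Kanizsa square) even though each simple cell cannot.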
  • image p141fig04.18 The images formed on the two retinas in response to a single object in the world are displaced by different amounts with respect to their foveas. This binocular disparity is a powerful cue for determining the depth of the object from an observer.
    || Binocular Disparity. Binocular disparities are used in the brain to reconstruct depth from 2D retinal inputs, for relatively near objects.
  • image p141fig04.19 A laminar cortical circuit for computing binocular disparities in layer 3B of V1 at binocular simple cells. These cells add positionally disparate inputs from like-polarized monocular simple cells (layer 4 of V1). Binocular simple cells at each position that are sensitive to opposite polarities then add their outputs at complex cells in layer 2/3. Chapter 10 will explain how these laminar circuits work in greater detail.
    || Laminar cortical circuit for complex cells. [left, right] eye.
    V1 layer | description
    2/3A | complex cells
    3B | binocular simple cells
    4 | monocular simple cells
  • image p142fig04.20 A Glass pattern and a reverse-contrast Glass pattern give rise to different boundary groupings because simple cells can only pool signals from like-polarity visual features. See the text for details.
  • image p143fig04.21 Oriented simple cells can respond at thick enough bar ends, but not at thin enough line ends. See the text for an explanation of why this is true, and its implications for visual system design.
    || Hierarchical resolution of uncertainty. For a given field size, different responses occur at bar ends and line ends. For a thin line, no detector perpendicular to the line end can respond enough to close the boundary there. Network activity.
  • image p144fig04.22 Computer simulation of how simple and complex cells respond to the end of a line (gray region) that is thin enough relative to the receptive field size (thick dashed region in the left panel). These cells cannot detect the line end, as indicated by the lack of responses there in the left panel (oriented short lines denote the cells' preferred positions and orientations, and their lengths denote relative cell activations). Such an end gap is corrected in the responses of hypercomplex cells that create a boundary at the line end which is called an end cut (right panel). See the text for details.
    || End gap and end cut simulation (Grossberg, Mingolla 1985). End gap, filter size, end cut.
  • image p145fig04.23 If end gaps were not closed by end cuts, then color would flow out of every line end!
    || A perceptual disaster in the feature contour system. feature contour, line boundary. input -> [boundary, surface]. boundary -> surface. Color would flow out of every line end! as it does during neon color spreading.
  • image p145fig04.24 A brain's task in creating an end cut to replace an ambiguous end gap requires that it be sensitive to the pattern of signals across the network, not just the activities of individual neurons.
    || Hierarchical resolution of uncertainty. End Cuts. The boundary system must CREATE a line end at next processing stage: Every line end is illusory! input -> ambiguous -> end cut. vertical -> vertical, ambiguous -> horizontal. A pattern-to-pattern map, not a pixel-to-pixel map.
  • image p146fig04.25 Networks of simple, complex, and hypercomplex cells can create end cuts as an example of hierarchical resolution of uncertainty. See the text for details.
    || How are end cuts created? (Grossberg 1984) Two stages of short-range competition. 1st stage: simple cells -> complex cells -> hypercomplex (endstopped complex) cells. First competitive stage: across position, same orientation. Second competitive stage: same position, across orientation. -> cooperation.
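The two competitive stages can be caricatured in a few lines of code. This is a loose sketch under strong simplifying assumptions (1D positions, only two orientations, nearest-neighbor inhibition, a push-pull second stage, and a made-up inhibition constant), not the Grossberg-Mingolla equations:

```python
def end_cut(vertical, inhibition=0.8):
    """Sketch of the two competitive stages that create an end cut.
    First competitive stage: like-oriented (vertical) cells inhibit each
    other ACROSS POSITION, driving the cell just beyond a line end below
    zero.  Second competitive stage: orientations compete WITHIN POSITION
    in push-pull fashion, so suppressed vertical activity disinhibits the
    horizontal cell at that position, creating a horizontal end cut."""
    n = len(vertical)
    stage1 = []
    for i in range(n):
        nbrs = vertical[max(0, i - 1):i] + vertical[i + 1:i + 2]
        stage1.append(vertical[i] - inhibition * sum(nbrs) / len(nbrs))
    vert_out = [max(0.0, s) for s in stage1]
    horiz_out = [max(0.0, -s) for s in stage1]  # push-pull disinhibition
    return vert_out, horiz_out

# Vertical complex-cell responses along a thin line occupying cells 0-2:
v, h = end_cut([1.0, 1.0, 1.0, 0.0, 0.0])
# A horizontal end cut appears only at cell 3, just past the line end.
```

Note the pattern-to-pattern character emphasized above: the horizontal end cut appears only where the spatial pattern of vertical activity collapses, not at any position considered in isolation.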
  • image p148fig04.26 End cuts are formed during neon color spreading in the same way that they are formed at line ends.
    || End cut during neon color spreading.
    FIRST competitive stage | SECOND competitive stage
    within orientation | across orientation
    across position | within position
    to generate end cuts.
  • image p149fig04.27 Bipole cells can form boundaries that interpolate end cuts, and use their cooperative-competitive interactions to choose the boundary groupings that have the most support from them.
    || Bipole cells: boundary completion. long-range cooperation & short-range inhibition: complete winning boundary groupings and suppress weaker boundaries.
  • image p150fig04.28 Bipole cells have two branches (A and B), or poles, in their receptive fields. They help to carry out long-range boundary completion.
    || Bipole property. Boundary completion via long-range cooperation. Completing boundaries inwardly between pairs or greater numbers of inducers in an oriented way. fuzzy "AND" gate.
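The fuzzy "AND" gate can be sketched in a few lines of code. This is a minimal illustration with made-up numbers, not Grossberg's published bipole equations: the cell fires only when both poles of its receptive field receive enough collinear support, so boundaries complete inwardly between pairs of inducers but never outwardly from a single line end.

```python
def bipole_response(left: float, right: float, threshold: float = 1.0) -> float:
    """Sketch of the bipole 'fuzzy AND' gate (illustrative constants).
    Pole inputs are clipped to [0, 1], so a lone pole can contribute at
    most 1.0 and can never exceed the threshold by itself; only combined
    support from BOTH branches drives boundary completion."""
    left = min(max(left, 0.0), 1.0)
    right = min(max(right, 0.0), 1.0)
    return max(left + right - threshold, 0.0)
```

A single inducer (`bipole_response(1.0, 0.0)`) yields zero output, while inducers on both sides (`bipole_response(1.0, 1.0)`) complete the boundary, which is the inward-only completion property the caption describes.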
  • image p151fig04.29 Experimental evidence of bipole cells in cortical area V2 was reported by von der Heydt, Peterhans, and Baumgartner (1984).
    || Bipoles: first neurophysiological evidence (V2) (von der Heydt, Peterhans, Baumgartner 1984, Peterhans, von der Heydt 1988). (Grossberg 1984) prediction.
    Ordering: stimulus (S), probe location (*), response of cells in V2?
    ...(S)*...                  YES
    ...*...(S)                  NO
    (S)...*...                  NO
    (S)...*...(S)               YES
    (S)...*... (more contrast)  NO
    (S)...*.....(S)             YES
    Evidence for receptive field.
  • image p151fig04.30 Anatomical evidence for long-range horizontal connections has also been reported, as illustrated by the example above from (Bosking et al 1997).
    || Anatomy: horizontal connections (V1) (Bosking et al 1997). tree shrew. [10, 20]*[20, 10, 0, -10, -20] (degrees).
  • image p152fig04.31 The predicted bipole cell receptive field (upper left corner) has been supported by both neurophysiological data and psychophysical data, and used in various forms by many modelers. See the text for details.
    || Bipoles through the ages. (Grossberg 1984; Grossberg, Mingolla 1985). (Field, Hayes, Hess 1993) "association field". (Heitger, von der Heydt 1993). (Williams, Jacobs 1997). cf "relatability": geometric constraints on which contours get to group (Kellman & Shipley 1991). Also "tensor voting" (Ullman, Zucker, Mumford, Guy, Medioni, ...).
  • image p153fig04.32 The double filter network embodies simple, complex, and hypercomplex (or endstopped complex) cells. It feeds into a network of bipole cells that can complete boundaries when it properly interacts with the double filter.
    || Double filter and grouping network. Cells : simple -> complex -> hypercomplex (endstopping) -> bipole
    Grouping network: bipole cells
    Double filter: hypercomplex cells (endstopping), complex cells, simple cells
  • image p156fig04.33 A tripartite texture (top row) and two bipartite textures (bottom row) that illustrate how emergent boundary groupings can segregate textured regions from one another.
  • image p157fig04.34 Some textures that were simulated with mixed success by the complex channels model. In particular, the model gets the wrong answer for the textures in (g) and (i). The Boundary Contour System model of Figure 4.32, which includes both a double filter and a bipole grouping network, simulates the observed results.
  • image p159fig04.35 Spatial impenetrability prevents grouping between the pac-men figures in the left figure, but not in the figure on the right.
    || p158c2h0.75 "... In the image shown in the left panel, the horizontal boundaries of the background squares interfere with vertical boundary completion by vertically-oriented bipole cells, again by spatial impenetrability. In contrast, the vertical boundaries of the background squares are collinear with the vertical pac-man inducers, thereby supporting formation of the square boundaries. Finer aspects of these percepts, such as why the square ... (right panel) appears to lie in front of four partially occluded circular discs, as regularly occurs when the Kanizsa square can form (eg Figure 3.3), can be understood using FACADE theory mechanisms that will be shown below to explain many figure-ground percepts using natural extensions to the three dimensional world of boundary and surface mechanisms that we have already discussed. ..."
  • image p159fig04.36 Graffiti art by Banksy exploits properties of amodal boundary completion and spatial impenetrability.
    || p159c1h0.75 perceptual psychologist Nava Rubin: "... When the wall is smooth, Banksy leaves the regions previously covered by stencil unpainted, relying on observers' perception to segregate figural regions from the (identically colored) background. But when the wall is patterned with large-scale luminance edges - eg due to bricks - Banksy takes the extra time to fill in unpainted figural regions with another color (Rubin 2015). ..."
  • image p161fig04.37 Kanizsa squares that form either collinearly to their inducers (left panel) or perpendicular to them (right panel) confirm predictions of the BCS boundary completion model.
    || Analog-sensitive boundary completion. contour strength vs Kanizsa square image. Increases with "support ratio" (Shipley, Kellman 1992). Inverted-U (Lesher, Mingolla 1993; cf Soriano, Spillmann, Bach 1994) (shifted gratings). p370h0.6 BCS = Boundary Contour System, FCS = Feature Contour System. p161c1h0.85 "... As predicted by the BCS, they found an Inverted-U in contour strength as a function of line density. ... This effect may be explained by the action of the short-range competition that occurs before the stage of long-range cooperative grouping by bipole cells (Figure 4.32). It is thus another example of the balance between cooperative and competitive mechanisms. ..."
  • image p162fig04.38 How long-range cooperation among bipole cells and short-range competition by hypercomplex cells work together to generate the inverted-U in boundary strength that is found in the data of Figure 4.37 (right panel).
    || Cooperation and competition during grouping.
    few lines: wide spacing; inputs lie outside the spatial range of competition, so more inputs cause higher bipole activity
    more lines: narrower spacing slightly weakens the net input to bipoles from each inducer
    increasing line density: inhibition reduces the net total input to bipoles
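The cooperation-competition balance in that figure can be illustrated with a toy calculation. The functional forms and constants below are my own assumptions, not the BCS equations: total cooperative input grows with the number of inducing lines until their spacing enters the range of the short-range competition, after which inhibition pulls the net input back down, producing the inverted-U.

```python
def net_bipole_input(n_lines: int, display_width: float = 10.0) -> float:
    """Toy inverted-U model (illustrative functional forms only).
    Each inducing line adds excitatory input to a bipole cell, but as
    line density rises, short-range competition between neighbors
    weakens the net input each inducer delivers."""
    spacing = display_width / n_lines
    competition_range = 2.0
    # Competition only acts once inducers fall inside its spatial range.
    inhibition = max(0.0, 1.0 - spacing / competition_range)
    per_inducer = max(0.0, 1.0 - 2.0 * inhibition)
    return n_lines * per_inducer

strengths = [net_bipole_input(n) for n in range(1, 21)]
# strengths rises with line count, peaks, then falls back toward zero
```

With these made-up constants the peak sits at five lines; the qualitative shape (rise from cooperation, fall from competition) is the point, not the numbers.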
  • image p163fig04.39 A schematic of the LAMINART model that explains key aspects of laminar visual cortical anatomy and dynamics. LGN -> V1 [6, 4, 2/3] -> V2 [6, 4, 2/3]
    || p163c1h0.6 "... The first article about laminar computing ... proposed how the laminar cortical model could process 2D pictures using bottom-up filtering and horizontal bipole grouping interactions (Grossberg, Mingolla, Ross 1997). In 1999, I was able to extend the model to also include top-down circuits for expectation and attention (Grossberg 1999) (right panel). Such a synthesis of laminar bottom-up, horizontal, and top-down circuits is characteristic of the cerebral cortex (left panel). I called it LAMINART because it began to show how properties of Adaptive Resonance Theory, or ART, notably the ART prediction about how top-down expectations and attention work, are realized by identified cortical cells and circuits. You can immediately see from the schematic laminar circuit diagram ... (right panel) that circuits in V2 seem to repeat circuits in V1, albeit with a larger spatial scale, despite the fact that V1 and V2 carry out different functions. How this anatomical similarity can coexist with functional diversity will be clarified in subsequent sections and chapters. It enables different kinds of biological intelligence to communicate seamlessly while carrying out their different psychological functions. ..."
  • image p164fig04.40 The Koffka-Benussi ring. See the text for details.
    || p164c2h0.25 "... [left image] The luminance of the ring is intermediate between the luminances of the two background regions. Its perceived brightness is also between the brightnesses of the two background regions, and appears to be uniform throughout. The right image differs from the left only in that a vertical line divides the two halves of the ring where it intersects the two halves of the background. Although the luminance of the ring is still uniform throughout, the two halves of the ring now have noticeably different brightnesses, with the left half of the ring looking darker than the right half. How can drawing a line have such a profound effect on the brightnesses of surface positions that are so far away from the line? ..."
  • image p165fig04.41 The Kanizsa-Minguzzi ring. See the text for details.
    || p165c1h0.6 "... (left panel), the annulus is divided by two line segments into annular sectors of unequal area. Careful viewing shows that the smaller sector looks a little brighter than the larger one. (Kanizsa, Minguzzi 1986) noted that "this unexpected effect is not easily explained. In fact, it cannot be accounted for by any simple psychological mechanism such as lateral inhibition or frequency filtering. Furthermore, it does not seem obvious to invoke organizational factors, like figural belongingness or figure-ground articulation."" p165c2h0.35 "... (Grossberg, Todorovic 1988). Our main claim is that the two radial lines play two roles, one in the formation of boundaries with which to contain the filling-in process, and the other as a source of feature contour signals that are filled-in within the annular regions to create a surface brightness percept. ..."
  • image p166fig04.42 Computer simulation of Kanizsa-Minguzzi ring percept. See the text for details.
  • image p167fig04.43 (a) How bipole cells cause end cuts. (b) The Necker cube generates a bistable percept of two 3D parallelepipeds. (c) Focusing spatial attention on one of the disks makes it look both nearer and darker, as (Tse 1995) noted and (Grossberg, Yazdanbakhsh 1995) explained.
    || T-junction sensitivity. image -> bipole cells -> boundary. (+) long-range cooperation, (-) short-range competition.
  • image p168fig04.44 Macrocircuit of the main boundary and surface formation stages that take place from the lateral geniculate nucleus, or LGN, through cortical areas [V1, V2, V4]. See the text for details.
    ||
    left eye | binocular | right eye
    V4 binocular surface
    V2 monocular surface | V2 layer 2/3 binocular boundary | V2 monocular surface
    V2 layer 4 binocular boundary
    V1 monocular surface | V1 monocular boundary | V1 binocular boundary | V1 monocular boundary | V1 monocular surface
    LGN | LGN
  • image p168fig04.45 How ON and OFF feature contour (FC) activities give rise to filled-in surface regions when they are adjacent to a like-oriented boundary, but not otherwise.
  • image p170fig04.46 Surface regions can fill-in using feature contour inputs (+ and - signs) if they are adjacent to, and collinear with, boundary contour inputs (solid line), as in (a), but not otherwise, as in (b).
  • image p170fig04.47 A double-opponent network processes output signals from opponent ON and OFF Filling-In DOmains, or FIDOs.
    || OFF FIDO -> shunting networks -> ON FIDO -> shunting networks -> opponent interaction -> FIDO outputs
  • image p171fig04.48 How closed boundaries contain filling-in of feature contour signals, whereas open boundaries allow color to spread to both sides of the boundary.
    || Before filling-in: boundary contour, illuminant-discounted feature contour; After filling-in: no gap, gap
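The containment property in that figure can be sketched with a one-dimensional diffusion model. This is an illustrative assumption of mine; the actual Filling-In DOmain dynamics are shunting membrane equations. Feature activity spreads between neighboring cells except across a boundary, so a closed boundary traps the filled-in value while a gap lets it spread to both sides.

```python
import numpy as np

def fill_in(feature: np.ndarray, boundary: np.ndarray, steps: int = 2000) -> np.ndarray:
    """Illustrative filling-in by discrete diffusion (not the actual FIDO
    equations).  feature: initial feature-contour input per cell.
    boundary[i] = 1 blocks diffusion between cells i and i+1."""
    x = feature.astype(float).copy()
    perm = 1.0 - boundary        # permeability of each junction
    rate = 0.2                   # diffusion rate (stable for rate < 0.5)
    for _ in range(steps):
        flow = rate * perm * (x[1:] - x[:-1])   # flux across each junction
        x[:-1] += flow           # activity is conserved: what one cell
        x[1:] -= flow            # gains, its neighbor loses
    return x

# 10-cell strip, feature input at cell 2, boundary wall between cells 4 and 5.
feature = np.zeros(10); feature[2] = 1.0
closed = np.zeros(9); closed[4] = 1.0
filled = fill_in(feature, closed)
```

With the wall in place the input spreads evenly over cells 0..4 and none leaks past it; with `boundary = np.zeros(9)` the same input dissipates over the whole strip, the analog of color spreading out of an end gap.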
  • image p171fig04.49 An example of DaVinci stereopsis in which the left eye sees more of the wall between A and C than the right eye does. The region between B and C is seen only by the left eye because the nearer wall between C and D occludes it from the right eye view.
  • image p173fig04.50 This figure illustrates how a closed boundary can be formed in a prescribed depth due to addition of binocular and monocular boundaries, but not at other depths.
    || How are closed 3D boundaries formed? V1 Binocular, V2 boundary, V2 surface; Prediction: monocular and horizontal boundaries are added to ALL binocular boundaries along the line of sight. Regions that are surrounded by a CLOSED boundary can depth-selectively contain filling-in of lightness and colored signals.
  • image p174fig04.51 The same feedback circuit that ensures complementary consistency between boundaries and surfaces also, automatically, initiates figure-ground separation! See the text for details.
    || before feedback: [V1 -> V2 pale stripe -> V2 thin stripe, "attention pointers" (Cavanagh etal 2010)]; after feedback: [V1 + V2 thin stripe] -> V2 pale stripe via contrast-sensitive [excitation, inhibition] for depths [1, 2] -> object recognition
  • image p174fig04.52 An example of how the 3D LAMINART model can transform the two monocular images of the random dot stereogram in the top row into the three depth-separated surface representations in the bottom row.
    || Stereogram surface percepts: surface lightnesses are segregated in depth (Fang and Grossberg 2009). [left, right] inputs, [far, fixation, near] planes. Contrast with algorithms that just compute disparity matches and let computer code build the surface, eg (Marr, Poggio, etal 1974).
  • image p176fig04.53 The on-center off-surround network within position and across depth helps to explain why brighter Kanizsa squares look closer.
    || inhibition vs. depth. p176c1h0.25 "... to qualitatively understand how this example of proximity-luminance covariance works. It follows directly from the boundary pruning by surface contour feedback signals (Figure 4.51) that achieves complementary consistency and initiates figure-ground perception. ...". p176c1h0.45 "... these inhibitory signals are part of an off-surround network whose strength decreases as the depth difference increases between the surface that generates the signal and its recipient boundaries. ...". p176c1h0.8 "... Within FACADE theory, the perceived depth of a surface is controlled by the boundaries that act as its filling-in generators and barriers (Figure 3.22), since these boundaries select the depth-selective FIDOs within which filling-in can occur, and thereby achieve surface capture. These boundaries, in turn, are themselves strengthened after surface-to-boundary contour feedback eliminates redundant boundaries that cannot support successful filling-in (Figure 4.51). These surface contour feedback signals have precisely the properties that are needed to explain why brighter Kanizsa squares look closer! ..."
  • image p178fig04.54 Initial steps in figure-ground separation. See the text for details.
    ||
    topLeft: repeats the image in Figure 1.3
    topRight: shows again the long-range cooperation and short-range competition that are controlled by the bipole grouping process (Figure 4.43a middle panel)
    bottomLeft: shows the end gaps that are caused by these bipole grouping mechanisms
    bottomRight: shows how surface filling-in is contained within the closed horizontal rectangular boundary, but spills out of the end gaps formed in the other two rectangles
  • image p178fig04.55 Amodal completion of boundaries and surfaces in V2.
    || Separated V2 boundaries: near, far (amodal boundary completion); separated V2 surfaces: ?horizontal, vertical? (amodal surface filling-in).
  • image p179fig04.56 Final steps in generating a visible, figure-ground separated, 3D surface representation in V4 of the unoccluded parts of opaque surfaces.
    || Visible surface perception.
    Boundary enrichment: near | far | asymmetry between near & far
    V4: horizontal rectangle | horizontal & vertical rectangles | cannot use these (overlapping?) boundaries for occluded object recognition
    V2: horizontal rectangle | vertical rectangle | use these boundaries for occluded object recognition
    Visible surface filling-in: filling-in of entire vertical rectangle | partial filling-in of horizontal rectangle | visible percept of unoccluded [vertical] surface
  • image p181fig04.57 Percepts of unimodal and bistable transparency (top row) as well as of a flat 2D surface (bottom row, left column) can be induced just by changing the relative contrasts in an image with a fixed geometry.
    || X junction
  • image p182fig04.58 LAMINART model processing stages that are sufficient to explain many percepts of transparency, including those summarized in Figure 4.57.
    || [left, right] eye, [LGN, V1 [6, 4, 3B, 2/3 A], V2 [4, 2/3]], [mo, bi]nocular cart [simple, complex] cells, [excita, inhibi]tory cart [connection, cell]s.
  • image p186fig05.01 Humans and other autonomous adaptive intelligent agents need to be able to learn both many-to-one and one-to-many maps.
    || Learn many-to-one (compression, naming) and one-to-many (expert knowledge) maps
  • image p186fig05.02 Learning a many-to-one map from multiple visual fonts of a letter to the letter's name requires a stage of category learning followed by one of associatively learned mapping.
    || Many-to-one map: two stages of compression: visual categories, auditory categories
  • image p186fig05.03 Many-to-one maps can learn a huge variety of kinds of predictive information.
    || Many-to-one map, two stage compression: IF-THEN rules: [symptom, test, treatment]s; length of stay in hospital
  • image p189fig05.04 The hippocampus is one of several brain regions that are important in learning and remembering about objects and events that we experience throughout life. The book will describe several hippocampal processes that contribute to this achievement in different ways.
    || hypothalamic nuclei, amygdala, hippocampus, cingulate gyrus, corpus callosum, thalamus
  • image p192fig05.05 ON and OFF cells in the LGN respond differently to the sides and ends of lines.
    || [ON, OFF]-center, [OFF, ON]-surround (respectively). OFF-center cells maximum response at line end (interior), ON-center cells maximum response along sides (exterior)
  • image p192fig05.06 Bottom-up and top-down circuits between the LGN and cortical area V1. The top-down circuits obey the ART Matching Rule for matching with bottom-up input patterns and focusing attention on expected critical features.
    || Model V1-LGN circuits, version [1, 2]. retina -> LGN relay cells -> interneurons -> cortex [simple, endstopped] cells -> cortex complex cells
  • image p193fig05.07 A more detailed description of the connections between retinal ganglion cells, the LGN, and V1.
    ||
  • image p193fig05.08 The patterns of LGN activation and inhibition on the sides and ends of a line without the top-down feedback (A) and with it (C). The top-down distribution of excitation (+) and inhibition (-) are shown in (B).
    ||
  • image p194fig05.09 A computer simulation of the percept (D) that is generated by feature contours (B) and boundary contours (C) in response to an Ehrenstein disk stimulus (A).
    ||
  • image p198fig05.10 A competitive learning circuit learns to transform distributed feature patterns into selective responses of recognition categories.
    || Competitive learning and Self-Organized Maps (SOMs). input patterns -> feature level (F1) -> adaptive filter (T=ZS) ->
  • image p199fig05.11 Instar learning enables a bottom-up adaptive filter to become selectively tuned to particular feature patterns. Such pattern learning needs adaptive weights that can either increase or decrease to match the featural activations that they filter.
    || Instar learning STM->LTM: need both increases and decreases in strength for the LTM pattern to learn the STM pattern
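A minimal sketch of the instar rule (gated steepest descent, with an illustrative learning rate of my choosing): while the sampling category cell is active, each adaptive weight moves toward the feature activity it filters, so the LTM vector can both rise and fall to match the STM pattern, exactly the property the caption stresses.

```python
import numpy as np

def instar_update(z: np.ndarray, x: np.ndarray, y: float, rate: float = 0.5) -> np.ndarray:
    """Instar (gated steepest descent) sketch.  Learning is gated by the
    sampling cell's activity y: when y > 0 the bottom-up weight vector z
    tracks the feature STM pattern x, increasing or decreasing each
    component as needed to match x."""
    return z + rate * y * (x - z)

x = np.array([1.0, 0.5, 0.0, 0.25])   # feature-level STM pattern
z = np.zeros(4)
for _ in range(25):
    z = instar_update(z, x, y=1.0)    # z converges to x from below
```

Starting instead from `z = np.ones(4)` converges to the same `x` from above, showing why weights that can only grow could never learn the pattern.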
  • image p200fig05.12 The duality of the outstar and instar networks is evident when they are drawn as above.
    ||
  • image p200fig05.13 Instar and outstar learning are often used to learn the adaptive weights in the bottom-up filters and top-down expectations that occur in ART. The ART Matching Rule for object attention enables top-down expectations to select, amplify, and synchronize expected patterns of critical features, while suppressing unexpected features.
    || Expectations focus attention: feature pattern (STM), Bottom-Up adaptive filter (LTM), Category (STM), competition, Top-Down expectation (LTM); ART Matching Rule: STM before top-down matching, STM after top-down matching (attention!)
  • image p200fig05.14 Outstar learning enables individual sampling cells to learn distributed spatial patterns of activation at the network of cells that they sample. Again, both increases and decreases in LTM traces must be possible to enable them to match the activity pattern at the sampled cells.
    || Outstar learning, need both increases and decreases in ????
  • image p201fig05.15 An outstar can learn an arbitrary spatial pattern of activation at its sampled nodes, or cells. The net pattern that is learned is a time average of all the patterns that are active at the sampled nodes when the sampling node is active.
    || Spatial learning pattern, outstar learning.
  • image p202fig05.16 In the simplest example of category learning, the category that receives the largest total input from the feature level is chosen, and drives learning in the adaptive weights that abut it. Learning in this "classifying vector", denoted by zi, makes this vector more parallel to the input vector from the feature level that is driving the learning (dashed red arrow).
    || Geometry of choice and learning
  • image p202fig05.17 This figure summarizes the simplest equations whereby the adaptive weights of a winning category learn the input pattern that drove it to win, or more generally a time-average of all the input patterns that succeeded in doing so.
    || Geometry of choice and learning, learning trains the closest LTM vector
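The geometry in these two figures can be made concrete in a few lines. This is a deliberately bare winner-take-all sketch, not the full shunting network; the initial weights are chosen well separated, an assumption that sidesteps the dead-unit problem such a bare scheme has.

```python
import numpy as np

def categorize_and_learn(weights: np.ndarray, x: np.ndarray, rate: float = 0.2) -> int:
    """Simplest category choice and learning: the category whose
    classifying weight vector receives the largest total input (dot
    product) wins, and only the winner's vector moves toward the input,
    becoming more parallel to it."""
    winner = int(np.argmax(weights @ x))
    weights[winner] += rate * (x - weights[winner])
    return winner

# Two categories with well-separated initial weight vectors (assumption).
weights = np.array([[0.6, 0.4], [0.4, 0.6]])
A = np.array([1.0, 0.0])
B = np.array([0.0, 1.0])
for _ in range(50):
    categorize_and_learn(weights, A)
    categorize_and_learn(weights, B)
# each classifying vector has rotated to lie nearly parallel to its input
```

After training, each weight vector is a time average of the inputs it won, so a noisy version of either input still activates the same category.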
  • image p205fig05.18 How catastrophic forgetting can occur in a competitive learning or self-organizing map model due to basic properties of competition and associative learning.
    || Learning from pattern sequences: practicing a sequence of spatial patterns can recode all of them! When is learning stable? Input patterns cannot be too dense relative to the number of categories; either not too many distributed inputs relative to the number of categories, or not too many input clusters
  • image p207fig05.19 The ART hypothesis testing and learning cycle. See the text for details about how the attentional system and orienting system interact in order to incorporate learning of novel categories into the corpus of already learned categories without causing catastrophic forgetting.
    ||
  • image p211fig05.20 The PN and N200 event-related potentials are computationally complementary events that are computed within the attentional and orienting systems.
    || PN and N200 are complementary waves. PN [top-down, conditionable, specific] match; N200 [bottom-up, unconditionable, nonspecific] mismatch
  • image p211fig05.21 Sequences of P120, N200, and P300 event-related potentials occur during oddball learning EEG experiments under conditions that ART predicted should occur during sequences of mismatch, arousal, and STM reset events, respectively.
    || ERP support for mismatch-mediated reset: event-related potentials: human scalp potentials. ART predicted correlated sequences of P120-N200-P300 Event Related Potentials during oddball learning. P120 mismatch; N200 arousal/novelty; P300 STM reset. Confirmed in (Banquet and Grossberg 1987)
  • image p213fig05.22 Suppose that a very different exemplar activates a category than the one that originally learned how to do this.
    || By prior learning, X1 at F1 is coded at F2, Suppose that X2 incorrectly activates the same F2 code. How to correct the error? The problem occurs no matter how you define an "error"
  • image p213fig05.23 A category, symbol, or other highly compressed representation cannot determine whether an error has occurred.
    || Compression vs error correction. past vs present. Where is the knowledge that an error was made? Not at F2! The compressed code cannot tell the difference! X2 is at F1 when (green right triangle GRT) is at F2 defines the error. There is a mismatch between X1 and X2 at F1. How does the system know this?
  • image p214fig05.24 Learning of a top-down expectation must occur during bottom-up learning in the adaptive filter in order to be able to match the previously associated feature pattern with the one that is currently active.
    || Learning top-down expectations. When the code (green right triangle GRT) for X1 was learned at F2, GRT learned to read-out X1 at F1. [Bottom-Up, Top-Down] learning
  • image p214fig05.25 The sequence of events whereby a novel input pattern can activate a category which, in turn, reads out its learned top-down expectation to be matched against the input pattern. Error correction thus requires the use of a Match Detector that has properties of the Processing Negativity ERP.
    || How is an error corrected? During bottom-up learning, top-down learning must also occur so that the pattern that is read out top-down can be compared with the pattern that is activated by bottom-up inputs. Match detector: Processing Negativity ERP. 1. top-down, 2. conditionable, 3. specific, 4. match
  • image p214fig05.26 When a big enough mismatch occurs, the orienting system is activated and sends a burst of nonspecific arousal to the category level. This Mismatch Detector has properties of the N200 ERP.
    || Mismatch triggers nonspecific arousal. Mismatch at F1 elicits a nonspecific event at F2. Call this event nonspecific arousal. N200 ERP Naatanen etal: 1. bottom-up, 2. unconditionable, 3. nonspecific, 4. mismatch
  • image p215fig05.27 Every event activates both the attentional system and the orienting system. This text explains why.
    || Attentional and Orienting systems. Every event has a cue (specific) and an arousal (nonspecific) function
  • image p215fig05.28 How a mismatch between bottom-up and top-down input patterns can trigger activation of the orienting system A and, with it, a burst of nonspecific arousal to the category level.
    || Mismatch -> inhibition -> arousal -> reset. BU input orienting arousal, BU+TD mismatch arousal and reset. ART Matching Rule: TD mismatch can suppress a part of F1 STM pattern, F2 is reset if degree of match < vigilance
  • image p220fig05.29 Vigilance is a gain parameter on inputs to the orienting system that regulates whether net excitation from bottom-up inputs or inhibition from activated categories will dominate the orienting system. If excitation wins, then a memory search for a better-matching category will occur. If inhibition wins, then the orienting system will remain quiet, thereby enabling resonance and learning to occur.
    || Vigilance control [resonate and learn, reset and search]. ρ is a sensitivity or gain parameter
  • image p221fig05.30 When a predictive disconfirmation occurs, vigilance increases enough to drive a search for a more predictive category. If vigilance increases just enough to exceed the analog match between features that survive top-down matching and the entire bottom-up input pattern, then minimax learning occurs. In this case, the minimum amount of category generalization is given up to correct the predictive error.
    || Match tracking realizes minimax learning principle. Given a predictive error, vigilance increases just enough to trigger search and thus sacrifices the minimum generalization to correct the error ... and enables expert knowledge to be incrementally learned. predictive error -> vigilance increase just enough -> minimax learning
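A compact sketch of search with match tracking (simplified by me to a single ordered pass using the Fuzzy ART choice function T_j = |I ^ w_j| / (alpha + |w_j|); the categories and predictions are illustrative): a category that wins but predicts wrongly raises vigilance just above its own match ratio, the minimal increase that forces further search.

```python
import numpy as np

def artmap_search(I, weights, predictions, correct, rho=0.0, alpha=0.01):
    """Sketch of ARTMAP search with match tracking (simplified).  Categories
    are tried in order of the Fuzzy ART choice function; each must pass the
    vigilance test |I^w|/|I| >= rho.  A category that passes but predicts
    the wrong outcome raises vigilance just above its own match ratio,
    sacrificing the least generalization needed to continue the search."""
    eps = 1e-3
    choice = [np.minimum(I, w).sum() / (alpha + w.sum()) for w in weights]
    for j in np.argsort(choice)[::-1]:          # most active category first
        match = np.minimum(I, weights[j]).sum() / I.sum()
        if match < rho:
            continue                            # fails vigilance: reset
        if predictions[j] == correct:
            return j                            # resonance: predictive success
        rho = match + eps                       # match tracking
    return None                                 # no category fits: recruit a new one

I = np.array([1.0, 1.0, 0.0, 0.0])
weights = [np.array([1.0, 0.0, 0.0, 0.0]),     # specific category, predicts 'A'
           np.array([1.0, 1.0, 1.0, 0.0])]     # broader category, predicts 'B'
predictions = ['A', 'B']
```

When the correct outcome is 'B', the specific category wins first, fails predictively, and match tracking raises ρ just past its 0.5 match ratio, so search passes to the broader category, which matches fully and predicts correctly.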
  • image p221fig05.31 A system like Fuzzy ARTMAP can learn to associate learned categories in one ART network with learned categories in a second ART network. Because both bottom-up and top-down interactions occur in both networks, a bottom-up input pattern to the first ART network can learn to generate a top-down output pattern from the second ART network.
    || Fuzzy ARTMAP. Match tracking realizes minimax learning principle: vigilance increases to just above the match ratio of prototype / exemplar, thereby triggering search
  • image p224fig05.32 Learning the alphabet with two different levels of vigilance. The vigilance in column (b) is higher than in column (a), leading to more concrete categories with less abstract prototypes. See the text for details.
    ||
  • image p225fig05.33 Some early ARTMAP benchmark studies. These successes led to the use of ARTMAP, and many variants that we and other groups have developed, in many large-scale applications in engineering and technology, a use that has not abated even today.
    || see Early ARTMAP benchmark studies
  • image p225fig05.34 ARTMAP was successfully used to learn maps of natural terrains with many advantages over those of mapping projects that used AI expert systems. The advantages are so great that many mapping projects started to use this technology.
    || AI expert system - 1 year: field identification of natural regions; derivation of ad hoc rules for each region by expert geographers; correct 80,000 of 250,000 site labels; 230m (site-level) scale. ARTMAP system - 1 day: rapid, automatic, no natural regions or rules; confidence map; 30m (pixel-level) scale can see roads; equal accuracy at test sites
  • image p226fig05.35 I had shown in 1976 how a competitive learning or self-organizing map model could undergo catastrophic forgetting if the input environment was sufficiently dense and nonstationary, as illustrated by Figure 5.18. Later work with Gail Carpenter showed how, if the ART Matching Rule was shut off, repeating just four input patterns in the correct order could also cause catastrophic forgetting by causing superset recoding, as illustrated in Figure 5.36.
    || Code instability input sequences. D C A; B A; B C = ; |D|<|B|<|C|; where |E| is the number of features in the set E. Any set of input vectors that satisfy the above conditions will lead to unstable coding if they are periodically presented in the order ABCAD and the top-down ART Matching Rule is shut off.
  • image p226fig05.36 Column (a) shows catastrophic forgetting when the ART Matching Rule is not operative. It is due to superset recoding. Column (b) shows how category learning quickly stabilizes when the ART Matching Rule is restored.
    || Stable and unstable learning, superset recoding
  • image p228fig05.37 A macrocircuit of the neurotrophic Spectrally Timed ART, or nSTART, model. I developed nSTART with my PhD student Daniel Franklin. It proposes how adaptively timed learning in the hippocampus, bolstered by Brain Derived Neurotrophic Factor, or BDNF, helps to ensure normal memory consolidation.
    || habituative gates, CS, US, Thalamus (sensory cortex, category learning, conditioned reinforcer learning, adaptively timed learning and BDNF), Amygdala (incentive motivation learning), Hippocampus (BDNF), Prefrontal Cortex (attention), Pontine nuclei, Cerebellum (adaptively timed motor learning)
  • image p230fig05.38 The Synchronous Matching ART, or SMART, model includes spiking neurons in a laminar cortical hierarchy. I developed SMART with my PhD student Massimiliano Versace. By unlumping LAMINART to include spiking neurons, finer details of neurodynamics, such as the existence of faster gamma oscillations during good enough matches, and slower beta oscillations during bad enough mismatches, could be shown as emergent properties of network interactions.
    || Second order thalamus -> specific thalamic nucleus -> thalamic reticular nucleus -> neocortical laminar circuit [6II, 6I, 5, 2/3, 1] -> Higher order cortex. Similar for First order thalamus -> First order cortex, with interconnection to Second order, nonspecific thalamic nucleus
  • image p231fig05.39 The SMART hypothesis testing and learning cycle predicts that vigilance increases when a mismatch in subcortical regions like the nonspecific thalamus activates the nucleus basalis of Meynert which, in turn, broadcasts a burst of the neurotransmitter acetylcholine, or ACh, to deeper cortical layers. Due to the way in which LAMINART proposes that cortical matching and mismatching occurs, this ACh burst can increase vigilance and thereby trigger a memory search. See the text for details.
    || [BU input, [, non]specific thalamic nucleus, thalamic reticular nucleus, neocortical laminar circuit] cart [Arousal, Reset, Search, Vigilance]
  • image p232fig05.40 Computer simulation of how the SMART model generates (a) gamma oscillations if a good enough match occurs, or (c) beta oscillations if a bad enough match occurs. See the text for details.
    || Brain oscillations during match/mismatch, data, simulation. (a) TD corticothalamic feedback increases synchrony (Sillito etal 1994) (b) Match increases γ oscillations (c) Mismatch increases θ,β oscillations
  • image p232fig05.41 (a)-(c). The sequence of interlaminar events that SMART predicts during a mismatch reset. (d) Some of the compatible neurophysiological data.
    || Mismatch causes layer 5 dendritic spikes that trigger reset. (a) Arousal causes increase in nonspecific thalamic nuclei firing rate and layer 5 dendritic and later somatic spikes (Larkum and Zhu 2002, Williams and Stuart 1999) (b) Layer 5 spikes reach layer 4 via layer 6i and inhibitory neurons (Lund and Boothe 1975, Gilbert and Wiesel 1979) (c) habituative neurotransmitters in layer 6i shift the balance of active cells in layer 4 (Grossberg 1972, 1976) (d) Dendritic stimulation fires layer 5 (Larkum and Zhu 2002); stimulation of apical dendrites by nonspecific thalamus
  • image p233fig05.42 Mismatch-induced beta oscillations have been reported in at least three parts of the brain: V1, V4, and hippocampus. Although there may be other reasons for beta oscillations in the brain, those that are caused by a mismatch should be studied in concert with the gamma oscillations that occur during a good enough match. See the text for details.
    || Is there evidence for the [gamma, beta] prediction? Yes, in at least three parts of the brain, (Buffalo EA, Fries P, Landman R, Buschman TJ, Desimone R 2011, PNAS 108, 11262-11267) Does this difference in average oscillation frequencies in the superficial and deep layers reflect layer 4 reset? Superficial recording γ (gamma), Deep recording β (beta) (Berke etal 2008, hippocampus; Buschman and Miller 2009, FEF)
  • image p236fig05.43 The activation of the nucleus basalis of Meynert, and its subsequent release of ACh into deeper layers of neocortex, notably layer 5, is assumed to increase vigilance by reducing afterhyperpolarization (AHP) currents.
    || Vigilance control: mismatch-mediated acetylcholine release (Grossberg and Versace 2008). Acetylcholine (ACh) regulation by nonspecific thalamic nuclei via nucleus basalis of Meynert reduces AHP in layer 5 and causes a mismatch/reset thereby increasing vigilance. HIGH vigilance ~ sharp code, LOW vigilance ~ coarse code
  • image p240fig05.44 When an algebraic exemplar model is realized using only local computations, it starts looking like an ART prototype model.
    || How does the model know which exemplars are in category A? BU-TD learning. How does a NOVEL test item access category A?
  • image p241fig05.45 The 5-4 category structure is one example of how an ART network learns the same kinds of categories as human learners. See the text for details.
    || 5-4 Category structure. A1-A5: closer to the (1 1 1 1) prototype; B1-B4: closer to the (0 0 0 0) prototype
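    Category learning of a structure like the 5-4 set can be sketched with a single fuzzy ART presentation step (Weber-law choice, vigilance test, fast learning). This is an illustrative sketch, not the book's simulation code, and the exemplar vectors in the usage below are placeholders, not the published 5-4 stimuli.

```python
# Illustrative fuzzy ART presentation step: choose the best category by the
# Weber-law choice function, test vigilance, then learn by fast learning
# (w <- I ^ w), or recruit an uncommitted category node.

def fuzzy_art_step(I, prototypes, rho=0.5, alpha=0.001):
    """Present input I (components in [0, 1]) to a list of category
    prototypes; return the index of the category that learns I."""
    def fand(a, b): return [min(x, y) for x, y in zip(a, b)]
    norm = sum
    # order candidate categories by the choice function T_j = |I^w_j| / (alpha + |w_j|)
    order = sorted(range(len(prototypes)),
                   key=lambda j: -norm(fand(I, prototypes[j])) / (alpha + norm(prototypes[j])))
    for j in order:
        if norm(fand(I, prototypes[j])) / norm(I) >= rho:   # vigilance test
            prototypes[j] = fand(I, prototypes[j])          # fast learning
            return j
    prototypes.append(list(I))                              # uncommitted node recruited
    return len(prototypes) - 1
```

    Higher rho yields more, narrower categories (sharper code); lower rho yields fewer, broader ones, as in the two columns of Figure 5.32.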
  • image p242fig05.46 Computer simulations of how two variants of Distributed ARTMAP incrementally learn the 5-4 category structure. See the text for details.
    || Distributed ARTMAP with [self-supervised learning, post-training LTM noise]
  • image p245fig05.47 How long-range excitatory connections and short-range disynaptic inhibitory connections realize the bipole grouping law.
    || stimulus -> boundary representation -> layer 2/3
  • image p246fig05.48 Microcircuits of the LAMINART model that I developed with Rajeev Raizada. See the text for details of how they integrate bottom-up adaptive filtering, horizontal bipole grouping, and top-down attentional matching that satisfies the ART Matching Rule.
    ||
  • image p248fig05.49 This circuit of the LAMINART model helps to explain properties of Up and Down states during slow wave sleep, and how disturbances in ACh dynamics can disrupt them.
    ||
  • image p252fig06.01 A surface-shroud resonance begins to form when the surface representations of objects bid for spatial attention. In addition to these topographic excitatory inputs, there is long-range inhibition of the spatial attention cells that determines which inputs will attract spatial attention.
    || Bottom-up spatial attention competition. [more, less] luminous perceptual surfaces -> competition -> spatial attention
  • image p253fig06.02 After bottom-up surface inputs activate spatial attentional cells, they send top-down topographic excitatory signals back to the surface representations. This recurrent shunting on-center off-surround network contrast enhances larger attentional activities while approximately normalizing the total spatial attentional activity. A surface-shroud resonance hereby forms that selects an attentional shroud, enhances the perceived contrast of the attended surface (light blue region), and maintains spatial attention on it.
    || Surface-shroud resonance. perceptual surfaces -> competition -> spatial attention. (Carrasco, Penpeci-Talgar, and Eckstein 2000, Reynolds and Desimone 2003)
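    The recurrent shunting on-center off-surround dynamics in this caption can be sketched numerically. A minimal sketch, assuming a faster-than-linear feedback signal f(x) = x^2; parameters and names are illustrative, not from the book.

```python
# One Euler step of a recurrent shunting on-center off-surround network:
#   dx_i/dt = -A*x_i + (B - x_i)*(I_i + f(x_i)) - x_i * sum_{k != i} (I_k + f(x_k))
# Faster-than-linear f contrast-enhances larger activities while the shunting
# terms keep each x_i bounded in [0, B] and approximately normalize the total.

def shunting_step(x, I, A=1.0, B=1.0, dt=0.01, f=lambda v: v * v):
    s = [f(v) for v in x]
    total = sum(I) + sum(s)
    return [xi + dt * (-A * xi + (B - xi) * (I[i] + s[i]) - xi * (total - I[i] - s[i]))
            for i, xi in enumerate(x)]
```

    Iterating from unequal inputs, the more active (more luminous) node wins the attentional competition while all activities stay bounded.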
  • image p254fig06.03 These interactions of the ARTSCAN Search model enable it to learn to recognize and name invariant object categories. Interactions between spatial attention in the Where cortical stream, via surface-shroud resonances, and object attention in the What cortical stream, that obeys the ART Matching Rule, coordinate these learning, recognition, and naming processes.
    || Retinal image -> On & OFF cell contrast normalization (retina/LGN) -> polarity [, in]sensitive contrast enhancement (V1) -> object [boundary (V2), surface (V2/V4)] -> surface contour (V2); What stream categories: volition control (BG). object boundary (V2) <-> view (ITp) <-> view integrator (ITp) <-> object (ITa) <-> [object-value (ORB), value (Amyg)] <-> name (PFC)
  • image p255fig06.04 The ARTSCAN Search model can also search for a desired target object in a scene, thereby clarifying how our brains solve the Where's Waldo problem.
    || similar illustration as Figure 06.03, with some changes to arrows
  • image p257fig06.05 A curve tracing task with monkeys was used by Roelfsema, Lamme, and Spekreijse in 1998 to demonstrate how spatial attention can flow along object boundaries. See the text for details.
    || Attention flows along curves: Roelfsema etal 1998: Macaque V1. fixation (300ms) -> stimulus (600ms RF - target curve, distractor) -> saccade. Crossed-curve condition: attention flows across junction between smoothly connected curve segments, Gestalt good continuation
  • image p258fig06.06 Neurophysiological data and simulation of how attention can flow along a curve. See the text for details.
    || Simulation of Roelfsema etal 1998, data & simulation. Attention directed only to far end of curve. Propagates along active layer 2/3 grouping to distal neurons.
  • image p258fig06.07 A top-down spotlight of attention can also be converted into a shroud. This process begins when the spotlight triggers surface filling-in within a region. Figure 6.8 shows how it is completed.
    || Reconciling spotlights and shrouds: top-down attentional spotlight becomes a shroud. spotlight of attention, surface filling-in
  • image p259fig06.08 The distributed ARTSCAN, or dARTSCAN, model includes spatial attention in both PPC and PFC, and both fast-acting attention, triggered by transient cells in Where cortical areas such as MT, and slower-acting surface-shroud resonances in What cortical areas such as V4 and PPC. See the text for details.
    || dARTSCAN spatial attention hierarchy, Fast (Where stream) Slow (What stream) (Foley, Grossberg, and Mingolla 2012). [transient cells (MT) ->, object surfaces (V4) <->] [object shrouds (PPC) <-> spatial shrouds (PPC/PFC)]
  • image p260fig06.09 Crowding in the periphery of the eye can be avoided by expanding the size and spacing of the letters to match the cortical magnification factor.
    || Crowding: visible objects and confused recognition. Accurate target recognition requires increased flanker spacing at higher eccentricity
  • image p260fig06.10 The cortical magnification factor transforms (A) Cartesian coordinates in the retina into (B) log polar coordinates in visual cortical area V1.
    ||
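    The log polar transform of Figure 6.10 can be sketched directly. A minimal sketch; the foveal constant `a` is an illustrative parameter, not a value from the book.

```python
import math

def retina_to_v1(x, y, a=0.5):
    """Map retinal Cartesian coordinates (x, y) to log-polar V1 coordinates:
    eccentricity is log-compressed (cortical magnification), angle preserved."""
    r = math.hypot(x, y)            # retinal eccentricity
    theta = math.atan2(y, x)        # polar angle
    return math.log(1.0 + r / a), theta
```

    Equal retinal steps near the fovea map to larger cortical distances than equal steps in the periphery, which is why peripheral letters must be spaced farther apart to avoid crowding.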
  • image p261fig06.11 If the sizes and distances between the letters stay the same as they are received by more peripheral parts of the retina, then all three letters may be covered by a single shroud, thereby preventing their individual perception and recognition.
    || Crowding: visible objects and confused recognition. log compression and center-surround processing cause... input same eccentricity, surface, object shroud, crowding threshold. object shrouds merge!
  • image p261fig06.12 Pop-out of the L among T's can easily occur when inspecting the picture to the left. In the picture to the right, a more serial search is needed to detect the vertical red bar due to overlapping conjunctions of features.
    ||
  • image p265fig06.13 The basal ganglia gate perceptual, cognitive, emotional, and more processes through parallel loops.
    || [motor, oculomotor, dorsolateral, ventral-orbital, anterior cingulate] vs. [Thalamus, pallidum-subs, nigra, Striatum, Cortex]
  • image p267fig06.14 Feedback from object surfaces to object boundaries uses surface contours. This feedback assures complementary consistency and enables figure-ground separation. A corollary discharge of the surface contours can be used to compute salient object feature positions.
    || Perceptual consistency and figure-ground separation.
  • image p268fig06.15 The largest salient feature signal is chosen to determine the next target position of a saccadic eye movement. This target position signal self-inhibits to enable the next most salient position to be foveated. In this way, multiple feature combinations of the object can be foveated and categorized. This process clarifies how the eyes can explore even novel objects before moving to other objects. These eye movements enable invariant categories to be learned. Each newly chosen target position is, moreover, an "attention pointer" whereby attention shifts to the newly foveated object position.
    || How are saccades within an object determined? Figure-ground outputs control eye movements via V3A! Support for prediction (Theeuwes, Mathot, and Kingstone 2010), More support: "attention pointers" (Cavanagh etal 2010), Even more support (Backus etal 2001, Caplovitz and Tse 2006, Galletti and Battaglia 1989, Nakamura and Colby 2000)
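    The choose-then-self-inhibit scanning routine described above can be sketched as a simple loop; an illustrative sketch, with hypothetical names.

```python
# Illustrative sketch of saccadic scanning of an object's salient features:
# repeatedly choose the maximally salient target position, then self-inhibit
# it so that the next most salient position can be foveated.

def scan_object(salience):
    """Return the order in which feature positions are foveated."""
    s = list(salience)
    order = []
    for _ in range(len(s)):
        i = max(range(len(s)), key=lambda k: s[k])  # winner-take-all choice
        order.append(i)
        s[i] = float('-inf')                        # self-inhibition of chosen target
    return order
```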
  • image p270fig06.16 The same target position signal that can command the next saccade also updates a gain field that predictively maintains the attentional shroud in head-centered coordinates, even before the eye movement is complete. This process keeps the shroud invariant under eye movements, so that it can continue to inhibit reset of an emerging invariant category as it is associated with multiple object views, even while the conscious surface representation shifts with each eye movement in retinotopic coordinates. This updating process is often called predictive remapping.
    || Predictive remapping of eye movements! From V3A to LIP. [spatial attention, object attention, figure-ground separation, eye movement remapping, visual search]. (Beauvillain etal 2005, Carlson-Radvansky 1999, Cavanaugh etal 2001, Fecteau & Munoz 2003, Henderson & Hollingworth 2003, Irwin 1991)
  • image p271fig06.17 Persistent activity in IT cells is just what is needed to enable view-invariant object category learning by ARTSCAN to be generalized to [view, position, size]-invariant category learning by positional ARTSCAN, or pARTSCAN. See the text for details.
    || Persistent activity in IT. Physiological data show that persistent activity exists in IT (Fuster and Jervey 1981, Miyashita and Chang 1988, Tomita etal 1999). Adapted from (Tomita etal 1999 Nature)
  • image p272fig06.18 The pARTSCAN model can learn [view, position, size]-invariant categories by adding view category integrator cells that have the properties of persistent neurons in IT. These integrator cells get reset with the invariant object category, not the view category.
    || pARTSCAN: positionally-invariant object learning. (Cao, Grossberg, Markowitz 2011). IT cells with persistent activities are modeled by view category integrators in ITp. View-specific category cells are RESET as the eyes move within the object. View category integrator cells are NOT RESET when the view-specific category is reset. They are RESET along with invariant object category cells when a spatial attention shift occurs.
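    The pARTSCAN reset rules in this caption amount to a small state machine, sketched below; an illustrative reading with hypothetical names, not code from the model.

```python
# Illustrative sketch of pARTSCAN reset rules: view-specific categories reset
# on saccades within an object, while view integrators and the invariant
# object category reset only when spatial attention shifts (shroud collapse).

def partscan_reset(event, state):
    """Apply the reset rule for 'saccade_within_object' or 'attention_shift'
    to a dict with keys view_category, view_integrator, invariant_category."""
    if event == 'saccade_within_object':
        state['view_category'] = None        # view category is reset
        # view integrator and invariant object category persist
    elif event == 'attention_shift':
        state['view_category'] = None
        state['view_integrator'] = None      # reset with the invariant category
        state['invariant_category'] = None
    return state
```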
  • image p272fig06.19 The various parts of this figure explain why persistent activity is needed in order to learn positionally-invariant object categories, and how this fails when persistent activity is not available. See the text for details.
    ||
  • image p273fig06.20 pARTSCAN can simulate the IT cell recoding that Li and DiCarlo reported in their swapping experiments because the swapping procedure happens without causing a parietal reset burst to occur. Thus the originally activated invariant category remains activated and can get associated with the swapped object features.
    || Simulation of Li and DiCarlo swapping data. data (Li and DiCarlo 2008), model (Cao, Grossberg, Markowitz 2011). normalized response vs. exposure (swaps and/or hours)
  • image p274fig06.21 pARTSCAN can also simulate the trade-off in IT cell responses between position invariance and selectivity that was reported by Zoccolan etal 2007. This trade-off limits the amount of position invariance that can be learned by a cortical area like V1 that is constrained by the cortical magnification factor.
    || Trade-off in IT cell response properties. Inferotemporal cortex cells with greater position invariance respond less selectively to natural objects. invariance-tolerance, selectivity-sparseness. data (Zoccolan etal 2007) model (Grossberg, Markowitz, Cao 2011). position tolerance (PT, degrees) vs sparseness (S)
  • image p274fig06.22 pARTSCAN can simulate how IT cortex processes image morphs, when it learns with high vigilance. See the text for details.
    || Akrami etal simulation: a case of high vigilance. tested on morphs between image pairs
  • image p275fig06.23 Data from (Akrami etal 2009) and our simulation of it. See the text for details.
    || IT responses to image morphs. data vs model
    p370 Chapter 11 means (Grossberg 2021) page 370, Chapter 11
    p002sec Illusion and reality means (Grossberg 2021) page 2, section Illusion and reality
    p013fig01.09 means (Grossberg 2021) page 13, Figure 1.09 (1.9 as in book)
    p030tbl01.02 means (Grossberg 2021) page 30, Table 1.02 (1.2 as in book)
    p111c2h0.5 means (Grossberg 2021) page 111, column 2, height from top as fraction of page height
    || text... are notes in addition to [figure, table] captions, mostly comprised of text within the image, but also including quotes of text in the book. Rarely, they include comments by Howell preceded by "Howell". The latter are distinct from "readers notes" (see, for example: reader Howell notes).
    p044 Howell: grepStr 'conscious' means a comment by reader Howell, extracted using the grep string shown, referring to page 44 in (Grossberg 2021)
  • image p275fig06.24 Left and right eye stereogram inputs are constructed to generate percepts of objects in depth. These percepts include the features of the objects, not only their relative depths, a property that is not realized in some other models of stereopsis. See the text for details.
    || Stereogram surface percepts: surface lightnesses are segregated in depth (Fang, Grossberg 2009). Contrast with algorithms that just compute disparity matches and let computer code build the surface, eg (Marr, Poggio 1974)
  • image p276fig06.25 In addition to the gain field that predictively maintains a shroud in head-centered coordinates during saccades, there are gain fields that predictively maintain binocular boundaries in head-centered coordinates so that they can maintain binocular fusion during saccades and control the filling-in of surfaces in retinotopic coordinates.
    || Surface-shroud resonance.
  • image p277fig06.26 Gain fields also enable predictive remapping that maintains binocular boundary fusion as the eyes move between objects. See the text for details.
    || Predictive remapping maintains binocular boundary fusion even as eyes move between objects. retinotopic boundary -> invariant boundary (binocular)
  • image p278fig06.27 A surface-shroud resonance through the Where stream enables us to consciously see an object while a feature-category resonance into the What stream enables us to recognize it. Both kinds of resonances can synchronize via visual cortex so that we can know what an object is when we see it.
    || What kinds of resonances support knowing vs seeing? What stream [knowing, feature-prototype resonance], Where stream [seeing, surface-shroud resonance]
  • image p278fig06.28 If the feature-category resonances cannot form, say due to a lesion in IT, then a surface-shroud resonance can still support conscious seeing of an attended object, and looking at or reaching for it, even if the individual doing so knows nothing about the object, as occurs during visual agnosia. The surface-shroud resonance supports both spatial attention and releases commands that embody the intention to move towards the attended object.
    || What kinds of resonances support knowing vs seeing? visual agnosia: reaching without knowing Patient DF (Goodale etal 1991). Attention and intention both parietal cortical functions (Andersen, Essick, Siegel 1985; Gnadt, Andersen 1988; Snyder, Batista, Andersen 1997, 1998)
  • image p283fig07.01 The usual boundary processing stages of [simple, complex, hypercomplex, bipole] cells enable our brains to correct uncontrolled persistence of previously excited cells just by adding habituative transmitter gates, or MTM traces, at appropriate places in the network.
    || Boundary processing with habituative gates. spatial competition with habituative gates, orientational competition: gated dipole, bipole grouping
  • image p284fig07.02 Psychophysical data (top row) and simulation (bottom row) of how persistence decreases with flash illuminance and duration.
    || Persistence data and simulations. (Francis, Grossberg, Mingolla 1994 Vision Research, 34, 1089-1104). Persistence decreases with flash illuminance and duration (Bowen, Pola, Matin 1974; Breitmeyer 1984; Coltheart 1980). Higher luminance or longer duration habituates the gated dipole ON channel more. Causes larger and faster rebound in the OFF channel to shut persisting ON activity off.
  • image p285fig07.03 Persistence decreases with flash illuminance and duration due to the way in which habituative transmitters regulate the strength of the rebound in response to offset of a stimulating input, and how this rebound inhibits previously activated bipole cells.
    || Persistence data and simulations (Francis, Grossberg, Mingolla 1994 Vision Research, 34, 1089-1104). Persistence decreases with flash illuminance and duration. Horizontal input excites a horizontal bipole cell, which supports persistence. Offset of the horizontal input causes a rebound of activity in the vertical pathway, which inhibits the horizontal bipole cell, thereby terminating persistence.
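    The habituative gated dipole that produces these rebounds can be sketched numerically. A minimal sketch, assuming a tonic arousal input to both channels and slow transmitter habituation; parameters and names are illustrative, not from the papers cited above.

```python
# Illustrative habituative gated dipole: ON/OFF channels gated by slowly
# habituating transmitters z. Offset of the phasic ON input J produces a
# transient antagonistic rebound in the OFF channel, which resets persistence.

def gated_dipole(J, steps, dt=0.01, A=1.0, B=1.0, K=0.1, tonic=0.5):
    """Return a list of (ON output, OFF output) over time; the phasic input J
    is switched off halfway through the simulation."""
    z_on = z_off = 1.0
    out = []
    for t in range(steps):
        j = J if t < steps // 2 else 0.0                    # input offset at midpoint
        s_on, s_off = tonic + j, tonic
        z_on += dt * (A * (B - z_on) - K * s_on * z_on)     # ON transmitter habituates faster
        z_off += dt * (A * (B - z_off) - K * s_off * z_off)
        out.append((s_on * z_on - s_off * z_off,            # ON channel output
                    s_off * z_off - s_on * z_on))           # OFF channel output (rebound)
    return out
```

    Stronger or longer ON inputs habituate z_on more, yielding the larger and faster OFF rebounds that shorten persistence.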
  • image p286fig07.04 Illusory contours persist longer than real contours because real contours have more inducers whose rebound at contour offset can cause faster boundary reset. Illusory contours also take longer to form than real contours, which explains the increasing portion of the curve.
    || Persistence data and simulations (Meyer, Ming 1988; Reynolds 1981). Increasing portion of curve is due to formation time of the illusory contour. Longer persistence is due to fewer bottom-up inducers of an illusory contour that has the same length as a real contour: only illuminance-derived edges generate reset signals. When bottom-up inducers are inhibited by OFF cell rebounds, their offset gradually propagates to the center of the illusory contour.
  • image p286fig07.05 This figure shows the propagation through time of illusory contour offset, from the rebounded cells that received direct inputs to the center of the contour.
    || Persistence data and simulations. Illusory contours persist longer than real contours (Meyer, Ming 1988; Reynolds 1981). When bottom-up inducers are inhibited by OFF cell rebounds, their offset gradually propagates to the center of the illusory contour.
  • image p287fig07.06 The relative durations of persistence that occur due to an adaptation stimulus of the same or orthogonal orientation follow from the properties of the habituative gated dipoles that are embedded in the boundary completion system.
    || Persistence data and simulations. Change in persistence depends on whether adaptation stimulus has same or orthogonal orientation as test grating (Meyer, Lawson, Cohen 1975). If adaptation stimulus and test stimulus have the same orientation, they cause cumulative habituation, which causes a stronger reset signal, hence less persistence. When they are orthogonal, the competition on the ON channel is less, hence more persistence.
  • image p287fig07.07 Persistence increases with distance between a target and a masking stimulus due to weakening of the spatial competition in the first competitive stage of hypercomplex cells.
    || Persistence data and simulations. Persistence increases with distance between a target and a masking stimulus (Farrell, Pavel, Sperling 1990). There is less spatial competition from the masker to the target when they are more distant, hence the target is more persistent.
  • image p290fig08.01 Motion in a given direction pools all possible contrast-sensitive sources of information that are moving in that direction.
    || /home/bill/web/Neural nets/Grossberg/Grossbergs list of [figure, table]s.HtmWeb.html:447:
  • image p291fig08.02 Complex cells can respond to motion in opposite directions and from features with opposite contrast polarities.
    ||
  • image p292fig08.03 The MacKay and waterfall illusion aftereffects dramatically illustrate the different symmetries that occur in the orientational form stream and the directional motion stream.
    || Form and motion aftereffects. different inhibitory symmetries govern orientation and direction. illusions: [Form- MacKay 90°, Motion- waterfall 180°]. stimulus, aftereffect percept
  • image p293fig08.04 Most local motion signals on a moving object (red arrows) may not point in the direction of the object's real motion (green arrows). This problem besets every neuron due to the fact that it receives signals only in a space-limited aperture.
    || Most motion signals may not point in an object's direction of motion. Aperture problem. EVERY neuron's receptive field experiences an aperture problem. How does the brain use the small number of [correct, unambiguous] motion signals to compute an object's motion direction?
  • image p295fig08.05 The perceived direction of an object is derived either from a small subset of feature tracking signals, or by voting among ambiguous signals when feature tracking signals are not available.
    || Aperture problem. Barberpole illusion (Wallach). How do sparse feature tracking signals capture so many ambiguous motion signals to determine the perceived motion direction?
  • image p296fig08.06 In the simplest example of apparent motion, two dots turning on and off out of phase in time generate a compelling percept of continuous motion between them.
    || Simplest long-range motion paradigm. ISI- interstimulus interval, SOA- stimulus onset asynchrony
  • image p296fig08.07 When two flashes turn on and off out of phase with the correct range of interstimulus intervals, and not too far from one another, then either beta motion or phi motion is perceived.
    || Beta and Phi motion percepts. Beta motion: percepts of continuous motion of a well-defined object across empty intervening space. Phi motion: sense of "pure" motion without a concurrent percept of moving object. (Exner 1875) http://www.yorku.ca/eye/balls.htm
  • image p297fig08.08 When a second flash is more intense than the first flash, then apparent motion may occur from the second to the first flash.
    || Delta motion: motion from the second to the first flash. Data: (Kolers 1972; Korte 1915). Simulation: (Grossberg, Rudd 1992). This occurs when the luminance or contrast of the second flash is large compared to that of the first flash. Sustained and transient cells obey shunting dynamics whose averaging rates speed up with output intensity. The first flash to wane is the one that will be the source of the G-wave.
  • image p297fig08.09 Simulation of motion in opposite directions that is perceived when two later flashes occur on either side of the first flash.
    || Split motion. Data: (H.R. Silva 1926), Simulation: (Grossberg, Rudd 1992)
  • image p298fig08.10 Simulation of the motion speed-up that is perceived when flash duration decreases.
    || "The less you see it, the faster it moves". Data: (Giaschi, Anstis 1989), Simulation: (Grossberg, Rudd 1992). ISI = 0, flash duration decreases; SOA = constant, flash duration decreases
  • image p298fig08.11 This formotion percept is a double illusion due to boundary completion in the form stream followed by long-range apparent motion using the completed boundaries in the motion stream.
    || Form-motion interactions. Apparent motion of illusory contours (Ramachandran 1985). Double illusion! Illusory contour is created in form stream V1-V2. Apparent motion of illusory contours occurs in motion stream due to a V2-MT interaction.
  • image p300fig08.12 A single flash activates a Gaussian receptive field across space whose maximum is chosen by a winner-take-all recurrent on-center off-surround network.
    || Gaussian receptive fields are sufficient! (Grossberg, Rudd 1992). Single flash. Suppose that a single flash causes a narrow peak of activity at the position where it occurs. It generates output signals through a Gaussian filter that produces a Gaussian activity profile at the next processing stage. A recurrent on-center off-surround network chooses the maximum activity and suppresses smaller activities. Winner-take-all
  • image p300fig08.13 As a flash waxes and wanes through time, so too do the activities of the cells in its Gaussian receptive field. Because the maximum of each Gaussian occurs at the same position, nothing is perceived to move.
    || Temporal profile of a single flash. Suppose that a single flash quickly turns on to maximum activity, stays there for a short time, and then shuts off. It causes an increase in activity, followed by an exponential decay of activity. The corresponding Gaussian profile waxes and wanes through time. Since the peak position of the Gaussian does not change through time, nothing moves.
  • image p300fig08.14 Visual inertia depicts how the effects of a flash decay after the flash shuts off.
    || Inertia (%) vs ISI (msec)
  • image p301fig08.15 If two flashes occur in succession, then the cell activation that is caused by the first one can be waning while the activation due to the second one is waxing.
    || Temporal profile of two flashes. If two flashes occur in succession, the waning of the activity due to the first flash may overlap with the waxing of the activity due to the second flash.
  • image p301fig08.16 The sum of the waning Gaussian activity profile due to the first flash and the waxing Gaussian activity profile due to the second flash has a maximum that moves like a travelling wave from the first to the second flash.
    || Travelling wave (G-wave): long-range motion. If the Gaussian activity profiles of two flashes overlap sufficiently in space and time, then the sum of the Gaussian produced by the waning of the first flash and the Gaussian produced by the waxing of the second flash can produce a single-peaked travelling wave from the position of the first flash to that of the second flash. The wave is then processed through a WTA choice network (Winner Take All). The resulting continuous motion percept is both long-range and sharp.
  • image p302fig08.17 An important constraint on whether long-range apparent motion occurs is whether the Gaussian kernel is broad enough to span the distance between successive flashes.
    || Motion speed-up with increasing distance: For a fixed ISI, how does perceived velocity increase with distance between the flashes? Gaussian filter : Gp = exp{ -(j-i)^2 / (2*K^2) }. The largest separation, L_crit, for which sufficient spatial overlap between two Gaussians centered at locations i and j will exist to support a travelling wave of summed peak activity is : L_crit = 2*K
  • image p302fig08.18 This theorem shows how far away (L), given a fixed Gaussian width, two flashes can be to generate a wave of apparent motion between them.
    || G-wave properties (Grossberg 1977). Let flashes occur at positions i=0 and i=L. Suppose that d[dt: x0] = -A*x0 + J0; d[dt: xL] = -A*xL + JL; Define G(w,t) ...; Theorem 1 max_w G(w,t) moves continuously through time from w=0 to w=L if and only if L <= 2*K.
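  Theorem 1 can be checked numerically. Below is a minimal sketch, not the book's own simulation code: it assumes unit-amplitude flashes, a decay rate A as in the equations above, a first flash that wanes as exp(-A*t) while the second waxes as 1 - exp(-A*t), and the Gaussian filter Gp = exp{ -(j-i)^2 / (2*K^2) }. All parameter values are illustrative.

```python
import numpy as np

# Sketch of the G-wave: two flashes at positions 0 and L, filtered through a
# Gaussian of width K. The summed profile
#   G(w, t) = x0(t)*exp(-w^2/(2K^2)) + xL(t)*exp(-(w-L)^2/(2K^2))
# has a maximum that travels continuously from w=0 to w=L iff L <= 2*K.

def gwave_peak_path(L, K, A=1.0, T=5.0, steps=200):
    """Track the argmax of G(w, t) as flash 1 wanes and flash 2 waxes."""
    w = np.linspace(-K, L + K, 2000)
    peaks = []
    for t in np.linspace(0.01, T, steps):
        x0 = np.exp(-A * t)            # first flash decaying after its offset
        xL = 1.0 - np.exp(-A * t)      # second flash charging up
        G = x0 * np.exp(-w**2 / (2 * K**2)) + xL * np.exp(-(w - L)**2 / (2 * K**2))
        peaks.append(w[np.argmax(G)])
    return np.array(peaks)

# L <= 2K: the peak sweeps continuously from 0 to L (long-range apparent motion).
path = gwave_peak_path(L=1.5, K=1.0)
print(path[0], path[-1])   # starts near w=0, ends near w=L
```

  For L > 2*K the two Gaussians no longer overlap enough and the argmax jumps from 0 to L without visiting intermediate positions, so no motion is perceived.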
  • image p303fig08.19 The dashed red line divides combinations of flash distance L and Gaussian width K into two regions of no apparent motion (above the line) and apparent motion (below the line).
    || No motion vs motion at multiple scales.
  • image p303fig08.20 The G-wave speeds up with the distance between flashes at a fixed delay, and has a consistent motion across multiple spatial scales.
    || G-wave properties (Grossberg 1977). Theorem 2 (Equal half-time property) The time at which the motion signal reaches position w=L/2. Apparent motion speed-up with distance: this half-time is independent of the distance L between the two flashes. Consistent motion across scales: half-time is independent of the scale size K. Method of proof: elementary algebra and calculus (Grossberg, Rudd 1989 appendix)
  • image p304fig08.21 A computer simulation of the equal half-time property whereby the apparent motions within different scales that respond to the same flashes all reach the half-way point in the motion trajectory at the same time.
    || Equal half-time property: how multiple scales cooperate to generate motion percept. Travelling waves from Gaussian filters of different sizes bridge the same distance in comparable time. The time needed to bridge half the distance between flashes is the same.
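  The equal half-time property can also be verified numerically. A minimal sketch, under the same illustrative assumptions as the G-wave sketch above (unit flashes, waning exp(-A*t), waxing 1 - exp(-A*t)): the peak crosses the midpoint w = L/2 when the two flash activities are equal, at t = ln(2)/A, regardless of L and K.

```python
import numpy as np

# Find the time at which the travelling G-wave peak reaches the midpoint L/2.
def half_time(L, K, A=1.0, T=5.0, steps=4000):
    w = np.linspace(0.0, L, 1000)
    for t in np.linspace(1e-3, T, steps):
        x0, xL = np.exp(-A * t), 1.0 - np.exp(-A * t)
        G = x0 * np.exp(-w**2 / (2 * K**2)) + xL * np.exp(-(w - L)**2 / (2 * K**2))
        if w[np.argmax(G)] >= L / 2:
            return t
    return None

# Different flash separations L and scale sizes K (both with L <= 2*K) give
# the same half-time, near ln(2)/A ~ 0.693 for A = 1.
t1 = half_time(L=1.0, K=0.6)
t2 = half_time(L=1.8, K=1.0)
print(t1, t2)
```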
  • image p304fig08.22 Data (top image) and simulation (bottom image) of Korte's laws. The laws raise the question of how ISIs in the hundreds of milliseconds can cause apparent motion.
    || Korte's Laws, Data: (Korte 1915) Simulation: (Francis, Grossberg 1996)
  • image p305fig08.23 Despite its simplicity, the Ternus display can induce one of four possible percepts, depending on the ISI.
    || Ternus motion. ISI [small- stationary, intermediate- element, larger- group] motion http://en.wikipedia.org/wiki/Ternus_illusion
  • image p305fig08.24 When each stimulus has an opposite contrast relative to the background, element motion is eliminated and replaced by group motion at intermediate values of the ISI.
    || Reverse-contrast Ternus motion. ISI [small- stationarity, intermediate- group (not element!), larger- group] motion.
  • image p306fig08.25 The Motion BCS model can explain and simulate all the long-range apparent motion percepts that this chapter describes.
    || Motion BCS model (Grossberg, Rudd 1989, 1992) Level 1: discount illuminant; Level 2: short-range filter, pool sustained simple cell inputs with like-oriented receptive fields aligned in a given direction. Sensitive to direction-of-contrast; Level 3: Transient cells with unoriented receptive field. Sensitive to direction-of-change
  • image p306fig08.26 The 3D FORMOTION model combines mechanisms for determining the relative depth of a visual form with mechanisms for both short-range and long-range motion filtering and grouping. A formotion interaction from V2 to MT is predicted to enable the motion stream to track objects moving in depth.
    || 3D Formotion model (Chey etal 1997; Grossberg etal 2001; Berzhanskaya etal 2007). Form [LGN contours -> simple cells orientation selectivity -> complex cells (contrast pooling, orientation selectivity, V1) -> hypercomplex cells (end-stopping, spatial sharpening) <-> bipole cells (grouping, cross-orientation competition) -> depth-separated boundaries (V2)], Motion: [LGN contours -> transient cells (directional stability, V1) -> short-range motion filter -> spatial competition -> long-range motion filter and boundary selection in depth (MT) <-> directional grouping, attentional priming (MST)]
  • image p307fig08.27 The distribution of transients through time at onsets and offsets of Ternus display flashes helps to determine whether element motion or group motion will be perceived.
    || Ternus motion. Element motion: zero or weak transients at positions 2 and 3; Group motion: strong transients at positions 2 and 3. Conditions that favor visual persistence and thus perceived stationarity of element (2,3) favor element motion (Braddick, Adlard 1978; Breitmeyer, Ritter 1986; Pantle, Petersik 1980)
  • image p308fig08.28 The Gaussian distributions of activity that arise from the three simultaneous flashes in a Ternus display add to generate a maximum value at their midpoint. The motion of this group gives rise to group motion.
    || Ternus group motion simulation. If L < 2*K, Gaussian filter of three flashes forms one global maximum.
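  The single-global-maximum claim is easy to check. A minimal sketch, with three unit flashes at positions 0, L, and 2L and an illustrative choice of L and K satisfying L < 2*K:

```python
import numpy as np

# Three simultaneous Ternus flashes, each filtered by a Gaussian of width K.
# When L < 2*K the summed profile is unimodal, peaking at the middle flash,
# so the WTA stage selects one group position and group motion results.
def summed_profile(L, K, n=4001):
    w = np.linspace(-2 * L, 4 * L, n)
    G = sum(np.exp(-(w - c)**2 / (2 * K**2)) for c in (0.0, L, 2 * L))
    return w, G

w, G = summed_profile(L=1.0, K=1.0)   # L < 2*K
print(w[np.argmax(G)])                # single peak at the midpoint, w = L
```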
  • image p310fig08.29 When the individual component motions in (A) and (B) combine into a plaid motion (C), both their perceived direction and speed changes.
    ||
  • image p311fig08.30 The data of (Castet etal 1993) in the left image was simulated in the right image by the 3D FORMOTION model that I developed with my PhD student Jonathan Chey. These data provide insight into how feature tracking signals propagate from the ends of a line to its interior, where they capture consistent motion directional signals and inhibit inconsistent ones.
    || Solving the aperture problem. A key design problem: How do amplified feature tracking signals propagate within depth to select the correct motion directions at ambiguous positions? This propagation from feature tracking signals to the line interior determines perceived speed in Castet etal data, which is why speed depends on line tilt and length. Data: (Castet etal 1993), Simulation: (Chey etal 1997)
  • image p311fig08.31 Processing stages of the Motion BCS convert locally ambiguous motion signals from transient cells into a globally coherent percept of object motion, thereby solving the aperture problem.
    || Why are so many motion processing stages needed? change sensitive receptors -> directional transient cells -> directional short-range filter -> spatial and directional competition -> directional long-range filter (MT) <-> Directional grouping network
  • image p312fig08.32 Schematic of motion filtering circuits.
    || Level 1: Change sensitive units -> Level 2: transient cells -> Level 3: short-range spatial filters -> Level 4: intra-scale competition -> Level 5: inter-scale competition
  • image p312fig08.33 Processing motion signals by a population of speed-tuned neurons.
    ||
  • image p314fig08.34 The VISTARS model for visually-based spatial navigation. It uses the Motion BCS as a front end and feeds its output signals into two computationally complementary cortical processing streams for computing optic flow and target tracking information.
    || VISTARS navigation model (Browning, Grossberg, Mingolla 2009). Use FORMOTION model as front end for higher level navigational circuits: input natural image sequences -> estimate heading (MT+)-MSTd -> additive processing -> estimate object position (MT-)-MSTv direction and speed subtractive processing -> Complementary Computing. [optic flow navigation, object tracking]
  • image p315fig08.35 The output signals from the directional grouping network obey the ART Matching Rule. They thereby select consistent motion directional signals while suppressing inconsistent ones, and do not distort the speed estimates that the spared cells code. The aperture problem is hereby solved by the same mechanism that dynamically stabilizes the learning of directional grouping cells.
    || How to select correct direction and preserve speed estimates? Prediction: Feedback from MSTv to MT- obeys ART Matching Rule; Top-down, modulatory on-center, off-surround network (Grossberg 1976, 1980; Carpenter, Grossberg 1987, 1991); Explains how directional grouping network can stably develop and how top-down directional attention can work. (Cavanagh 1992; Goner etal 1986; Sekuler, Ball 1977; Stelmach etal 1994). Directional grouping network (MSTv) <-> Directional long-range filter (MT). Modulatory on-center selects chosen direction and preserves speed. Off-surround inhibits incompatible directions.
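  The ART Matching Rule circuit named in this prediction can be sketched in a few lines. This is a toy illustration, not the model's actual equations: the 8-direction MT activity vector, the gain values, and the divisive off-surround form are all assumptions for demonstration.

```python
import numpy as np

# Toy sketch of a top-down, modulatory on-center, off-surround signal (the
# ART Matching Rule) from a directional grouping cell (MSTv) to direction-
# tuned MT cells. The on-center can only amplify already-active cells in the
# chosen direction (modulatory, so it preserves the speed they code); the
# off-surround suppresses incompatible directions.
def art_match(bottom_up, chosen, on_gain=0.5, off_gain=0.7):
    x = np.array(bottom_up, float)
    top_down = np.zeros_like(x)
    top_down[chosen] = 1.0
    # modulatory on-center: multiplies bottom-up activity, never creates it
    x = x * (1.0 + on_gain * top_down)
    # off-surround: divisively suppresses directions outside the chosen one
    x = x / (1.0 + off_gain * (1.0 - top_down))
    return x

mt = [0.2, 0.9, 0.4, 0.1, 0.0, 0.0, 0.1, 0.3]   # 8 direction cells in MT
out = art_match(mt, chosen=1)
print(out[1] > mt[1], out[0] < mt[0])            # prints: True True
```

  Note that a silent cell (e.g. `mt[4] = 0`) stays silent after feedback: a purely modulatory on-center cannot activate cells on its own, which is what lets the grouping network develop stably.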
  • image p316fig08.36 How the directional grouping network, notably properties of the ART Matching Rule, enables a small set of amplified feature tracking signals at the ends of a line to select consistent directions in the line interior, while suppressing inconsistent directions.
    || Motion capture by directional grouping feedback. Directional grouping network (MSTv) <-> Directional long-range filter (MT). It takes longer to capture ambiguous motion signals in the line interior as the length of the line increases cf (Castet etal 1993)
  • image p317fig08.37 Processing stages that transform the transient cell inputs in response to a tilted moving line into a global percept of the object's direction of motion. The orientations of the lines denote the directional preferences of the corresponding cells, whereas line lengths are proportional to cell activities.
    || Motion capture by directional grouping feedback (Chey, Grossberg, Mingolla 1997). thresholded short-range filter outputs, directional long-range filter cell activities at 3 times, directional short-range filter cells, directionally-sensitive transient cells
  • image p319fig08.38 The neurophysiological data from MT (left image) confirms the prediction embodied in the simulation of MT (right image) concerning the fact that it takes a long time for MT to compute an object's real direction of motion.
    || Solving the aperture problem takes time. MT Data (Pack, Born 2001), MT simulation (Chey, Grossberg, Mingolla 1997)
  • image p320fig08.39 Simulation of the barberpole illusion direction field at two times. Note that the initial multiple directions due to the feature tracking signals at the contiguous vertical and horizontal sides of the barberpole (upper image) get supplanted by the horizontal direction of the two horizontal sides (lower image).
    || Barberpole illusion (one line) simulation
  • image p321fig08.40 Visible occluders capture the boundaries that they share with moving edges. Invisible occluders do not. Consequently, the two types of motions are influenced by different combinations of feature tracking signals.
    || Motion grouping across occluders (J. Lorenceau, D. Alais 2001). Rotating contours observed through apertures. Determine direction of a circular motion. [, in]visible occluders http://persci.mit.edu/demos/square/square.html
  • image p322fig08.41 A percept of motion transparency can be achieved by using motion grouping feedback that embodies the "asymmetry between near and far" along with the usual opponent competition between opposite motion directions.
    || Motion transparency. near: big scale; far: small scale MSTv, "Asymmetry between near and far" Inhibition from near (large scales) to far (small scales) at each position
  • image p323fig08.42 The chopsticks illusion not only depends upon how feature tracking signals are altered by visible and invisible occluders, but also upon how the form system disambiguates the ambiguous region where the two chopsticks intersect and uses figure-ground mechanisms to separate them in depth.
    || Chopsticks: motion separation in depth (Anstis 1990). [, in]visible occluders [display, percept]
  • image p324fig08.43 Attention can flow along the boundaries of one chopstick and enable it to win the orientation competition where the two chopsticks cross, thereby enabling bipole grouping and figure-ground mechanisms to separate them in depth within the form cortical stream.
    || The ambiguous X-junction. motion system. Attention propagates along chopstick and enhances cell activations in one branch of a chopstick. MT-MST directional motion grouping helps to bridge the ambiguous position.
  • image p325fig08.44 Attentional feedback from MST-to-MT-to-V2 can strengthen one branch of a chopstick (left image). Then bipole cell activations that are strengthened by this feedback can complete that chopstick's boundaries across the ambiguous X region (right image).
    || The role of MT-V1 feedback. Motion-form feedback: MT-to-V2 feedback strengthens boundaries of one bar. Bipole boundary completion: Bipole grouping helps to complete bar boundary even if motion grouping does not cross the gap.
  • image p325fig08.45 The feedback loop between MT/MST-to-V1-to-V2-to-MT/MST enables a percept of two chopsticks sliding one in front of the other while moving in opposite directions.
    || Closing formotion feedback loop. [formotion interaction, motion grouping] V1 -> V2 -> (MT <-> MST) -> V1
  • image p326fig08.46 How do we determine the relative motion direction of a part of a scene when it moves with a larger part that determines an object reference frame?
    || How do we perceive relative motion of object parts?
  • image p327fig08.47 Two classical examples of part motion in a moving reference frame illustrate the general situation where complex objects move while their multiple parts may move in different directions relative to the direction of the reference frame.
    || Two kinds of percepts and variations (Johansson 1950). Symmetrically moving inducers: each dot moves along a straight path, each part contributes equally to common motion; Duncker wheel (Duncker 1929): one dot moves on a cycloid, the other dot (the "center") moves straight, unequal contribution from parts; If the dot is presented alone: seen as cycloid; if with center: seen as if it were on the rim of a wheel.
  • image p328fig08.48 How vector subtraction from the reference frame motion direction computes the part directions.
    || How vector decomposition can explain them. Common motion subtracted from retinal motion gives part motion: [retinal, common, part] motion
  • image p328fig08.49 A directional peak shift in a directional hypercolumn determines the part directions relative to a moving reference frame.
    || What is the mechanism of vector decomposition? (Grossberg, Leveille, Versace 2011). Prediction: directional peak shift! ...specifically, a peak shift due to Gaussian lateral inhibition. [retinal, part, common, relative] motion. shunting dynamics, self-normalization, contrast gain control
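  The predicted peak shift can be illustrated with a toy directional hypercolumn. A minimal sketch, assuming Gaussian direction tuning and Gaussian lateral inhibition; the tuning widths, inhibition gain, and the choice of a 30 deg retinal direction against a 0 deg frame direction are all illustrative, not taken from the model:

```python
import numpy as np

# Toy directional hypercolumn: a Gaussian bump of excitation centered on the
# retinal motion direction, minus Gaussian lateral inhibition centered on the
# common (frame) direction. The surviving peak is pushed away from the frame
# direction, toward the part direction, which is the predicted peak shift.
dirs = np.arange(0, 360)                       # preferred directions (deg)

def bump(center, width):
    # circular distance on the direction circle
    d = np.minimum(np.abs(dirs - center), 360 - np.abs(dirs - center))
    return np.exp(-d**2 / (2 * width**2))

retinal = bump(center=30.0, width=40.0)        # dot's retinal motion direction
frame_inhib = bump(center=0.0, width=60.0)     # inhibition at the frame direction

residual = np.maximum(retinal - 0.8 * frame_inhib, 0.0)
print(dirs[np.argmax(residual)])   # peak shifted past 30 deg, away from 0 deg
```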
  • image p329fig08.50 The common motion direction of the two dots builds upon illusory contours that connect the dots as they move through time. The common motion direction signal can flow along these boundaries.
    || How is common motion direction computed? retinal motion. Bipole grouping in the form stream creates illusory contours between the dots. V2-MT formotion interaction injects the completed boundaries into the motion stream where they capture consistent motion signals. Motion of illusory contours is computed in the motion stream: cf. Ramachandran
  • image p329fig08.51 Large and small scale boundaries differentially form illusory contours between the dots and boundaries that surround each of them respectively. These boundaries capture the motion signals that they will support via V2-to-MT formotion interaction. The MST-to-MT directional peak shift has not yet occurred.
    || Large scale: near. Can bridge gap between dots to form illusory contours. Spatial competition inhibits inner dot boundaries.; Small scale: far. Forms boundaries around dots.
  • image p330fig08.52 Direction fields of the object frame (left column) and of the two dot "parts" (right column) show the correct motion directions after the peak shift top-down expectation acts.
    || Simulation of motion vector decomposition. [Larger scale (nearer depth), Small scale (farther depth)] vs [Down, Up]
  • image p330fig08.53 Simulation of the various directional signals of the left dot through time. Note the amplification of the downward directional signal due to the combined action of the short-range and long-range directional signals.
    ||
  • image p331fig08.54 The simulated part directions of the rotating dot through time after the translational motion of the frame does its work via the top-down peak shift mechanism.
    || Cycloid. Motion directions of a single dot moving slowly along a cycloid curve through time.
  • image p331fig08.55 The rightward motion of the dot that determines the frame propagates along the illusory contour between the dots and thereby dominates the motion directions along the rim as well, thereby setting the stage for the peak shift mechanism.
    || Duncker Wheel: large scale. [cycloid, center] velocity -> rightward common velocity. Stable rightward motion at the center captures motion at the rim.
  • image p332fig08.56 Simulation of the Duncker Wheel motion through time. See the text for details.
    || Duncker Wheel: small scale. Temporal procession of activity in eight directions. Wheel motion as seen when directions are collapsed.
  • image p332fig08.57 The MODE model uses the Motion BCS as its front end, followed by a saccadic target selection circuit in the model LIP region that converts motion directions into movement directions. These movement choices are also under basal ganglia (BG) control. More will be explained about the BG in Chapters 13 and 15.
    || MODE (MOtion DEcision) model (Grossberg, Pilly 2008, Vision Research). Change sensitive receptors -> directional transient cells -> directional short-range filter -> spatial and directional competition -> directional long-range filter (MT) <-> directional grouping network (MSTv) -> saccadic target selection <-> gating mechanism (BG). Representation of problem that solves the aperture problem (change sensitive receptors (CSR) -> directional grouping network (DGN, MSTv)). Gated movement choice (saccadic target selection & gating mechanism)
  • image p333fig08.58 Neurophysiological data (left image) and simulation (right image) of LIP data during correct trials on the RT task. See the text for details.
    || LIP responses during RT task correct trials (Roitman, Shadlen 2002). More coherence in favored direction causes faster cell activation. More coherence in opposite direction causes faster cell inhibition. Coherence stops playing a role in the final stages of LIP firing.
  • image p334fig08.59 Neurophysiological data (left column) and simulations (right column) of LIP responses for the FD task during both [correct, error] trials. See the text for details.
    || LIP responses for the FD task during both [correct, error] trials (Shadlen, Newsome 2001). LIP encodes the perceptual decision regardless of the true direction of the dots. Predictiveness of LIP responses on error trials decreases with increasing coherence.
  • image p334fig08.60 Behavioral data (left image) and simulation (right image) about accuracy in both the RT and FD tasks. See text for details
    || Behavioral data: % correct vs % coherence (Mazurek etal 2003; Roitman, Shadlen 2002). More coherence in the motion causes more accurate decisions. RT task accuracy at weaker coherence levels is slightly better than FD task accuracy.
  • image p335fig08.61 Behavioral data (left image) and simulation (right image) about speed in correct and error trials of the RT task. See text for details.
    || Behavioral data: speed, correct and error trials (RT task) (Roitman, Shadlen 2002). More coherence in the motion causes faster reaction time.
  • image p335fig08.62 More remarkable simulation fits (right column) to LIP neurophysiology data (left column) about where and when to move the eyes.
    || LIP encodes not only where, but also when, to move the eyes. ...No Bayes (Roitman, Shadlen 2002). Firing rate (sp/s) vs time (ms). Slope of firing rate (sp/s^2) vs % correct.
  • image p338fig09.01 The brain regions that help to use visual information for navigating in the world and tracking objects are highlighted in yellow.
    || How does a moving observer use optic flow to navigate while tracking a moving object? [What ventral, Where dorsal] retina -> many locations -> PFC
  • image p338fig09.02 Heading, or the direction of self-motion (green dot), can be derived from the optic flow (red arrows) as an object, in this case an airplane landing, moves forward.
    || Heading and optic flow (Gibson 1950). Optic flow: scene motion generates a velocity field. Heading: direction of travel, i.e. self-motion direction. Heading from optic flow, focus of expansion (Gibson 1950). Humans determine heading accurately to within 1-2 degrees.
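The Gibson recipe in this caption, that heading lies at the focus of expansion of the flow field, can be sketched numerically. For pure translation every flow vector points radially away from the focus of expansion, which turns heading recovery into a small least-squares problem. A minimal sketch with illustrative names; this is a geometric illustration, not the book's neural model:

```python
import numpy as np

def focus_of_expansion(points, flows):
    """Estimate the focus of expansion (heading point) from a pure
    translational flow field. Each flow vector v at point p points
    radially away from the FOE f, so the 2D cross product
    (p - f) x v = 0 for every sample. Rearranged, this is linear in
    f = (fx, fy):
        vy*fx - vx*fy = vy*px - vx*py,
    solved here in the least-squares sense."""
    p, v = np.asarray(points, float), np.asarray(flows, float)
    A = np.stack([v[:, 1], -v[:, 0]], axis=1)
    b = v[:, 1] * p[:, 0] - v[:, 0] * p[:, 1]
    f, *_ = np.linalg.lstsq(A, b, rcond=None)
    return f
```

With noisy flow samples the least-squares fit degrades gracefully, which is loosely consistent with the 1-2 degree human accuracy the caption cites.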
  • image p339fig09.03 When an observer moves forward, an expanding optic flow is caused. Eye rotations cause a translating flow. When these flows are combined, a spiral flow is caused. How do our brains compensate for eye rotations to compute the heading of the expanding optic flow?
    || Optic flow during navigation (adapted from Warren, Hannon 1990) [observer, retinal flow]: [linear movement, expansion], [eye rotation, translation], [combined motion, spiral]
  • image p339fig09.04 This figure emphasizes that the sum of the expansion and translation optic flows is a spiral optic flow. It thereby raises the question: How can the translation flow be subtracted from the spiral flow to recover the expansion flow?
    || Eye rotations add a uniform translation to a flow field. Resulting retinal patterns are spirals. Expansion + translation = spiral
  • image p340fig09.05 An outflow movement command, also called efference copy or corollary discharge, is the source of the signals whereby the commanded eye movement position is subtracted from spiral flow to recover expansion flow and, with it, heading.
    || Subtracting efference copy. Many experiments suggest that the brain internally subtracts the translational component due to eye movements. Efference copy subtracts the translational component using pathways that branch from outflow movement commands to the eye muscles.
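The expansion + translation = spiral decomposition, and the efference-copy subtraction that undoes it, can be written out as a toy planar computation. The function names and the uniform-translation approximation for eye rotation are illustrative assumptions, not Grossberg's circuit equations:

```python
import numpy as np

def spiral_from_components(points, heading, expansion_rate, eye_translation):
    """Retinal flow during forward motion with an eye rotation:
    expansion about the heading point plus a (roughly) uniform
    translational flow added by the rotation, as in the caption's
    Expansion + translation = spiral."""
    p = np.asarray(points, float)
    expansion = expansion_rate * (p - np.asarray(heading, float))
    return expansion + np.asarray(eye_translation, float)

def subtract_efference_copy(retinal_flow, eye_translation):
    """An efference copy of the outflow eye-movement command supplies
    the translational component, which is subtracted at every sample
    to recover the pure expansion flow (and with it, heading)."""
    return np.asarray(retinal_flow, float) - np.asarray(eye_translation, float)
```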
  • image p340fig09.06 Corollary discharges are computed using a branch of the outflow movement commands that move their target muscles.
    ||
  • image p340fig09.07 Log polar remapping from the retina to cortical area V1 and beyond converts expansion, translation, and spiral flows on the retina into parallel flows, with different orientations, on the cortical map.
    || Log polar remapping of optic flow. retina -> cortex. Any combination of expansion and circular motion centered on the fovea maps to cortex as a single direction. Retinal Cartesian coordinates (x,y) map to cortical polar coordinates (r,theta). This makes it easy to compute directional receptive fields in the cortex!
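The single-direction property of the log polar map is easy to verify numerically: under a pure expansion about the fovea, every point's cortical image shifts by the same vector, so the flow becomes parallel. A minimal sketch of the coordinate transform itself:

```python
import numpy as np

def log_polar(points):
    """Log polar remapping of retinal coordinates (x, y) to cortical
    coordinates (log r, theta), as in the retina-to-V1 map described
    in the caption."""
    p = np.asarray(points, float)
    r = np.hypot(p[:, 0], p[:, 1])
    theta = np.arctan2(p[:, 1], p[:, 0])
    return np.stack([np.log(r), theta], axis=1)

# Under a pure expansion about the fovea, p -> s*p with s > 0, every
# point's log polar image shifts by the SAME vector (log s, 0):
# expansion becomes a single parallel flow direction on the cortical
# map. A rotation shifts only theta, and a spiral shifts both, again
# uniformly, which is why directional receptive fields suffice.
```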
  • image p341fig09.08 How the various optic flows on the retina are mapped through V1, MT, and MSTd to then compute heading in parietal cortex was modeled by (Grossberg, Mingolla, Pack 1999), using the crucial transformation via V1 log polar mapping into parallel cortical flow fields.
    || MSTd model (Grossberg, Mingolla, Pack 1999). Retinal motion -> V1 log polar mapping -> Each MT Gaussian RF sums motion in preferred direction -> Each MSTd cell sums MT cell inputs with same log polar direction -> Efference copy subtracts rotational flow from MSTd cells.
  • image p341fig09.09 Responses of MSTd cells that are used to compute heading. See the text for details.
    || Cortical area MSTd (adapted from Graziano, Anderson, Snowden 1994). MSTd cells are sensitive to spiral motion as combinations of rotation and expansion.
  • image p342fig09.10 Model simulations of how the peak of MSTd cell activation varies with changes of heading.
    || Heading in log polar space: Retina -> log polar -> MSTd cell. Log polar motion direction correlates with heading eccentricity.
  • image p342fig09.11 Psychophysical data (left panel) and computer simulation (right column) of the importance of efference copy in real movements. See the text for details.
    || Heading: move to wall and fixate stationary object (adapted from Warren, Hannon 1990). Inaccurate for simulated eye rotation, accurate for real eye rotation: heading needs confirmation by efference copy!
  • image p343fig09.12 Transforming two retinal views of the Simpsons into log polar coordinates dramatizes the problem that our brains need to solve in order to separate, and recognize, overlapping figures.
    || View 1: cortical magnification. View 2: How do we know if we are still fixating on the same object?!
  • image p343fig09.13 When one scans the three different types of pears in the left image, as illustrated by the jagged blue curve with red movement end positions, and transforms the resulting retinal images via the cortical magnification factor, or log polar mapping, the result is the series of images in the right column. How do our brains figure out from such confusing data which views belong to which pear?
    || View-invariant object learning and recognition. Three pears: Anjou, Bartlett, Comice. Which is the Bartlett pear? During unsupervised scanning and learning about the world, no one tells the brain what views belong to which objects while it learns view-invariant object categories. Cortical magnification in V1.
  • image p344fig09.14 (top row, left column) By fitting MT tuning curves with Gaussian receptive fields, a tuning width of 38° is estimated, and leads to the observed standard spiral tuning of 61° in MSTd. (bottom row, left column) The spiral tuning estimate in Figure 9.16 maximizes the position invariance of MSTd receptive fields. (top row, right column) Heading sensitivity is not impaired by these parameter choices.
    || [Spiral tuning (deg), position invariance (deg^(-1)), heading sensitivity] versus log polar direction tuning σ (deg)
  • image p345fig09.15 Double opponent directional receptive fields in MT are capable of detecting the motion of objects relative to each other and their backgrounds.
    || Motion opponency in MT (Born, Tootell 1992). Motion opponent (Grossberg etal), differential motion (Royden etal), subtractive motion cells (Neumann etal). ON center directionally selective: [excit, inhibit]ed by motion in [one, opponent] direction. OFF surround directionally selective: [excit, inhibit]ed by motion in [opponent, center] direction.
  • image p346fig09.16 A macrocircuit of some of the main brain regions that are used to move the eyes. Black boxes denote areas belonging to the saccadic eye movement system (SAC), white boxes the smooth pursuit eye movement system (SPEM), and gray boxes, both systems. The abbreviations for the different brain regions are: LIP - Lateral Intra-Parietal area; FPA - Frontal Pursuit Area; MST - Middle Superior Temporal area; MT - Middle Temporal area; FEF - Frontal Eye Fields; NRPT - Nucleus Reticularis Tegmenti Pontis; DLPN - Dorso-Lateral Pontine Nuclei; SC - Superior Colliculus; CBM - CereBelluM; MVN/rLVN - Medial and Rostro-Lateral Vestibular Nuclei; PPRF - a Peri-Pontine Reticular Formation; TN - Tonic Neurons
    ||
  • image p347fig09.17 The leftward eye movement control channel in the model that I developed with Christopher Pack. See the text for details.
    || retinal image -> MT -> MST[v,d] -> pursuit
  • image p347fig09.18 These circuits between MSTv and MSTd enable predictive target tracking to be achieved by the pursuit system, notably when the eyes are successfully foveating a moving target. Solid arrows depict excitatory connections, dashed arrows depict inhibitory connections.
    ||
  • image p348fig09.19 How a constant pursuit speed that is commanded by MSTv cells starts by using target speed on the retina and ends by using background speed on the retina in the reverse direction during successful predictive pursuit.
    || target speed on retina, background speed on retina, pursuit speed command by MSTv cells
  • image p349fig09.20 Using virtual reality displays (left image), (Fajen, Warren 2003) collected data (right two images) about how observers avoid obstacles (open circular disks) as a function of their distance and angular position as they navigate towards a fixed goal (x). These data illustrate how goals act as attractors while obstacles act as repellers.
    || Steering from optic flow (Fajen, Warren 2003). goals are attractors, obstacles are repellers. Damped spring model explains human steering data.
  • image p349fig09.21 How attractor-repeller dynamics with Gaussians change the net steering gradient as the goal is approached.
    || Steering dynamics: goal approach. body-centered coordinates [obstacle, goal, heading] -> steering
  • image p350fig09.22 How the negative Gaussian of an obstacle causes a peak shift to avoid the obstacle without losing sight of how to reach the goal.
    || Steering dynamics: obstacle avoidance. body-centered coordinates [obstacle, goal, heading] -> steering
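The damped-spring, attractor-repeller steering idea in these captions can be sketched as a single angular-acceleration equation in the spirit of Fajen and Warren (2003): the goal attracts heading toward the goal angle, obstacles repel it, and both influences fall off with distance. Parameter values here are illustrative defaults of the kind fit in that literature, not figures taken from the book:

```python
import math

def steering_accel(phi, phi_dot, goal_angle, goal_dist,
                   obstacle_angle, obstacle_dist,
                   b=3.25, kg=7.5, c1=0.40, c2=0.40,
                   ko=198.0, c3=6.5, c4=0.8):
    """Angular acceleration of heading phi (radians) for one goal and
    one obstacle, both in body-centered coordinates.
    - Damping (-b*phi_dot) smooths turns.
    - The goal term is an attractor: a spring pulling phi toward the
      goal angle, with stiffness growing as the goal nears.
    - The obstacle term is a repeller: it pushes phi away from the
      obstacle angle, decaying with angular offset and with distance,
      which produces the peak shift described in the caption."""
    goal_term = -kg * (phi - goal_angle) * (math.exp(-c1 * goal_dist) + c2)
    obst_term = (ko * (phi - obstacle_angle)
                 * math.exp(-c3 * abs(phi - obstacle_angle))
                 * math.exp(-c4 * obstacle_dist))
    return -b * phi_dot + goal_term + obst_term
```

Integrating this equation over time steers an agent around the obstacle while converging on the goal, the attractor-repeller behavior the figures illustrate.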
  • image p350fig09.23 Unidirectional transient cells respond to changes in all image contours as an auto navigates an urban scene while taking a video of it.
    || Unidirectional transient cells (Baloch, Grossberg 1997; Berzhanskaya, Grossberg, Mingolla 2007). Transient cells respond to leading and trailing boundaries. Transient cells response, driving video
  • image p351fig09.24 Directional transient cells respond most to motion in their preferred directions.
    || Directional transient cells. 8 directions, 3 speeds
  • image p351fig09.25 By the time MT+ is reached, directional transient cells and directional filters have begun to extract more global directional information from the image.
    || MT+ computes global motion estimate. Estimate global motion from noisy local motion estimates.
  • image p352fig09.26 The final stage of the model computes a beautiful expansion optic flow that permits an easy estimate of the heading direction, with an accuracy that matches that of human navigators.
    || The model generates accurate heading (Warren, Hannon 1990; Royden, Crowell, Banks 1994). Maximally active MSTd cell = heading estimate. Accuracy matches human data. Random dots [mean +-1.5°, worst +-3.8°], Random dots with rotation [accurate with rotations <1°/s, rotation increases, error decreases], OpenGL & Yosemite benchmark +-1.5°, Driving video +-3°.
  • image p354fig10.01 The laminar cortical circuit that realizes how we pay attention to an object sends signals from layer 6 of a higher cortical level to layer 6 of a lower cortical level and then back up to layer 4. This "folded feedback" circuit realizes a top-down, modulatory on-center, off-surround circuit that realizes the ART Matching Rule.
    || Top-down attention and folded feedback. Attentional signals also feed back into 6-to-4 on-center off-surround. 1-to-5-to-6 feedback path: Macaque (Lund, Booth 1975) cat (Gilbert, Wiesel 1979). V2-to-V1 feedback is on-center off-surround and affects layer 6 of V1 the most (Bullier etal 1996; Sandell, Schiller 1982). Attended stimuli enhanced, ignored stimuli suppressed. This circuit supports the predicted ART Matching Rule! [LGN, V[1,2][6->1]]
  • image p355fig10.02 Distinguishing processes of seeing vs knowing has been difficult because they interact so strongly.
    || Seeing vs. Knowing. Seeing and knowing [operate at different levels of the brain, use specialized circuits], but they [interact via feedback, use similar cortical designs, feedback is needed for conscious perception]. Cerebral Cortex: Seeing [V1-V4, MT-MST], Knowing [IT, PFC].
  • image p356fig10.03 Laminar computing achieves at least three basic properties of visual processing that have analogs in all biologically intelligent behaviors. These properties may be found in all cortical circuits in specialized form.
    || What does Laminar Computing achieve? 1. Self-stabilizing development and learning; 2. Seamless fusion of a) pre-attentive automatic bottom-up processing, b) attentive task-selective top-down processing; 3. Analog coherence: Solution of Binding Problem for perceptual grouping without loss of analog sensitivity. Even the earliest visual cortical stages carry out active adaptive information processing: [learn, group, attention]ing
  • image p357fig10.04 Laminar Computing achieves its properties by computing in a new way that synthesizes the best properties of feedforward and feedback interactions, analog and digital computations, and preattentive and attentive learning. The property of analog coherence enables coherent groupings and decisions to form without losing sensitivity to the amount of evidence that supports them.
    || Laminar Computing: a new way to compute. 1. Feedforward and feedback: a) Fast feedforward processing when data are unambiguous (eg Thorpe etal), b) slower feedback chooses among ambiguous alternatives [self-normalizing property, real-time probability theory], c) A self-organizing system that trades certainty against speed: Goes beyond Bayesian models! 2. Analog and Digital: Analog Coherence combines the stability of digital with the sensitivity of analog. 3. Preattentive and Attentive Learning: Reconciles the differences of (eg) Helmholtz and Kanizsa, "A preattentive grouping is its own 'attentional' prime"
  • image p359fig10.05 Activation of V1 is initiated, in part, by direct excitatory signals from the LGN to layer 4 of V1.
    || How are layer 2/3 bipole cells activated? Direct bottom-up activation of layer 4. LGN -> V1 layer 4. Strong bottom-up LGN input to layer 4 (Stratford etal 1996; Chung, Ferster 1998). Many details omitted.
  • image p359fig10.06 Another, albeit indirect, pathway from LGN exists that can also excite layer 4 of V1. Why are these two pathways not redundant? The answer, ultimately, has to do with how the cortex learns, as well as with how it pays attention. See the text for details.
    || Another bottom-up input to layer 4: Why?? Layer 6-to-4 on-center off-surround (Grieve, Sillito 1991, 1995; Ahmed etal 1994, 1997). LGN projects to layers 6 and 4. Layer 6 excites spiny stellates in column above it. Medium range connections onto inhibitory neurons. 6-to-4 path acts as on-center off-surround.
  • image p359fig10.07 The two bottom-up pathways from LGN to layer 4 of V1 can together activate layer 4 and contrast-normalize layer 4 responses.
    || Bottom-up contrast normalization (Grossberg 1968, 1973; Sperling, Sondhi 1968; Heeger 1992; Douglas etal 1995; Shapley etal 2004). Together, direct LGN-to-4 path and 6-to-4 on-center off-surround provide contrast normalization if cells obey shunting or membrane equation dynamics.
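The shunting (membrane equation) dynamics that the caption invokes can be made concrete with a single cell. At steady state the excitatory and inhibitory conductances appear in the denominator, so the response is bounded and normalized by total input, which is the contrast-normalization property. A one-cell sketch with generic constants, not the full two-pathway network:

```python
def shunting_steady_state(excitation, inhibition, A=1.0, B=1.0):
    """Steady state of the shunting membrane equation
        dx/dt = -A*x + (B - x)*E - x*I,
    obtained by setting dx/dt = 0:
        x = B*E / (A + E + I).
    Because E and I also appear in the denominator, x stays below the
    ceiling B and depends mainly on the RATIO of excitation to total
    input, i.e. responses are contrast-normalized."""
    E, I = excitation, inhibition
    return B * E / (A + E + I)
```

Doubling both E and I (scaling input contrast) barely changes the response once inputs are strong, while x can never exceed B.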
  • image p360fig10.08 The bottom-up on-center off-surround from LGN-to-6-to-4 has a modulatory on-center because of its role in realizing the ART Matching Rule and, with it, the ability of the cortex to dynamically stabilize its learned memories.
    || Modulation of priming by 6-to-4 on-center (Stratford etal 1996; Callaway 1998). On-center 6-to-4 excitation is inhibited down to being modulatory (priming, subthreshold). On-center 6-to-4 excitation cannot activate layer 4 on its own. Clarifies need for direct path. Prediction: plays key role in stable grouping, development and learning. ART Matching Rule!
  • image p360fig10.09 Perceptual grouping is carried out in layer 2/3 by long-range horizontal excitatory recurrent connections, supplemented by short-range disynaptic inhibitory connections that together realize the bipole grouping properties that are diagrammed in Figure 10.10.
    || Grouping starts in layer 2/3. LGN-> 6-> 4-> 2/3: 1. Long-range horizontal excitation links collinear, coaxial receptive fields (Gilbert, Wiesel 1989; Bosking etal 1997; Schmidt etal 1997) 2. Short-range disynaptic inhibition of target pyramidal via pool of interneurons (Hirsch, Gilbert 1991) 3. Unambiguous groupings can form and generate feedforward outputs quickly (Thorpe etal 1996).
  • image p361fig10.10 Bipole grouping is achieved by long-range horizontal recurrent connections that also give rise to short-range inhibitory interneurons which inhibit nearby bipole cells as well as each other.
    || Bipole property controls perceptual grouping. Collinear input on both sides. Excitatory inputs summate. Inhibitory inputs normalize, shunting inhibition! Two-against-one. Cell is excited.
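The two-against-one arithmetic of the bipole property can be sketched in a few lines. The key is that the inhibitory interneurons also inhibit each other, so the inhibitory pool contributes roughly the strength of one flank rather than the sum of both. A toy illustration of the caption's logic, not the laminar model's differential equations:

```python
def bipole_fires(left_flank, right_flank, threshold=1.0):
    """Toy bipole cell. Long-range excitation from the two collinear
    flanks summates. The inhibitory interneuron pool inhibits itself,
    so its net output is normalized to roughly the strength of the
    larger single flank. With input on one side only, excitation and
    inhibition cancel (one-against-one) and the cell stays
    subthreshold; with both sides active, excitation wins
    two-against-one and the cell is excited."""
    excitation = left_flank + right_flank
    inhibition = max(left_flank, right_flank)  # self-normalized interneuron pool
    return (excitation - inhibition) >= threshold
```

This is why an illusory contour can be completed between two inducers but never extends outward past a single inducer.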
  • image p362fig10.11 Feedback between layer 2/3 to the layer 6-to-4-to-2/3 feedback loop chooses the strongest grouping in cases where there is more than one. If only one grouping exists, then the circuit can function very quickly in a feedforward manner. When multiple groupings exist, the cortex "runs as fast as it can" to select the one with the most evidence to support it using the self-normalizing inhibition in the layer 6-to-4 off-surround.
    || How is the final grouping selected? Folded feedback LGN-> 6-> 4-> 2/3. 1. Layer 2/3 groupings feed back into 6-to-4 on-center off-surround: a) direct layer 2/3-to-6 path; b) can also go via layer 5 (Blasdel etal 1985; Kisvarday etal 1989). 2. Strongest grouping enhanced by its on-center. 3. Inputs to weaker groupings suppressed by off-surround. 4. Interlaminar feedback creates functional columns. Activities of conflicting groupings are reduced by self-normalizing inhibition, slowing processing; intracortical feedback selects and contrast-enhances the winning grouping, speeding processing.
  • image p363fig10.12 The same laminar circuit design repeats in V1 and V2, albeit with specializations that include longer horizontal grouping axons and figure-ground separation interactions.
    || V2 repeats V1 circuitry at larger spatial scale, LGN-> V1[6,4,2/3]-> V2[6,4,2/3]. V2 layer 2/3 horizontal axons are longer-range than in V1 (Amir etal 1993). Therefore, longer-range groupings can form in V2 (Von der Heydt etal 1984)
  • image p364fig10.13 The bottom-up adaptive filter, intracortical grouping circuit, and intercortical top-down attentional circuit all use the same competitive decision circuit between layers 6 and 4, called the attention-preattention interface, with which to select the featural patterns that will be processed.
    || Bottom-up filters and intracortical grouping feedback use the same 6-to-4 decision circuit, LGN-> Vx[6,4,2/3]: a competitive decision circuit, a modulatory on-center off-surround network. Top-down intercortical attention also uses the same 6-to-4 decision circuit!
  • image p364fig10.14 This figure emphasizes how preattentive intracortical groupings and top-down intercortical attention share the same modulatory on-center, off-surround layer 4-to-6 decision circuit.
    || Explanation: grouping and attention share the same modulatory decision circuit. Layer 6-to-4-to-2/3 pathway shown; there is also a layer 6-to-1-to-2/3 path. Intercortical attention and intracortical feedback from groupings both act via a modulatory on-center off-surround decision circuit.
  • image p367fig10.15 Data (left column) and simulation (right column) of how attention prevents a masking stimulus from inhibiting the response to the on-center of the cell from which the recording was made.
    || Attention protects target from masking stimulus (Reynolds etal 1999; Grossberg, Raizada 2000).
  • image p367fig10.16 Neurophysiological data (left image) and simulation (right image) of how a low-contrast target can be facilitated if it is surrounded by a pair of collinear flankers, and suppressed by them if it has high contrast.
    || Flankers can enhance or suppress targets (Polat etal 1998; Grossberg, Raizada 2000). target alone, target + flankers, flankers alone.
  • image p368fig10.17 Neurophysiological data (left image) and simulation (right image) showing that attention has a greater effect on low contrast than high contrast targets.
    || Attention has greater effect on low contrast targets (DeWeerd etal 1999; Raizada, Grossberg 2001). Threshold increase (deg) vs Grating contrast (%), [no, with] attention
  • image p368fig10.18 Neurophysiological data (left image) and simulation (right image) of relative on-cell activities when the input to that cell may also be surrounded by iso-orientation or perpendicular textures.
    || Texture reduces response to a bar: iso-orientation suppression (Knierim, van Essen 1992), perpendicular suppression (Raizada, Grossberg 2001)
  • image p369fig10.19 Data from (Watanabe etal 2001) showing perceptual learning of the coherent motion direction, despite the lack of extra-foveal attention and awareness of the moving stimuli.
    || Unconscious perceptual learning of motion direction, % correct for two tests, compared to chance level results.
  • image p371fig11.01 FACADE theory explains how the 3D boundaries and surfaces are formed with which we see the world in depth.
    || 3D Vision and figure-ground perception (Grossberg 1987, 1994, 1997). How are 3D boundaries and 3D surfaces formed? How the world looks without assuming naive realism. Form And Color And DEpth theory (FACADE). Prediction: Visible figure-ground-separated Form-And-Color-And-DEpth is represented in cortical area V4.
  • image p372fig11.02 FACADE theory explains how multiple depth-selective boundary representations can capture the surface lightnesses and colors at the correct depths. The fact that both surface qualia and depth are determined by a single process implies that, for example, a change in brightness can cause a change in depth.
    || 3D surface filling-in. From filling-in of surface lightness and color to filling-in of surface depth. Prediction: Depth-selective boundary-gated filling-in defines the 3D surfaces that we see. Prediction: A single process fills-in lightness, color, and depth. Can a change in brightness cause a change in depth? YES! eg proximity-luminance covariance (Egusa 1983; Schwartz, Sperling 1983). Why is depth not more unstable when lighting changes? Prediction: Discounting the illuminant limits variability.
  • image p373fig11.03 Both contrast-specific binocular fusion and contrast-invariant boundary perception are needed to properly see the world in depth.
    || How to unify contrast-specific binocular fusion with contrast-invariant boundary perception? Contrast-specific binocular fusion: [Left, right] eye view [, no] binocular fusion. Contrast-invariant boundary perception: contrast polarity along the gray square edge reverses; opposite polarities are pooled to form the object boundary.
  • image p374fig11.04 The three processing stages of monocular simple cells, binocular simple cells, and complex cells accomplish both contrast-specific binocular fusion and contrast-invariant boundary perception.
    || Model unifies contrast-specific binocular fusion and contrast-invariant boundary perception (Ohzawa etal 1990; Grossberg, McLoughlin 1997). [Left, right] eye V1-4 simple cells-> V1-3B simple cells-> V1-2/3A complex cells. Contrast-specific stereoscopic fusion by disparity-selective simple cells. Contrast-invariant boundaries by pooling opposite polarity binocular simple cells at complex cells in layer 2/3A.
  • image p374fig11.05 The brain uses a contrast constraint on binocular fusion to help ensure that only contrasts which are derived from the same objects in space are binocularly matched.
    || Contrast constraint on binocular fusion. Left and right input from same object has similar contrast. Percept changes when one contrast is different. Fusion only occurs between bars of similar contrast (McKee etal 1994)
  • image p375fig11.06 The contrast constraint on binocular fusion is realized by obligate cells in layer 3B of cortical area V1.
    || Model implements contrast constraint on binocular fusion (cf. "obligate" cells, Poggio 1991). An ecological constraint on cortical development. [left, right] eye cart V1-[4 monocular simple, 3B binocular simple, 2/3A complex] cells. Inhibitory cells (red) ensure that fusion occurs when contrasts in left and right eye are approximately equal.
  • image p375fig11.07 The 3D LAMINART model uses both monocular and binocular simple cells to binocularly fuse like image contrasts. The remainder of the model generates 3D boundary and surface representations of multiple kinds of experiments as well as of natural scenes.
    || 3D LAMINART model (Cao, Grossberg 2005). [left, right] eye cart [LGN, V1 blob [4->3B->2/3A], V2 thin stripe [4->2/3A], V4]. V1 blob [V1-4 monocular, V1 interior binocular] simple cells. [complex, simple, inhibitory] cells, on-center off-surround
  • image p376fig11.08 The contrast constraint on binocular fusion is not sufficient to prevent many of the false binocular matches that satisfy this constraint.
    || How to solve the correspondence problem? How does the brain inhibit false matches? Contrast constraint is not enough. [stimulus, multiple possible binocular matches] - Which squares in the two retinal images must be fused to form the correct percept?
  • image p376fig11.09 The disparity filter in V2 helps to solve the correspondence problem by eliminating spurious contrasts using line-of-sight inhibition.
    || Model V2 disparity filter solves the correspondence problem. An ecological constraint on cortical development. [left, right] eye view: False matches (black) suppressed by line-of-sight inhibition (green lines). "Cells that fire together wire together".
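The outcome of line-of-sight inhibition can be sketched combinatorially: candidate binocular matches that share a left-eye or right-eye line of sight compete, and only the strongest survives on each line. This greedy sketch illustrates the competition's end state, not the disparity filter's actual recurrent dynamics; the tuple format is an assumption for illustration:

```python
def disparity_filter(matches):
    """Toy line-of-sight inhibition for the correspondence problem.
    Each candidate binocular match is (left_index, right_index,
    strength). Matches sharing either eye's line of sight inhibit one
    another; processing candidates from strongest to weakest and
    keeping only those whose lines of sight are still free suppresses
    the false matches, as in the caption."""
    survivors = []
    used_left, used_right = set(), set()
    for l, r, s in sorted(matches, key=lambda m: -m[2]):
        if l not in used_left and r not in used_right:
            survivors.append((l, r, s))
            used_left.add(l)
            used_right.add(r)
    return survivors
```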
  • image p376fig11.10 The 3D LAMINART model shows how the disparity filter can be integrated into the circuit that completes 3D boundary representations using bipole grouping cells. It also explains how surface contours can strengthen boundaries that succeed in generating closed filling-in domains.
    || 3D LAMINART model (Cao, Grossberg 2005). [left, right] eye cart [LGN, V1 blob [4->3B->2/3A] surface contour, V2 thin stripe (monocular surface) [4->2/3A], V2 interior [disynaptic inhibitory interneurons, bipole grouping cells, disparity filter], V4 binocular surface]. [complex, simple, inhibitory] cells, on-center off-surround
  • image p377fig11.11 DaVinci stereopsis phenomena occur when only one eye can receive visual inputs from part of a 3D scene due to occlusion by a nearer surface.
    || How does monocular information contribute to depth perception? DaVinci stereopsis (Gillam etal 1999). Only by utilizing monocular information can the visual system create the correct depth percept. [left, right] eye view
  • image p378fig11.12 How monocular and binocular information are combined in V1 and V2 in the 3D LAMINART model.
    || Model utilizes monocular information. [left, right] eye cart V1-[4 monocular simple, 3B binocular simple, 2/3A [mo,bi]nocular complex] cells, V2-4 binocular complex cells. black = monocular cells, blue = binocular cells. In V2, monocular inputs add to binocular inputs along the line of sight and contribute to depth perception.
  • image p379fig11.13 How the 3D LAMINART model explains DaVinci stereopsis. All the stages of boundary and surface formation are color coded to clarify their explanation. Although each mechanism is very simple, when all of them act together, the correct depthful surface representation is generated. See the text for details.
    || DaVinci stereopsis (Nakayama, Shimojo 1990). An emergent property of the previous simple mechanisms working together. [V1 binocular boundaries -> V2 initial boundaries -> V2 final boundaries -> V4 surface] pair [Binocular match: boundaries of thick bar -> Add monocular boundaries along lines-of-sight -> Line-of-sight inhibition kills weaker vertical boundaries -> 3D surface percept not just a disparity match!] pair [Binocular match: right edge of thin and thick bars -> Strongest boundaries: binocular and monocular boundaries add -> Vertical boundaries from monocular left edge of thin bar survive -> Filling-in contained by connected boundaries]. cart [very near, near, fixation plane, far, very far]
  • image p380fig11.14 The model explanation of DaVinci stereopsis when the input stimuli have opposite contrast polarities.
    || Polarity-reversed Da Vinci stereopsis (Nakayama, Shimojo 1990). Same explanation! (... as Figure 11.13 ...) [V1 binocular boundaries -> V2 initial boundaries -> V2 final boundaries -> V4 surface] cart [very near, near, fixation plane, far, very far]
  • image p381fig11.15 The same model mechanisms explain the surface percept that is generated by the variant of DaVinci stereopsis that Gillam, Blackburn, and Nakayama studied in 1999.
    || DaVinci stereopsis (Gillam, Blackburn, Nakayama 1999). same model mechanisms. [V1 binocular boundaries -> V2 initial boundaries -> V2 final boundaries -> V4 surface] cart [very near, near, fixation plane, far, very far]
  • image p382fig11.16 The version of DaVinci stereopsis wherein three narrow rectangles are binocularly matched with one thick rectangle can also be explained in a similar way.
    || DaVinci stereopsis of [3 narrow, one thick] rectangles (Gillam, Blackburn, Nakayama 1999). [V1 binocular boundaries -> V2 initial boundaries -> V2 final boundaries -> V4 surface] cart [very near, near, fixation plane, far, very far]
  • image p383fig11.17 The bars in the left and right images that are in the same positions are marked in red to simplify tracking how they are processed at subsequent stages.
    || The Venetian blind effect (Howard, Rogers 1995). Every second bar on L in same position as every third bar on R. These bars are marked in red; see them match in Fixation Plane. [V1 binocular boundaries -> V2 initial boundaries -> V2 final boundaries -> V4 surface] cart [very near, near, fixation plane, far, very far]
  • image p384fig11.18 Surface and surface-to-boundary surface contour signals that are generated by the Venetian blind image.
    || Venetian blind effect (Howard, Rogers 1995). Every second bar on L in same position as every third bar on R. PERCEPT: 3-bar ramps sloping up from L to R with step returns. [V1 binocular boundaries -> V2 initial boundaries -> V2 final boundaries -> V4 surface] cart [very near, near, fixation plane, far, very far]
  • image p385fig11.19 Dichoptic masking occurs when the bars in the left and right images have sufficiently different contrasts.
    || Dichoptic masking (McKee, Bravo, Smallman, Legge 1994). [V1 binocular boundaries -> V2 initial boundaries -> V2 final boundaries -> V4 surface] cart [very near, near, fixation plane, far, very far]
  • image p385fig11.20 Dichoptic masking occurs in Panum's limiting case for reasons explained in the text.
    || Dichoptic masking in Panum's limiting case (McKee, Bravo, Smallman, Legge 1995). Panum's limiting case is a simplified version of the Venetian blind effect! [V1 binocular boundaries -> V2 initial boundaries -> V2 final boundaries -> V4 surface] cart [very near, near, fixation plane, far, very far]
  • image p386fig11.21 A simulation of the Craik-O'Brien-Cornsweet Effect when viewed on a planar surface in depth.
    || Craik-O'Brien-Cornsweet Effect. Can the model simulate other surface percepts? eg surface brightness. The 2D surface with the image on it is viewed at a very near depth. Adapts (Grossberg, Todorovic 1988) to 3D. [V1 binocular boundaries -> V2 initial boundaries -> V2 final boundaries -> V4 surface] cart [very near, near, fixation plane, far, very far]
  • image p387fig11.22 Simulation of the boundaries that are generated by the Julesz stereogram in Figure 4.59 (top row) without (second row) and with (third row) surface contour feedback.
    || Boundary cart [V2-2, V2, V1] cart [near, fixation, far]
  • image p388fig11.23 Simulation of the surface percept that is seen in response to a sparse stereogram. The challenge is to assign large regions of ambiguous white to the correct surface in depth.
    || [left, right] retinal input. Surface [near, fixation, far] V4
  • image p388fig11.24 Boundary groupings capture the depth-ambiguous feature contour signals and lift them to the correct surface in depth.
    || [surface, boundary] cart [near, fixation, far] V2.
  • image p389fig11.25 Boundaries are not just edge detectors. If they were, a shaded ellipse would look flat, and uniformly gray.
    || 3D vision and figure-ground separation. Multiple-scale, depth-selective boundary webs. [dark-light, light-dark] boundaries -> complex cells! If boundaries were just edge detectors, there would be just a bounding edge of the ellipse. After filling-in, it would look like this.
  • image p390fig11.26 Although larger scales sometimes look closer (left image), that is not always true, as the right image of (Brown, Weisstein 1988) illustrates. The latter percept is, moreover, bistable. These images show the importance of interactions between groupings and multiple scales to determine perceived surface depths.
    || Multiple-scale depth-selective groupings determine perceived depth (Brown, Weisstein 1988). As an object approaches, it gets bigger on the retina. Does a big scale (RF) always signal NEAR? NO! The same scale can signal either near or far. Some scales fuse more than one disparity.
  • image p391fig11.27 (left image) Each scale can binocularly fuse a subset of disparities, with larger scales fusing more disparities and closer ones than small scales. (right image) Cortical hypercolumns enable binocular fusion to occur in a larger scale even as rivalry occurs in a smaller scale.
    || Multiple-scale grouping and size-disparity correlation. Depth-selective cooperation and competition among multiple scales determines perceived depth: a) Larger scales fuse more depths; b) Simultaneous fusion and rivalry. Boundary pruning using surface contours: Surface-to-boundary feedback from the nearest surface that is surrounded by a connected boundary eliminates redundant boundaries at the same position and further depths.
  • image p391fig11.28 (left image) Ocular dominance columns respond selectively to inputs from one eye or the other. (right image) Inputs from the two eyes are mapped into layer 4C of V1, among other layers.
    || Cortex V1[1, 2/3, 4A, 4B, 4C, 5, 6], LGN
  • image p392fig11.29 Boundary webs of the smallest scales are closer to the boundary edge of the ellipse, and progressively larger scale webs penetrate ever deeper into the ellipse image, due to the amount of evidence that they need to fire. Taken together, they generate a multiple-scale boundary web with depth-selective properties that can capture depth-selective surface filling-in.
    || 3D vision and figure-ground separation. Multiple-scale, depth-selective boundary webs. Instead, different size detectors generate dense boundary webs at different positions and depths along the shading gradient. Small-far, Larger-nearer, Largest-nearest. Each boundary web captures the gray shading in small compartments at its position and depths. A shaded percept in depth results.
  • image p392fig11.30 Multiple scales interact with bipole cells that represent multiple depths, and conversely. See the text for details.
    || How multiple scales vote for multiple depths. Scale-to-depth and depth-to-scale maps. Smallest scale projects to, and receives feedback from, boundary groupings that represent the furthest depths. Largest scale connects to boundary groupings that represent all depths. multiple-[depth, scale] dot [grouping, filter] cells. [small <-> large] vs [far <-> near]
  • image p393fig11.31 (Todd, Akerstrom 1987) created a series of 2D images from discrete black patches on a white disk and showed how the perceived depth varies with the factors summarized in the figure. The LIGHTSHAFT model quantitatively simulated their data.
    || Factors determining depth-from-texture percept. Perceived depth varies with texture element width, but only when elements are elongated and sufficiently aligned with one another to form long-range groupings. Data of (Todd, Akerstrom 1987) simulated by the LIGHTSHAFT model of (Grossberg, Kuhlmann 2007). [HP, LP, CCE, CCS, RO]
  • image p393fig11.32 Kulikowski stereograms involve binocular matching of out-of-phase (a) Gaussians or (b) rectangles. The latter can generate a percept of simultaneous fusion and rivalry. See the text for why.
    ||
  • image p394fig11.33 The Kaufman stereogram also creates a percept of simultaneous fusion and rivalry. The square in depth remains fused and the perpendicular lines in the two images are perceived as rivalrous.
    || 3D groupings determine perceived depth, stereogram (Kaufman 1974). Vertical illusory contours are at different disparities than those of bounding squares. Illusory square is seen in depth. Vertical illusory contours are binocularly fused and determine the perceived depth of the square. Thin, oblique lines, being perpendicular, are rivalrous: simultaneous fusion and rivalry.
  • image p395fig11.34 A comparison of the properties of other rivalry models with those of the 3D LAMINART model (surrounded by red border). Significantly, only 3D LAMINART explains both stable vision and rivalry (green border).
    || Comparison of rivalry models
  • image p396fig11.35 Three properties of bipole boundary grouping in V2 can explain how boundaries oscillate in response to rivalry-inducing stimuli. Because all boundaries are invisible, however, these properties are not sufficient to generate a conscious percept of rivalrous surfaces.
    || 3 V2 boundary properties cause binocular rivalry. 1. Bipole grouping, 2. Orientational competition, 3. Activity-dependent habituation
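Properties 2 and 3 above already produce switching in a two-population caricature: each orientation grouping inhibits the other, and a habituative transmitter gate weakens whichever grouping is currently dominant until the suppressed grouping rebounds. A minimal sketch with illustrative parameters; `simulate_rivalry` is an invented name, not the 3D LAMINART circuit.

```python
def simulate_rivalry(steps=20000, dt=0.01):
    """Two rivalrous boundary populations; returns the dominant one per step."""
    x = [0.6, 0.4]            # activities of the two orientation groupings
    z = [1.0, 1.0]            # habituative transmitter gates
    I = [1.0, 1.0]            # equal bottom-up inputs
    beta, eps, k = 2.0, 0.02, 4.0   # competition, habituation rate, depletion
    dominant = []
    for _ in range(steps):
        # orientational competition: each grouping inhibits the other
        x0 = x[0] + dt * (-x[0] + I[0] * z[0] - beta * max(x[1], 0.0))
        x1 = x[1] + dt * (-x[1] + I[1] * z[1] - beta * max(x[0], 0.0))
        # activity-dependent habituation: active groupings deplete their gates
        z0 = z[0] + dt * eps * (1.0 - z[0] - k * max(x[0], 0.0) * z[0])
        z1 = z[1] + dt * eps * (1.0 - z[1] - k * max(x[1], 0.0) * z[1])
        x, z = [x0, x1], [z0, z1]
        dominant.append(0 if x[0] > x[1] else 1)
    return dominant
```

With equal inputs, the initially stronger grouping wins, habituates, and then loses dominance to its rival, so the dominance record alternates between the two populations.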
  • image p397fig11.36 Simulation of the temporal dynamics of rivalrous, but coherent, boundary switching.
    || Simulation of 2D rivalry dynamics. [Inputs, Temporal dynamics of V2 layer 2/3 boundary cells] cart [left, right]
  • image p398fig11.37 Simulation of the no swap baseline condition of (Logothetis, Leopold, Sheinberg 1996).
    || [Binocular, [left, right] eye] activity
  • image p399fig11.38 Simulation of the swap condition of (Logothetis, Leopold, Sheinberg 1996).
    || [Binocular, [left, right] eye] activity
  • image p399fig11.39 Simulation of the eye rivalry data of (Lee, Blake 1999).
    || [Binocular, [left, right] eye] activity
  • image p400fig11.40 When planar 2D parallelograms are juxtaposed, the resultant forms generate 3D percepts that are sensitive to the configuration of angles and edges in the figure. See the text for why.
    || 3D representation of 2D images, Monocular cues (eg angles) can interact together to yield 3D interpretation. Monocular cues by themselves are often ambiguous. Same angles and shapes, different surface slants. How do these ambiguous 2D shapes contextually define a 3D object form?
  • image p401fig11.41 The 3D LAMINART model proposes how angle cells and disparity-gradient cells interact through learning to generate 3D representations of slanted objects.
    || 3D LAMINART model. [LGN, V1, V2, V4] Four key additions: 1. Angle cells - tuned to various angles; 2. Disparity-gradient cells - tuned to disparity gradients in the image; 3. Weights from [angle to disparity-gradient] cells - learned while viewing 3D images; 4. Collinear grouping between [angle to disparity-gradient] cells - disambiguates ambiguous groupings.
  • image p401fig11.42 A hypothetical cortical hypercolumn structure proposes how angle cells and disparity-gradient cells, including bipole cells that stay within a given depth, may self-organize during development.
    || Hypercolumn representation of angles [left, right] cart [far-to-near, zero, near-to-far]
  • image p402fig11.43 A pair of disparate images of a scene from the University of Tsukuba Multiview image database.
    || input [left, right]
  • image p402fig11.44 3D scenic reconstruction of the image in Figure 11.43 by the 3D LAMINART model.
    || Disparity [5, 6, 8, 10, 11, 14]: images of objects in common depth planes
  • image p403fig11.45 The multiple boundary and surface scales that were used to simulate a reconstruction of the SAR image in Figure 3.24.
    || SAR processing by multiple scales. [boundaries before completion, boundaries after completion, surface filling-in] versus scale [small, medium, large]. large scale bipole
  • image p405fig12.01 A What ventral cortical stream and Where/How dorsal cortical stream have been described for audition, no less than for vision.
    || Parietal lobe: where; Temporal lobe: what. V1-> [[what: IT], [where: PPC-> DLPFC]]. A1-> [[what: [ST-> VLPFC], VLPFC], [where: [PPC-> DLPFC], DLPFC]].
  • image p406fig12.02 The Vector Integration to Endpoint, or VITE, model of arm trajectory formation enables the three S's of a movement to be realized: Synergy formation, Synchrony of muscles within a synergy, and variable Speed that is under volitional control (G). This is accomplished by subtracting a present position vector (P) from a target position vector (T) to form a difference vector (V) which moves P towards T at a speed that is determined by G.
    || The three S's of movement control. T-> D [-> [[D]+G]-> P->], P-> D (inhib), G-> [[D]+G]. 1. Synergy - Defining T determines the muscle groups that will contract during the movement. 2. Synchrony - When G turns on, all muscle groups for which D != 0 contract by variable amounts in equal time. Because G multiplies D, it does not change the direction in which P moves to acquire T: straight line movement. 3. Speed - P integrates D at rate G until P = T. Increasing (decreasing) G makes the movement faster (slower).
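The trajectory law in this caption can be written down directly: D relaxes toward T - P, and P integrates the GO-gated D. A minimal Euler sketch with illustrative rate constants; `vite` and its arguments are invented names, not the published implementation.

```python
import numpy as np

def vite(T, P0, G, steps=5000, dt=0.001, alpha=30.0):
    """T: target position vector; P0: initial present position; G(t): GO signal."""
    T = np.asarray(T, dtype=float)
    P = np.asarray(P0, dtype=float)
    D = np.zeros_like(P)                  # difference vector
    speeds = []
    for n in range(steps):
        D += dt * alpha * (-D + (T - P))  # D tracks the mismatch T - P
        dP = G(n * dt) * D                # GO signal G gates outflow velocity
        P += dt * dP
        speeds.append(float(np.linalg.norm(dP)))
    return P, speeds
```

With a constant GO signal, P converges to T along a straight line and the speed profile rises and falls (bell-shaped); doubling G produces a higher peak speed for the same movement, as in the surrounding figures.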
  • image p407fig12.03 Neurophysiological data showing how motor cortical cells code different vectors that are sensitive to both the direction of the commanded movement and its length.
    || (a) Single primary motor cortex neuron, onset of movement -> on..., radial architecture... (b) Motor cortex neuronal population, radial architecture...
  • image p409fig12.04 (top half) Neurophysiological data of vector cell responses in motor cortex. (bottom half) VITE model simulations of a simple movement in which the model's difference vector simulates the data as an emergent property of network interactions.
    || Neurophysiological data. VITE model [Present Position vector, Difference vector, Outflow velocity vector, go signal].
  • image p410fig12.05 VITE simulation of velocity profile invariance if the same GO signal gates shorter (a) or longer (b) movements. Note the higher velocities in (b).
    || [[short, long] cart [G, dP/dt]] vs time. G = GO signal, dP/dt = velocity profile.
  • image p410fig12.06 Monkeys seamlessly transformed a movement initiated towards the 2 o'clock target into one towards the 10 o'clock target when the latter target was substituted 50 or 100 msec after activation of the first target light.
    ||
  • image p411fig12.07 The left column simulation by VITE shows the velocity profile when the GO signal (G) starts with the movement. The right column shows that the peak velocity is much greater if a second movement begins when the GO signal is already positive.
    || Higher peak velocity due to target switching. VITE simulation of higher peak speed if second target rides on first GO signal. [[first, second] target cart [G, dP/dt]] vs time. Second target GO is much higher. G = GO signal, dP/dt = velocity profile.
  • image p411fig12.08 Agonist-antagonist opponent organization of difference vector (DV) and present position vector (PPV) processing stages and how GO signals gate them.
    ||
  • image p412fig12.09 How a Vector Associative Map, or VAM, model uses mismatch learning during its development to calibrate inputs from a target position vector (T) and a present position vector (P) via adaptive weights at the difference vector (D). See the text for details.
    || Vector Associative Map model (VAM). During critical period, Endogenous Random Generator (ERG+) turns on, activates P, and causes random movements that sample workspace. When ERG+ shuts off, posture occurs. ERG- then turns on (rebound) and opens Now Print (NP) gate, that dumps P into T. Mismatch learning enables adaptive weights between T and D to change until D (the mismatch) approaches 0. Then T and P are both correctly calibrated to represent the same positions.
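The calibration loop in this caption reduces to delta-rule learning at the difference vector: whenever the Now Print gate copies P into T, the mismatch D = w*T - P drives the adaptive gain w toward its calibrated value. A one-dimensional sketch with invented names (`calibrate`, `w`) and illustrative parameters.

```python
import random

def calibrate(samples=2000, lr=0.05, w0=0.3, seed=0):
    """Returns the adaptive T -> D weight after mismatch learning."""
    rng = random.Random(seed)
    w = w0                               # initially miscalibrated T -> D gain
    for _ in range(samples):
        P = rng.uniform(0.1, 1.0)        # posture reached by ERG+ babbling
        T = P                            # Now Print gate: dump P into T
        D = w * T - P                    # mismatch at the difference vector
        w -= lr * D * T                  # mismatch (delta-rule) learning
    return w
```

After sampling the workspace, w converges to 1, so D = T - P vanishes at posture and the two vectors are calibrated to represent the same positions.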
  • image p413fig12.10 Processing stages in cortical areas 4 and 5 whereby the VITE model combines outflow VITE trajectory formation signals with inflow signals from the spinal cord and cerebellum that enable it to carry out movements with variable loads and in the presence of obstacles. See the text for details.
    || area 4 (rostral) <-> area 5 (caudal).
  • image p414fig12.11 Neurophysiological data from cortical areas 4 and 5 (every other column) and simulations thereof (other columns) during a reach.
    || activation vs time. (a) area 4 phasic RT (IFV) (b) area 4 tonic (OPV) (c) area 4 phasic-tonic (OFPV) (d) area 4 phasic MT (DVV) (e) area 5 phasic (DV) (f) area 5 tonic (PPV)
  • image p415fig12.12 The combined VITE, FLETE, cerebellar, and multi-joint opponent muscle model for trajectory formation in the presence of variable forces and obstacles.
    ||
  • image p416fig12.13 The DIRECT model learns, using a circular reaction that is energized by an Endogenous Random Generator, or ERG, to make motor-equivalent volitionally-activated reaches. This circular reaction learns a spatial representation of a target in space. It can hereby make accurate reaches with clamped joints and on its first try using a tool under visual guidance; see Figure 12.16.
    || DIRECT model (Bullock, Grossberg, Guenther 1993). learns by circular reaction. learns spatial representation to mediate between vision and action. motor-equivalent reaching. can reach target with clamped joints. can reach target with a TOOL on the first try under visual guidance. How did tool use arise?!
  • image p416fig12.14 Computer simulations of DIRECT reaches with (b) a tool, (c) a clamped elbow, and (d) with a blindfold, among other constraints.
    || Computer simulations of DIRECT reaches [unconstrained, with TOOL, elbow clamped at 140°, blindfolded]
  • image p417fig12.15 The DIRECT and DIVA models have homologous circuits to learn and control motor-equivalent reaching and speaking, with tool use and coarticulation as resulting properties. See the text for why.
    || From Seeing and Reaching to Hearing and Speaking, Circular reactions (Piaget 1945, 1951, 1952). Homologous circuits for development and learning of motor-equivalent REACHING and SPEAKING. DIRECT TOOL use (Bullock, Grossberg, Guenther 1993), DIVA Coarticulation (Guenther 1995)
  • image p418fig12.16 Anatomical interpretations of the DIVA model processing stages.
    || [Feedforward control system (FF), Feedback control subsystem (FB)]. Speech sound map (Left Ventral Premotor Cortex (LVPC)), Cerebellum, Articulatory velocity and position maps (Motor Cortex (MC)), Somatosensory Error Map (Inferior Parietal Cortex (IPC)), Auditory Error Map (Superior Temporal Cortex (STC)), Auditory State Map (Superior Temporal Cortex), Somatosensory State Map (Inferior Parietal Cortex), articulatory musculature via subcortical nuclei, auditory feedback via subcortical nuclei
  • image p419fig12.17 The auditory continuity illusion illustrates the ART Matching Rule at the level of auditory streaming. Its "backwards in time" effect of future context on past conscious perception is a signature of resonance.
    || Auditory continuity illusion. input, percept. Backwards in time - How does a future sound let past sound continue through noise? Resonance! - It takes a while to kick in. After it starts, a future tone can maintain it much more quickly. Why does this not happen if there is no noise? - ART Matching Rule! TD harmonic filter is modulatory without BU input. It cannot create something out of nothing.
  • image p420fig12.18 The ARTSTREAM model explains and simulates the auditory continuity illusion as an example of a spectral-pitch resonance. Interactions of ART Matching Rule and asymmetric competition mechanisms in cortical strip maps explain how the tone selects the consistent frequency from the noise in its own stream while separating the rest of the noise into another stream.
    || ARTSTREAM model (Grossberg 1999; Grossberg, Govindarajan, Wyse, Cohen 2004). SPINET. Frequency and pitch strips. Bottom Up (BU) harmonic sieve. Top Down (TD) harmonic ART matching. Exclusive allocation. Learn pitch categories based on early harmonic processing. A stream is a Spectral-Pitch Resonance!
  • image p422fig12.19 The ARTSTREAM model includes mechanisms for deriving streams both from pitch and from source direction. See the text for details.
    || [left, right] cart Peripheral processing = [input signal-> outer & middle ear preemphasis-> basilar membrane gammatone filterbank-> energy measure]. Spectral stream layer-> spectral summation layer-> delays-> [f-, tau] plane-> pitch stream layer-> pitch summation layer.
  • image p423fig12.20 The Spatial Pitch Network, or SPINET, model shows how a log polar spatial representation of the sound frequency spectrum can be derived from auditory signals occurring in time. The spatial representation allows the ARTSTREAM model to compute spatially distinct auditory streams.
    || SPINET model (Spatial Pitch Network) (Cohen, Grossberg, Wyse 1995). 1. input sound 2. Gamma-tone filter bank 3. Short-term average energy spectrum 4. MAP transfer function 5. On-center off-surround and rectification 6. Harmonic weighting 7. Harmonic summation and competition -> PITCH
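Steps 6-7 of the list above amount to collecting spectral energy across each candidate pitch's harmonics and letting the candidates compete. A toy sketch with equal harmonic weights (the model uses a tapered weighting); `pitch_estimate` is an invented name.

```python
def pitch_estimate(spectrum, candidates, n_harmonics=5):
    """spectrum: dict frequency (Hz) -> energy; candidates: possible pitches (Hz)."""
    def strength(f0):
        # harmonic summation: energy collected at the first n harmonics of f0
        return sum(spectrum.get(k * f0, 0.0) for k in range(1, n_harmonics + 1))
    # competition: the strongest candidate wins the pitch layer
    return max(candidates, key=strength)
```

The sketch reproduces the missing-fundamental percept: a spectrum containing energy only at 400, 600, and 800 Hz is assigned a 200 Hz pitch, since 200 Hz collects three of its harmonics while every rival collects fewer.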
  • image p424fig12.21 One of the many types of data about pitch processing that are simulated by the SPINET model. See the text for details.
    || Pitch shifts with component shifts (Patterson, Wightman 1976; Schouten 1962). Pitch vs lowest harmonic number.
  • image p424fig12.22 Decomposition of a sound (bottom row) in terms of three of its harmonics (top three rows).
    ||
  • image p425fig12.23 ARTSTREAM simulations of the auditory continuity illusion and other streaming properties (left column, top row). When two tones are separated by silence (Input), a percept of silence also separates them in a spectral-pitch resonance. (left column, bottom row). When two tones are separated by broadband noise, the percept of tone continues through the noise in one stream (stream 1) while the remainder of the noise occurs in a different stream (stream 2). (right column) Some of the other streaming properties that have been simulated by the ARTSTREAM model.
    || Auditory continuity does not occur without noise. Auditory continuity in noise. Other simulated streaming data.
  • image p426fig12.24 Spectrograms of /ba/ and /pa/ show their transient and sustained parts.
    ||
  • image p428fig12.25 (left architecture) Auditory-articulatory feedback loop whereby babbled sounds activate learning in an imitative map that is later used to learn to reproduce the sounds of other speakers. An articulatory-to-auditory expectation renders learning possible by making the auditory and motor data dimensionally consistent, as in the motor theory of speech. (right architecture) Parallel streams in the ARTSPEECH model for learning speaker-independent speech and language meaning, including a mechanism for speaker normalization (right cortical stream) and for learning speaker-dependent vocalic qualities (left cortical stream).
    || left: Speaker-dependent vocalic qualities; right: Speaker-independent speech and language meaning
  • image p430fig12.26 The NormNet model shows how speaker normalization can be achieved using specializations of the same mechanisms that create auditory streams. See the text for how.
    || [Anchor vs Stream] log frequency map. -> diagonals-> Speaker-independent acoustic item information-> [BU adaptive filter, TD learned expectation]-> learned item recognition categories
  • image p431fig12.27 The strip maps that occur in ARTSTREAM and NormNet are variants of a cortical design that also creates ocular dominance columns in the visual cortex.
    || Adult organization of V1 (Grinvald etal http://www.weizmann.ac.il/brain/images/cubes.html). (1) Ocular dominance columns (ODCs): Alternating strips of cortex respond preferentially to visual inputs of each eye (R/L corresponds to Right and Left eye inputs in the figure); (2) Orientation columns: A smooth pattern of changing orientation preference within each ODC. Organized in a pinwheel like fashion.
  • image p432fig12.28 (left image) The SpaN model simulates how spatial representations of numerical quantities are generated in the parietal cortex. (right image) Behavioral numerosity data and SpaN model simulations of it.
    || (Left) preprocessor-> spatial number map-> Comparison wave. (Right) data axis: number of lever presses; model axis: node position in the spatial number axis
  • image p433fig12.29 Learning of place-value number maps associates language categories in the What cortical stream with numerical strip maps in the Where cortical stream. See the text for details.
    || (1) spoken word "seven"-> (2) What processing stream- learned number category <-> (3) What-Where learned associations <- (4) Where processing stream- spatial number map <- (5) visual cues of seven objects
  • image p436fig12.30 The conscious ARTWORD, or cARTWORD, laminar cortical speech model simulates how future context can disambiguate noisy past speech sounds in such a way that the completed percept is consciously heard to proceed from past to future as a feature-item-list resonant wave propagates through time.
    || cARTWORD: Laminar cortical model macrocircuit (Grossberg, Kazerounian 2011) Simulates PHONEMIC RESTORATION: Cognitive Working Memory (processed item sequences)- [Excitatory-> inhibitory-> habituative-> adaptive filter-> adaptive filter-> adaptive filter with depletable synapse]-> Acoustic [item, feature]
  • image p436fig12.31 Working memories do not store longer sequences of events in the correct temporal order. Instead, items at the beginning and end of the list are often recalled first, and with the highest probability.
    || Working memory. How to design a working memory to code "Temporal Order Information" in STM before it is stored in LTM. Speech, language, sensory-motor control, cognitive planning. eg repeat a telephone number unless you are distracted first. Temporal order STM is often imperfect, eg Free Recall. [probability, order] of recall vs list position. WHY?
  • image p437fig12.32 Data from a free recall experiment illustrate the bowed serial position curve.
    || Serial position function for free recall Data: (Murdock 1962 JEP 64, 482-488). % correct vs position of word on a 40-word list. Primacy gradient can be a mixture of STM and LTM read-out.
  • image p437fig12.33 Item and Order working memory models explain free recall data, as well as many other psychological and neurobiological data, by simulating how temporal series of events are stored as evolving spatial patterns of activity at content-addressable item categories. The categories with the largest activities are rehearsed first, and self-inhibit their activity as they do so in order to prevent them from being rehearsed perseveratively. The laws whereby the items are stored in working memory obey basic design principles which ensure that list categories, or chunks, of sequences of stored items can be stably remembered.
    || Working memory models: item and order, or competitive queuing (Grossberg 1978; Houghton 1990; Page, Norris 1998). Event sequence in time stored as an evolving spatial pattern of activity. Primacy gradient of working memory activation stores correct temporal order at content-addressable cells. The maximally activated cell population is performed next when a rehearsal wave is turned on. Output signal from chosen cell population inhibits its own activity to prevent perseveration: inhibition of return. Iterate until entire sequence is performed.
  • image p438fig12.34 The LTM Invariance Principle insists that words being stored in working memory for the first time (eg MYSELF) do not cause catastrophic forgetting of the categories that have already been learned for their subwords (eg MY, SELF, and ELF) or other subset linguistic groups.
    || LTM invariance principle. unfamiliar STM -> LTM familiar. How does STM storage of SELF influence STM storage of MY? It should not recode LTM of either MY or SELF!
  • image p439fig12.35 The Normalization Rule insists that the total activity of stored items in working memory has an upper bound that is approximately independent of the number of items that are stored.
    || Normalization Rule (Grossberg 1978). Total STM activity has a finite bound independent of the number of items (limited capacity of STM). Activity vs Items for [slow, quick] asymptotic energy growth.
  • image p439fig12.36 (1) Inputs to Item and Order working memories are stored by content-addressable item categories. (2) The relative activities of the item categories code the temporal order of performance. (3) In addition to excitatory recurrent signals from each working memory cell (population) to itself, there are also inhibitory recurrent signals to other working memory cells, in order to solve the noise-saturation dilemma. (4) A nonspecific rehearsal wave allows the most active cell to be rehearsed first. (5) As an item is being rehearsed, it inhibits its own activity using a feedback inhibitory interneuron. Perseverative performance is hereby prevented.
    || Item and order working memories. (1) Content-addressable item codes (2) Temporal order stored as relative sizes of item activities (3) Competition between working memory cells: Competition balances the positive feedback that enables the cells to remain active. Without it, cell activities may all saturate at their maximal values-> Noise-saturation dilemma again! (4) Read-out by nonspecific rehearsal wave- Largest activity is the first out (5) STM reset self-inhibition prevents perseveration: [input/self-excitatory, rehearsal wave]-> [output, self-inhibition]
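The storage and readout cycle summarized in this caption can be sketched in a few lines. This is a minimal illustration under stated assumptions (arbitrary item names, a fixed activity decrement), not the model's shunting differential equations:

```python
# Item-and-Order working memory sketch: temporal order is coded by relative
# activity sizes (a primacy gradient); a nonspecific rehearsal wave reads out
# the most active item, which then self-inhibits to prevent perseveration.

def store_primacy_gradient(items, decrement=0.1):
    """Store items with a primacy gradient: earlier items are more active."""
    return {item: 1.0 - i * decrement for i, item in enumerate(items)}

def rehearse(activities):
    """Repeatedly perform the most active item, then self-inhibit it."""
    wm = dict(activities)
    performed = []
    while wm:
        winner = max(wm, key=wm.get)  # rehearsal wave: largest activity wins
        performed.append(winner)
        del wm[winner]                # STM reset: self-inhibition of the winner
    return performed

order = rehearse(store_primacy_gradient(["A", "B", "C", "D"]))
# A primacy gradient is recalled in the stored (forward) order.
```

With a bowed activity gradient instead of a primacy gradient, the same readout rule recalls items near the ends of the list first, as in free recall data.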
  • image p440fig12.37 Simulation of a primacy gradient for a short list (left image) being transformed into a bowed gradient for a longer list (right image). Activities of cells that store the longer list are smaller due to the Normalization Rule, which follows from the shunting inhibition in the working memory network.
    || Primacy bow as more items stored. Activities vs list position: (Left) Primacy gradient, 6 items; (Right) Bowed gradient, 20 items.
  • image p441fig12.38 The LTM Invariance Principle is realized if the relative sizes of the inputs to the list chunk level stay the same as more items are stored in working memory. This property, in turn, follows from shunting previously stored working memory activities when a new item occurs.
    || LTM Invariance principle. Choose STM activities so that newly stored STM activities may alter the size of old STM activities without recoding their LTM patterns. In particular: New events do not change the relative activities of past event sequences, but may reduce their absolute activities. Why? Bottom-up adaptive filtering uses dot products: T(j) = sum[i=1 to n: x(i)*z(i,j)] = total input to v(j). The relative sizes of inputs to coding nodes v(j) are preserved. x(i) -> w*x(i), 0 < w <= 1, leaves all past ratios T(j)/T(k) unchanged.
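The dot-product argument above can be checked numerically. The activities and adaptive weights below are arbitrary illustrative values:

```python
# Check that shunting x(i) -> w*x(i) preserves the ratios T(j)/T(k) of
# bottom-up dot-product inputs, as the LTM Invariance Principle requires.

def bottom_up_input(x, z_j):
    """T(j) = sum_i x(i) * z(i,j): total input to list chunk v(j)."""
    return sum(xi * zij for xi, zij in zip(x, z_j))

x = [0.9, 0.6, 0.3]                      # stored STM activities x(i)
z = [[0.2, 0.7, 0.1], [0.5, 0.1, 0.9]]   # adaptive weights z(i,j) to v1, v2
w = 0.5                                  # shunt caused by a newly stored item

T = [bottom_up_input(x, zj) for zj in z]
T_shunted = [bottom_up_input([w * xi for xi in x], zj) for zj in z]

ratio_before = T[0] / T[1]
ratio_after = T_shunted[0] / T_shunted[1]
# Every T(j) is scaled by the same factor w, so the ratios are unchanged.
```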
  • image p442fig12.39 (left column, top row) How a shunt plus normalization can lead to a bow in the stored working memory spatial pattern. Time increases in each row as every item is stored with activity 1 before it is shunted by w due to each successive item's storage, and the total working memory activity in each row is normalized to a total activity of 1. (right column, top row) When the working memory stored pattern is shunted sufficiently strongly (w > 1/2), then the pattern bows at position 2 in the list as more items are stored through time. (left column, bottom row) LTM invariance can be generalized to consider arbitrary amounts of attention u being paid when the i_th item is stored, with an arbitrary amount of shunting w(j) to the j_th item. (right column, bottom row) The Normalization Rule can also be generalized to approach the maximum possible normalized total activity that is stored across all the working memory cells at different rates.
    || Shunt normalization -> STM bow. (topLeft) Algebraic working memory (Grossberg 1978) (topRight) Strong inhibition of new inputs by stored STM items. Bow at position 2. Can we classify all working memory codes of this type? Yes! (bottomLeft) 1. LTM invariance principle (bottomRight) 2. Normalization Rule (Kahneman, Beatty 1966)
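One simple algebraic instantiation of the storage scheme in the top row (parameter values are illustrative; the generalized attention u and shunts w(j) of the bottom row are omitted): the total stored activity is kept normalized at 1, previously stored items jointly retain the fraction w when a new item arrives, and the new item receives the remainder 1 - w. For w > 1/2 the stored pattern bows at position 2, as the caption states:

```python
# Algebraic working-memory sketch: shunt old activities by w, give the new
# item the normalized remainder, so total activity stays at 1 (Normalization
# Rule) and, for w > 1/2, a bow appears at list position 2.

def store_sequence(n_items, w):
    """Return the normalized activity pattern after n_items are stored."""
    x = []
    for _ in range(n_items):
        x = [w * xi for xi in x]             # shunt previously stored items
        x.append(1.0 if not x else 1.0 - w)  # first item enters an empty WM
    return x

pattern = store_sequence(6, w=0.6)
# Total activity is normalized to 1, and the minimum activity falls at
# position 2: pattern[0] > pattern[1] < pattern[-1].
```

With w < 1/2 the same rule yields a pure recency gradient; primacy gradients for short lists require the generalized attentional parameters discussed in the text.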
  • image p442fig12.40 Given the hypothesis in Figure 12.39 (right column, bottom row) and a generalized concept of steady, albeit possibly decreasing, attention to each item as it is stored in working memory, only a primacy, or bowed gradient of activity across the working memory items can be stored.
    || LTM Invariance + Normalization. (... given conditions ...) Then the x(i) can ONLY form: [primacy gradient, recency gradient, unimodal bow]
  • image p443fig12.41 Neurophysiological data from the Averbeck etal sequential copying experiments show the predicted primacy gradient in working memory and the self-inhibition of activity as an item is stored. When only the last item remains stored, it has the highest activity because it has been freed from inhibition by earlier items.
    || Neurophysiology of sequential copying
  • image p444fig12.42 The LIST PARSE laminar cortical model of working memory and list chunking that I published with Lance Pearson in 2008 simulated the Averbeck etal data in Figure 12.41, as in the left column of the figure. It also simulated cognitive data about working memory storage by human subjects. See the text for details.
    || LIST PARSE: Laminar cortical model of working memory and list chunking (Grossberg, Pearson 2008). Simulates data about: [immediate, delayed, continuous] distractor free recall; immediate serial recall; and variable-speed sequential performance of motor acts. [velocity, acceleration] vs time (ms) from recall cue.
  • image p445fig12.43 The LIST PARSE laminar cortical Cognitive Working Memory circuit, that is proposed to occur in ventrolateral prefrontal cortex, is homologous to the LAMINART circuit that models aspects of how visual cortex sees. The Motor Working Memory, VITE Trajectory Generator, and Variable-Rate Volitional Control circuits model how other brain regions, including dorsolateral prefrontal cortex, motor cortex, cerebellum, and basal ganglia, interact with the Cognitive Working Memory to control working memory storage and variable-rate performance of item sequences.
    || LIST PARSE circuit diagram. Connectivity convention. sequence chunks [<- BU filter, TD expectation ->] working memory. Working memory and sequence chunking circuit is homologous to visual LAMINART circuit!
  • image p446fig12.44 (left column, top row) LIST PARSE can model linguistic data from human subjects. In this figure, model parameters are fixed to enable a close fit to data about error-type distributions in immediate free recall experiments, notably transposition errors. (right column, top row) Simulation and data showing bowing of the serial position curve, including an extended primacy gradient. (left column, bottom row) The simulation curve overlays data about list length effects, notably the increasing recall difficulty of longer lists during immediate serial recall (ISR). (right column, bottom row) Simulation (bottom image) and data (top image) of the limited temporal extent for recall.
    || (1. TL) Error-type distributions in immediate serial recall (Hanson etal 1996). % occurrence vs serial position. Graph convention: Data- dashed lines; Simulations- solid lines. Six-letter visual ISR. Order errors- transpositions of neighboring items are the most common. Model explanation: Noisy activation levels change relative order in primacy gradient. Similar activation of neighboring items most susceptible to noise. Model parameters fitted on these data. (2. TR) Bowing of serial position curve (Cowan etal 1999). % correct vs serial position. Auditory ISR with various list lengths (graphs shifted rightward): For [, sub-]span lists- extended primacy, with one (or two) item recency; Auditory presentation- enhanced performance for last items. LIST PARSE: End effects- first and last items have half as many members; Echoic memory- last presented item retained in separate store. (3. BL) List length effects, circles (Crannell, Parrish 1968), squares (Baddeley, Hitch 1975), solid line- simulation. % list correct vs list length. Variable list length ISR: longer lists are more difficult to recall. LIST PARSE: More items- closer activation levels and lower absolute activity level with enough inputs; Noise is more likely to produce order errors; Activity levels more likely to drop below threshold. (4. BR) Limited temporal extent for recall (Murdock 1961). % recalled vs retention interval (s). ISR task with distractor-filled retention intervals (to prevent rehearsal): Increasing retention interval- decreases probability of recalling list correctly; Load dependence- longer lists more affected by delays; Performance plateau- subjects reach apparent asymptote. LIST PARSE: Increased convergence of activities with time; loss of order information.
  • image p447fig12.45 (left column) LIST PARSE simulations of the proportion of order errors as a function of serial position for 6 item lists with (a) an extended pause of 7 time units between the third and fourth items, and (b) pauses of 5 time units (solid curve) and 10 time units (dashed curve) between all items. (right column) Simulations (solid curves) and data (dashed curves) illustrating close model fits in various immediate free recall tasks.
    || (Left) Temporal grouping and presentation variability. Temporal grouping: Inserting an extended pause leads to inter-group bowing; Significantly different times of integration and activity levels across pause, fewer interchanges. (Right) Immediate free recall, and [delayed, continuous] distractor free recall. Overt rehearsal IFR task with super-span (ie 20 item) lists: Extended recency- even more extended with shorter ISIs; Increased probability of recall with diminished time from last rehearsal; Early items in list rehearsed most. LIST PARSE (unique) for long lists: Incoming items form recency gradient; Rehearsal (re-presentation) based upon level of activity.
  • image p448fig12.46 A Masking Field working memory is a multiple-scale self-similar recurrent shunting on-center off-surround network. It can learn list chunks that respond selectively to lists of item chunks of variable length that are stored in an item working memory at the previous processing stage. Chunks that code for longer lists (eg MY vs MYSELF) are larger, and give rise to stronger recurrent inhibitory neurons (red arrows).
    || How to code variable length lists? MASKING FIELDS code list chunks of variable length (Cohen, Grossberg 1986, 1987; Grossberg, Kazerounian 2011, 2016; Grossberg, Myers 2000; Grossberg, Pearson 2008). Multiple-scale self-similar WM: Masking field, adaptive filter. Variable length coding- Masking fields select list chunks that are sensitive to WM sequences of variable length; Selectivity- Larger cells selectively code longer lists; Asymmetric competition- Larger cells can inhibit smaller cells more than conversely: Magic Number 7! Temporal order- different list chunks respond to the same items in different orders eg LEFT vs FELT.
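The selectivity and asymmetric-competition properties listed above can be caricatured in a few lines. The chunk names, the all-items match rule, and the size-proportional weighting are illustrative stand-ins for the masking field's shunting dynamics, and temporal order (LEFT vs FELT) is ignored here:

```python
# Masking-field caricature: each list chunk fires only when all of its items
# are in working memory; self-similar scaling lets the chunk coding the
# longest stored list (MYSELF) mask its sublist chunks (MY, SELF).

CHUNKS = {"MY": ["M", "Y"],
          "SELF": ["S", "E", "L", "F"],
          "MYSELF": ["M", "Y", "S", "E", "L", "F"]}

def chunk_activation(stored_items, chunk_items):
    """Zero unless the whole chunk is matched; larger chunks weigh more."""
    if not all(item in stored_items for item in chunk_items):
        return 0.0
    return float(len(chunk_items))  # larger scale -> larger net activation

def winning_chunk(stored_items):
    """The most activated chunk masks the others."""
    acts = {name: chunk_activation(stored_items, items)
            for name, items in CHUNKS.items()}
    return max(acts, key=acts.get)

winning_chunk(list("MY"))       # the sublist chunk wins for a short list
winning_chunk(list("MYSELF"))   # the longest matched chunk masks MY and SELF
```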
  • image p449fig12.47 This figure illustrates the self-similarity in a Masking Field of both its recurrent inhibitory connections (red arrows) and its top-down excitatory priming signals (green arrows) to the item chunk working memory.
    || Both recurrent inhibition and top-down excitatory priming are self-similar in a masking field. MYSELF <-> [MY, MYSELF]
  • image p452fig12.48 (left column) In experiments of (Repp etal 1978), the silence duration between the words GRAY and SHIP was varied, as was the duration of the fricative noise in S, with surprising results. (right column) The red arrow directs our attention to surprising perceptual changes as silence and noise durations increase. See the text for details.
    || Perceptual integration of acoustic cues, data (Repp etal 1978). GRAY-> silence duration-> SHIP (noise duration from start of word). Noise duration vs silence duration: GRAY SHIP <-> [GREAT SHIP <-> GRAY CHIP] <-> GREAT CHIP.
  • image p453fig12.49 The ARTWORD model that I published in 2000 with my PhD student Christopher Myers simulates data such as the (Repp etal 1978) data in Figure 12.48. See the text for details.
    || ARTWORD model (Grossberg, Myers 2000). Input phonetic features-> Phonemic item working memory-> Masking Field unitized lists-> Automatic gain control-> Phonemic item working memory. [habituative gate, adaptive filter]s.
  • image p453fig12.50 The ARTWORD perception cycle shows how sequences of items activate possible list chunks, which compete among each other and begin to send their top-down expectations back to the item working memory. An item-list resonance develops through time as a result.
    || ARTWORD perception cycle. (a) bottom-up activation (b) list chunk competition (c) item-list resonance (d) chunk reset due to habituative collapse.
  • image p454fig12.51 (left column) Even as a resonance with the list chunk GRAY begins to develop, if the delay between "gray" and "chip" is increased, greater habituation of this resonance may allow the GREAT chunk to begin to win, thereby smoothly transferring the item-list resonance from GRAY to GREAT through time. (right column) Simulation of a resonant transfer from GRAY to GREAT, and back again, as the silence interval between the words "gray" and "chip" increases. The red region between the GRAY and GREAT curves calls attention to when GREAT wins. See the text for details.
    || Resonant transfer, as silence interval increases. (left) Delay: GRAY resonance weakens. A delayed additional item can facilitate perception of a longer list. (right) GRAY-> GREAT-> GRAY.
  • image p455fig12.52 Simulation of cARTWORD dynamics in response to the complete list /1/-/2/-/3/. The relevant responses are surrounded by a red box.
    || Presentation of a normal sequence: input /1/-/2/-/3/. |c(i,1)-5| vs time (msec). List chunks select most predictive code. Order stored in WM layers. Resonant activity of /1/-/2/-/3/ in item and feature layers corresponds to conscious speech percept.
  • image p456fig12.53 Simulation of cARTWORD dynamics in response to the partial list /1/-silence-/3/ with /2/ replaced by silence. Only the representations of these items can be seen in the red box.
    || Presentation with silence duration: input /1/-silence-/3/. |c(i,1)-5| vs time (msec). List chunks select most predictive code. Order stored in WM layers. Gap in resonant activity of /1/-silence-/3/ in item and feature layers corresponds to perceived silence.
  • image p456fig12.54 Item /2/ is restored in the correct list position in response to the list /1/-noise-/3/.
    || Presentation with noise: input /1/-noise-/3/. |c(i,1)-5| vs time (msec). List chunks select the most predictive code. Order restored in WM layers. Resonant activity of /1/-/2/-/3/ in item and feature layers corresponds to restoration of item /2/ replaced by noise in input.
  • image p457fig12.55 Item /4/ is restored in the correct list position in response to the list /1/-noise-/5/. This and the previous figure show how future context can disambiguate past noisy sequences that are otherwise identical.
    || Presentation with noise: input /1/-noise-/5/. |c(i,1)-5| vs time (msec). List chunks select the most predictive code. Order restored in WM layers. Resonant activity of /1/-/4/-/5/ in item and feature layers corresponds to restoration of item /4/ replaced by noise in input.
  • image p459fig12.56 (Grossberg, Pearson 2008) proposed that the ability of working memories to store repeated items in a sequence represents rank information about the position of an item in a list using numerical hypercolumns in the prefrontal cortex (circles with numbered sectors: 1,2,3,4). These numerical hypercolumns are conjointly activated by inputs from item categories and from the analog spatial representation of numerosity in the parietal cortex. These parietal representations (overlapping Gaussian activity profiles that obey a Weber Law) had earlier been modeled by (Grossberg, Repin 2003). See the text for details.
    || Item-order-rank working memory, rank information from parietal numerosity circuit (Grossberg, Pearson 2008; Grossberg, Repin 2003). [Sensory working memory-> adaptive filter-> list chunk-> attentive prime-> Motor working memory]-> [large, small] numbers-> transfer functions with variable thresholds and slopes-> uniform input-> integrator amplitude-> number of transient sensory signals.
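A toy version of item-order-rank storage (the data layout and activity decrement are illustrative; the numerical hypercolumns and Weber-law Gaussians of the model are not simulated): pairing each item category with a rank lets a working memory store and correctly recall sequences that contain repeated items:

```python
# Item-order-rank sketch: (item, rank) pairs are stored with a primacy
# gradient, so a repeated item such as the two A's in A-B-A occupies two
# distinct rank-tagged cells and recall preserves the repeats.

def store_item_order_rank(sequence, decrement=0.05):
    """Store (item, rank) pairs with a primacy gradient of activity."""
    return {(item, rank): 1.0 - rank * decrement
            for rank, item in enumerate(sequence)}

def recall(wm):
    """Rehearse the most active (item, rank) cell, then self-inhibit it."""
    wm = dict(wm)
    out = []
    while wm:
        item, rank = max(wm, key=wm.get)
        out.append(item)
        del wm[(item, rank)]
    return out

recall(store_item_order_rank(["A", "B", "A"]))  # repeats are preserved
```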
  • image p460fig12.57 The lisTELOS architecture explains and simulates how sequences of saccadic eye movement commands can be stored in a spatial working memory and recalled. Multiple brain regions are needed to coordinate these processes, notably three different basal ganglia loops to regulate saccade storage, choice, and performance, and the supplementary eye fields (SEF) to choose the next saccadic command from a stored sequence. Because all working memories use a similar network design, this model can be used as a prototype for storing and recalling many other kinds of cognitive, spatial, and motor information. See the text for details.
    || lisTELOS model- Spatial working memory (Silver, Grossberg, Bullock, Histed, Miller 2011). Simulates how [PPC, PFC, SEF, FEF, SC] interact with 3 BG loops to learn and perform sequences of saccadic eye movements.
  • image p461fig12.58 The lisTELOS model built upon key processes that were earlier modeled by the TELOS model. See the text for details.
    || TELOS model (Brown, Bullock, Grossberg 1999, 2004). shows [BG nigro-[thalamic, collicular], FEF, ITa, PFC, PNR-THAL, PPC, SEF, SC, V1, V4/ITp, Visual Cortex input] and [GABA].
  • image p462fig12.59 The TELOS model clarifies how reactive vs. planned eye movements may be properly balanced against one another, notably how a fast reactive movement is prevented from occurring in response to onset of a cue that requires a different, and more contextually appropriate, response, even if the latter response takes longer to be chosen and performed. The circuit explains how "the brain knows it before it knows" what this latter response should be by changing the balance of excitation to inhibition in the basal ganglia (BG) so that the reactive gate stays shut until the correct target position can be chosen by a frontal-parietal resonance.
    || Balancing reactive vs. planned movements (Brown, Bullock, Grossberg 2004). (a) shows [FEF, PPC]-> [BG, SC], and BG-> SC. (b) FTE vs time (msec) for [fixation, saccade, overlap, gap, delayed saccade] tasks.
  • image p463fig12.60 Rank-related activity in prefrontal cortex and supplementary eye fields from two different experiments. See the text for details.
    || Rank-related activity in PFC and SEF. Prefrontal cortex (Averbeck etal 2003) [square, inverted triangle]. Supplementary eye field (Isoda, Tanji 2002).
  • image p464fig12.61 (left column) A microstimulating electrode causes a spatial gradient of habituation. (right column) The spatial gradient of habituation that is caused by microstimulation alters the order of saccadic performance of a stored sequence, but not which saccades are performed, using interactions between the prefrontal cortex (PFC) working memory and the supplementary eye field (SEF) saccadic choice.
    || (left) Microstimulation causes habituation (Grossberg 1968). Stimulation caused habituation. Cells close to the stimulation site habituate most strongly. (right) Stimulation biases selection PFC-> SEF-> SC. PFC Activity gradient in working memory, SEF Microstimulation causes habituation, During selection habituated nodes are less likely to win this competition.
  • image p464fig12.62 The most habituated positions have their neuronal activities most reduced, other things being equal, as illustrated by the gradient from deep habituation (red) to less habituation (pink). The saccadic performance orders (black arrows) consequently tend to end in the most habituated positions that have been stored.
    || The most habituated position is foveated last. For each pair of cues, the cue closest to the stimulation site is most habituated -- and least likely to be selected. Because stimulation spreads in all directions, saccade trajectories tend to converge.
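The qualitative effect in this figure can be sketched as follows (all positions, spreads, and gains are illustrative numbers, not fits to the microstimulation data):

```python
# Habituation-biased saccade order sketch: microstimulation habituates cells
# near the stimulation site most strongly; since selection picks the least
# habituated stored target next, the most habituated target is foveated last.

def habituation(target_pos, stim_pos, spread=2.0):
    """Habituation falls off linearly with distance from the electrode."""
    return max(0.0, 1.0 - abs(target_pos - stim_pos) / spread) * 0.5

def performance_order(targets, stim_pos, stored_activity=1.0):
    """Read out targets by effective activity, with self-inhibition."""
    effective = {t: stored_activity * (1.0 - habituation(t, stim_pos))
                 for t in targets}
    order = []
    while effective:
        winner = max(effective, key=effective.get)
        order.append(winner)
        del effective[winner]  # self-inhibition after each saccade
    return order

# Targets at positions 0, 1, 3 with stimulation at position 0: the target
# nearest the electrode is habituated most and performed last.
performance_order([0, 1, 3], stim_pos=0)
```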
  • image p465fig12.63 Neurophysiological data (left image) and lisTELOS simulation (right image) showing how microstimulation biases saccadic performance order but not the positions to which the saccades will be directed. See the text for details.
    || Saccade trajectories converge to a single location in space. Microstimulation biased selection so saccade trajectories converged toward a single location in space. [Data, model] contra <-> Ipsi (msec)
  • image p467fig12.64 Some of the auditory cortical regions that respond to sustained or transient sounds. See text for details.
    || Some auditory cortical regions. Core <-> belt <-> parabelt. [Belt, Core, ls, PAi, Parabelt, PGa, TAs, TE, TP, TPO, st s].
  • image p468fig12.65 Linguistic properties of the PHONET model and some of the data that it simulates. The upper left image summarizes the asymmetric transient-to-sustained gain control that helps to create invariant intraword ratios during variable-rate speech. The lower left image summarizes the rate-dependent gain control of the ARTPHONE model that creates rate-invariant working memory representations in response to sequences of variable-rate speech. The right image summarizes the kind of paradoxical VC-CV category boundary data of (Repp 1980) that ARTPHONE simulates. See the text for details.
    || (left upper) [transient, sustained] [working memory, filter, category]. (left lower) phone inputs-> [input rate estimate, features], Features w <- habituative transmitter gates -> categories-> rate invariant phonetic output, input rate estimate-> gain control-> [features, categories] rate-dependent integration of categories and features. (right) % 2-stop vs VC-CV silent interval (msec): [ib-ga, ib-ba, iga, iba].
  • image p469fig12.66 (left column) A schematic of how preserving relative duration, as in the first and third images, of consonant and vowel pairs can preserve a percept, in this case of /ba/, but not doing so, as in the first and second images, can cause a change in percept, as from /ba/ to /wa/, as in the data of (Miller, Liberman 1979) that PHONET simulates. (right column) Changing frequency extent can also cause a /ba/ - /wa/ transition, as shown in data of (Schwab, Sawusch, Nusbaum 1981) that PHONET also simulates.
    || (left image) Maintaining relative duration as speech speeds up preserves percept (Miller, Liberman 1979). frequency vs time- [/ba/, /wa/, /ba/] (right image) Changing frequency extent causes /ba/-/wa/ transition (Schwab, Sawusch, Nusbaum 1981). frequency vs time- [/ba/, /wa/] Dt extent.
  • image p469fig12.67 PHONET contains transient and sustained cells that respond to different kinds of sounds, notably the transients of certain consonants and the sustained sounds of certain vowels. It then uses the transient working memory to gain-control the integration rate of the sustained working memory to which these different detectors input.
    || Phonetic model summary. (left) Acoustic tokens [consonant, vowel]. (middle) Acoustic detectors [transient (sensitive to rate), Sustained (sensitive to duration)]. (right) Working memory, Spatially stored transient pattern (extent) + gain control-> spatially stored sustained pattern.
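The gain-control idea can be illustrated with a generic leaky integrator (a sketch of the rate-invariance property, not PHONET's actual equations): if the transient channel scales the sustained channel's integration rate g in proportion to speech rate, the activity stored at the end of a sound is unchanged when speech speeds up:

```python
# Rate-invariance sketch: doubling the input rate (halving the sound's
# duration) while doubling the integration gain g leaves the final stored
# sustained activity essentially unchanged.

def integrate(g, duration, dt=1e-4):
    """Euler-integrate the leaky integrator dx/dt = g*(-x + u), u = 1
    while the sound is on, starting from x = 0."""
    x, t = 0.0, 0.0
    while t < duration:
        x += dt * g * (-x + 1.0)
        t += dt
    return x

slow = integrate(g=1.0, duration=1.0)  # slow speech, baseline gain
fast = integrate(g=2.0, duration=0.5)  # doubled rate, doubled gain
# slow and fast agree closely: the stored pattern is rate invariant.
```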
  • image p471fig12.68 A mismatch reset of /b/ in response to the /g/ in [ib]-[ga] can rapidly shut off the [ib] percept, leading to the percept of [ga] after an interval of silence. In contrast, resonant fusion of the two occurrences of /b/ in [ib]-[ba] can cause a continuous percept of sound [iba] to occur during times at which silence is heard in response to [ib]-[ga].
    || Mismatch vs resonant fusion
  • image p473fig12.69 Error rate and mean reaction time (RT) data from the lexical decision experiments of (Schvaneveldt, McDonald 1981). ART Matching Rule properties explain these data in (Grossberg, Stone 1986).
    || (left) Error rate vs type of prime [R, N, U], [non,] word. (right) Mean RT (msec) vs type of prime [R, N, U], [non,] word.
  • image p474fig12.70 The kind of model macrocircuit that was used in (Grossberg, Stone 1986) to explain lexical decision task data.
    || inputs-> A1 <-> A2 iconic sensory features <-> A3 item and order in sensory STM <-> A4 list parsing in STM (masking field) <-> A5 semantic network (self-feedback). [A4, A5] <-> V* visual object recognition system. M1-> [outputs, A1]. M1 <-> M2 iconic motor features <-> M3 item and order in motor STM. A2-> M2. A3-> M3.
  • image p476fig12.71 Word frequency data of (Underwood, Freund 1970) that were explained in (Grossberg, Stone 1986).
    || percent errors vs frequency of old words [L-H to H-H, L-L to H-L].
  • /home/bill/web/Neural nets/Grossberg/Grossbergs list of [figure, table]s.HtmWeb.html:705:
  • image p481fig13.01 Macrocircuit of the functional stages and anatomical interpretations of the Cognitive-Emotional-Motor, or CogEM, model.
    || Drive-> hypothalamus value categories <-> amygdala incentive motivational learning-> Orbitofrontal cortex- object-value categories <-> sensory cortex- invariant object categories- conditioned reinforcer learning-> amygdala-> hypothalamus.
  • image p483fig13.02 The object-value categories in the orbitofrontal cortex require converging specific inputs from the sensory cortex and nonspecific incentive motivational inputs from the amygdala in order to fire. When the orbitofrontal cortex fires, it can deliver top-down ART Matching Rule priming signals to the sensory cortical area by which it was activated, thereby helping to choose the active recognition categories there that have the most emotional support, while suppressing others, leading to attentional blocking of irrelevant cues.
    || Cognitive-Emotional-Motor (CogEM) model. Drive-> amygdala incentive motivational learning-> orbitofrontal cortex- need converging cue and incentive inputs to fire <-> sensory cortex- conditioned reinforcer learning-> amygdala. CS-> sensory cortex. Motivated attention closes the cognitive-emotional feedback loop, focuses on relevant cues, and causes blocking of irrelevant cues.
  • image p483fig13.03 The predicted processing stages of CogEM have been supported by anatomical studies of connections between sensory cortices, amygdala, and orbitofrontal cortex.
    || Adapted from (Barbas 1995). sensory cortices = [visual, somatosensory, auditory, gustatory, olfactory]. sensory cortices-> amygdala-> orbital prefrontal cortex. sensory cortices-> orbital prefrontal cortex. [visual cortex, amygdala]-> lateral prefrontal cortex.
  • image p484fig13.04 The top-down feedback from the orbitofrontal cortex closes a feedback loop that supports a cognitive-emotional resonance. If this resonance can be sustained long enough, it enables us to have feelings at the same time that we experience the categories that caused them.
    || Cognitive-Emotional resonance. Basis of "core consciousness" and "the feeling of what happens". (Damasio 1999) derives heuristic version of CogEM model from his clinical data. Drive-> amygdala-> prefrontal cortex-> sensory cortex, resonance around the latter 3. How is this resonance maintained long enough to become conscious?
  • image p484fig13.05 Classical conditioning is perhaps the simplest kind of associative learning.
    || Classical conditioning (nonstationary prediction). Bell (CS)-> (CR), Shock (US)-> Fear (UR), associative learning.
  • image p485fig13.06 (left column) An inverted-U occurs in conditioned reinforcer strength as a function of the ISI between the CS and the US. Why is learning attenuated at 0 ISI? (right column) Some classical conditioning data that illustrate the inverted-U in conditioning as a function of the ISI.
    || InterStimulus Interval (ISI) effect. Data from (Smith etal 1969; Schneiderman, Gormezano 1964).
  • image p485fig13.07 The paradigm of secondary conditioning. See the text for details.
    || Secondary conditioning (Advertising!). [CS1, CS2] become conditioned reinforcers.
  • image p486fig13.08 The blocking paradigm illustrates how cues that do not predict different consequences may fail to be attended.
    || Blocking- minimal adaptive prediction. Phase [I, II] - CS2 is irrelevant.
  • image p486fig13.09 Equally salient cues can be conditioned in parallel to an emotional consequence.
    || Parallel processing of equally salient cues vs overshadowing (Pavlov).
  • image p486fig13.10 Blocking follows if both secondary conditioning and attenuation of conditioning at a zero ISI occur.
    || Blocking = ISI + secondary conditioning.
  • image p487fig13.11 The three main properties of CogEM that help to explain how attentional blocking occurs.
    || CogEM explanation of attentional blocking. Internal drive input <-> Conditioned reinforcer learning (self-recurrent) <-> Competition for STM <- Motor learning. 1. Sensory representations compete for limited capacity STM. 2. Previously reinforced cues amplify their STM via positive feedback. 3. Other cues lose STM via competition.
  • image p488fig13.12 (left column) How incentive motivational feedback amplifies activity of a sensory cortical cell population. (right column) A sensory cortical cell population whose activity is amplified by incentive motivational feedback can suppress the activities of less activated populations via self-normalizing recurrent competitive interactions.
    || Motivational feedback and blocking. (left) sensory input CS, STM activity without motivational feedback, STM activity with motivational feedback. (right) STM suppressed by competition, STM amplified by (+) feedback.
  • image p489fig13.13 (top row) If a positive ISI separates onset of a CS and US, then the CS can sample the consequences of the US during the time interval before it is inhibited by it. (bottom row) A CogEM simulation of the inverted-U in conditioning as a function of the ISI between CS and US.
    || Positive ISI and conditioning.
  • image p490fig13.14 In order for conditioning to work properly, the sensory representation needs to have at least two successive processing stages. See the text for why.
    || Model of Cognitive-Emotional circuit. Drive-> Drive representation-> ??? <-> Sensory STM <-CS
  • image p490fig13.15 The CogEM circuit is an ancient design that is found even in mollusks like Aplysia. See the text for details.
    || Aplysia (Buonomano, Baxter, Byrne, Neural Networks 1990; Grossberg, Behavioral and Brain Sciences 1983). Facilitator neuron ~ drive representation.
  • image p492fig13.16 (left column) In order to satisfy all four postulates, there needs to be UCS-activated arousal of a polyvalent CS-activated sampling neuron. (right column) The arousal needs to be nonspecific in order to activate any of the CSs that could be paired with the UCS.
    || Polyvalent CS sampling and US-activated nonspecific arousal.
  • image p493fig13.17 (top row) Overcoming the ostensible contradiction that seems to occur when attempting to simultaneously realize hypotheses (3) and (4). (bottom row) The problem is overcome by assuming the existence of a US-activated drive representation to which CSs can be associated, and that activates nonspecific incentive motivational feedback to sensory representations.
    || Learning nonspecific arousal and CR read-out. (top) Learning to control nonspecific arousal, Learning to read-out the CR (bottom) Drive representation, Incentive motivation.
  • image p494fig13.18 Realizing the above constraints favors one particular circuit. Circuits (a) and (b) are impossible. Circuit (d) allows previously occurring sensory cues to be stored in STM. Circuit (e) in addition enables a CS to be stored in STM without initiating conditioning in the absence of a US.
    || Learning to control nonspecific arousal and read-out of the CR: two stages of CS. (d) & (e) polyvalent cells.
  • image p494fig13.19 (left column, top row) Secondary conditioning of both arousal and a specific response is now possible. (bottom row) The CogEM circuit may be naturally extended to include multiple drive representations and inputs. (right column, top row) The incentive motivational pathway is also conditionable in order to enable motivational sets to be learned.
    || Secondary conditioning. Homology: conditionable incentive motivation. Multiple drive representations and inputs.
  • image p496fig13.20 (top image) A single avalanche sampling cell can learn an arbitrary space-time pattern by sampling it as a temporally ordered series of spatial patterns using a series of outstars. Once an avalanche's sampling cell starts to fire, there is no way to stop it from performing the entire space-time pattern, no matter how dire the consequences. (bottom image) If nonspecific arousal and a specific cue input are both needed to fire the next cell in an avalanche, then environmental feedback can shut off avalanche performance at any time, and volition can speed up or slow down performance.
    || Space-time pattern learning: avalanche. (top image) CS sampling signal-> serially activated outstars-> US spacetime input pattern. Sample a space-time pattern as a sequence of spatial patterns. (bottom image) Nonspecific arousal as a command cell. Polyvalent cell: nonspecific arousal as a STOP and a GO signal.
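The avalanche idea of Figure 13.20 can be sketched numerically: a chain of outstars, each learning one spatial pattern of a space-time pattern while its sampling signal is on, then replaying the sequence in order. This is a minimal illustration of the principle, not the book's circuit; the learning rate, epoch count, and the `learn_avalanche`/`perform` helpers are hypothetical choices.

```python
import numpy as np

def learn_avalanche(spacetime, lr=0.5, epochs=40):
    # spacetime: list of spatial patterns, one per outstar in the chain.
    # Each outstar's weights track its pattern while its sampling cell fires.
    weights = [np.zeros_like(p, dtype=float) for p in spacetime]
    for _ in range(epochs):
        for w, pattern in zip(weights, spacetime):
            w += lr * (pattern - w)   # outstar learning during sampling
    return weights

def perform(weights):
    # once the first sampling cell fires, the whole chain reads out in order
    return [w.copy() for w in weights]

song = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([0.5, 0.5])]
replayed = perform(learn_avalanche(song))
assert all(np.allclose(r, p, atol=1e-3) for r, p in zip(replayed, song))
```

The unconditional replay in `perform` is exactly the rigidity the caption criticizes: a polyvalent-cell version would additionally gate each step on nonspecific arousal, so feedback could stop the sequence mid-performance.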
  • image p497fig13.21 (left column) An early embodiment of nonspecific arousal was a command cell in such primitive animals as crayfish. (right column) The songbird pattern generator is also an avalanche. This kind of circuit raises the question of how the connections self-organize through developmental learning.
    || Nonspecific arousal as a command cell. Crayfish swimmerets (Stein 1971). Songbird pattern generator (Fee etal 2002)+. Motor-> RA-> HVC(RA).
  • image p498fig13.22 (left column, top row) Adaptive filtering and conditioned arousal are both needed to regulate what cues can learn to activate particular space-time patterns. These developments lead inexorably to basic cognitive abilities, as embodied in the 3D LAMINART models for 3D vision and figure-ground perception (Chapter 11) and the 3D ARTSCAN SEARCH model for invariant object learning, recognition, and 3D search (Chapter 6). (right column, top row) Conditioned arousal enables only emotionally important cues to activate a motivationally relevant space-time pattern. (bottom row) Conditioned arousal and drive representations arise naturally from the unlumping of avalanche circuits to make them selective to motivationally important cues. The MOTIVATOR model is a natural outcome of this unlumping process (this chapter).
    || (top) Adaptive filtering and Conditioned arousal. Towards Cognition: need to filter inputs to the command cell. Towards Emotion: important signals turn arousal ON and OFF. (bottom) Conditioned arousal and Drive representations. Competition between conditioned arousal sources at drive representations, eg amygdala.
  • image p499fig13.23 (left column) Self-organization in avalanches includes adaptive filtering by outstars [?instars?], serial learning of temporal order, and learned read-out of spatial patterns by outstars. (right column) Serial learning of temporal order occurs in recurrent associative networks.
    || (left) Self-organizing avalanches [instars, serial learning, outstars]. (right) Serial list learning.
  • image p500fig13.24 Both primary excitatory and inhibitory conditioning can occur using opponent processes and their antagonistic rebounds.
    || Opponent processing. Cognitive drive associations. Primary associations: excitatory [CS, US, Fear], inhibitory [CS, US, Fear, Relief rebound].
  • image p501fig13.25 When an unbiased transducer is embodied by a finite rate physical process, mass action by a chemical transmitter is the result.
    || Unbiased transducer (Grossberg 1968). S=input, T=output, T = S*B, where B is the gain. Suppose T is due to release of chemical transmitter y at a synapse: release rate T = S*y (mass action); Accumulation y ~= B.
  • image p501fig13.26 A simple differential equation describes the processes of transmitter accumulation and release that do their best, at a finite rate, to carry out unbiased transduction.
    || Transmitter accumulation and release. Transmitter y cannot be restored at an infinite rate: T = S*y, y ~= B. Differential equation: d[dt: y] = A*(B - y) - S*y = accumulate - release. Transmitter y tries to recover to ensure unbiased transduction. What if it falls behind? Evolution has exploited the good properties that happen then.
  • image p502fig13.27 Despite the fact that less transmitter y is available after persistent activation by a larger input signal S, the gated output signal S*y is larger due to the mass action gating of S by y.
    || Minor mathematical miracle. At equilibrium: 0 = d[dt: y] = A*(B - y) - S*y. Transmitter y decreases when input S increases: y = A*B/(A + S). However, output S*y increases with S!: S*y = S*A*B/(A + S) (gate, mass action).
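The "minor mathematical miracle" of Figure 13.27 is easy to confirm numerically from the equilibrium formulas above: the transmitter level y = A*B/(A+S) falls as S grows, yet the gated output S*y = S*A*B/(A+S) rises. A small check, with arbitrary illustrative constants A and B:

```python
# Equilibrium of the habituative transmitter: 0 = A*(B - y) - S*y.
A, B = 1.0, 2.0

def y_eq(S):
    # available transmitter at equilibrium: decreases with input S
    return A * B / (A + S)

def output(S):
    # mass-action gated output T = S*y: increases with S nonetheless
    return S * y_eq(S)

S_small, S_large = 1.0, 4.0
assert y_eq(S_large) < y_eq(S_small)      # less transmitter available
assert output(S_large) > output(S_small)  # but larger gated output
```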
  • image p502fig13.28 Fast increments and decrements in an input S lead to slow habituation of the habituative gate, or medium-term memory, transmitter y. The output T is a product of these fast and slow variables, and consequently exhibits overshoots, habituation, and undershoots in its response.
    || Habituative transmitter gate: Input; Habituative gate d[dt: y] = A*(B - y) - S*y; Output [overshoot, habituation, undershoot]s; Weber Law.
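The overshoot-habituation-undershoot signature of Figure 13.28 follows directly from integrating the gate equation with a step input. A minimal Euler sketch (the constants, step times, and `simulate` helper are illustrative assumptions):

```python
# Euler simulation of the habituative gate: dy/dt = A*(B - y) - S*y,
# with gated output T = S*y. A step increase in S produces a fast
# overshoot in T followed by slow habituation; a step decrease
# produces an undershoot below the pre-step baseline.
A, B, dt = 0.1, 1.0, 0.01

def simulate(S_of_t, steps):
    y, T = A * B / (A + S_of_t(0)), []   # start at equilibrium
    for k in range(steps):
        S = S_of_t(k * dt)
        y += dt * (A * (B - y) - S * y)  # slow transmitter dynamics
        T.append(S * y)                  # fast input gated by slow y
    return T

# input: baseline S=0.5, step up to S=2 at t=10, back down at t=30
S = lambda t: 2.0 if 10 <= t < 30 else 0.5
T = simulate(S, 4000)
peak = max(T[1000:3000])       # overshoot just after the onset
habituated = T[2999]           # near-equilibrium before the offset
assert peak > habituated       # habituation follows the overshoot
assert min(T[3000:]) < T[999]  # undershoot below pre-step baseline
```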
  • image p503fig13.29 The ON response to a phasic ON input has Weber Law properties due to the divisive terms in its equilibrium response, which are due to the habituative transmitter.
    || ON-response to phasic ON-input. S1 = f(I+J): y1 = A*B/(A+S1), T1 = S1*y1 = A*B*S1/(A+S1); S2 = f(I): y2 = A*B/(A+S2), T2 = S2*y2 = A*B*S2/(A+S2). ON = T1 - T2 = A^2*B*(f(I+J)-f(I)) / (A+f(I)) / (A+f(I+J)). Note Weber Law. When f has a threshold, small I requires larger J to fire due to numerator, but makes suprathreshold ON bigger due to denominator. When I is large, quadratic in denominator and upper bound of f make ON small.
  • image p504fig13.30 OFF rebound occurs when the ON-input shuts off due to the imbalance that is caused by the ON input in the habituation of the transmitters in the ON and OFF channels. The relative sizes of ON responses and OFF rebounds are determined by the arousal level I.
    || OFF-rebound due to phasic input offset. Shut off J (not I!). Then: S1 = f(I), S2 = f(I); y1 ~= A*B/(A+f(I+J)) < y2 ~= A*B/(A+f(I)); y1 and y2 are SLOW; T1 = S1*y1, T2 = S2*y2, T1 < T2. OFF = T2 - T1 = A*B*f(I)*(f(I+J) - f(I)) / (A+f(I)) / (A + f(I+J)). Note Weber Law due to remembered previous input. Arousal sets sensitivity of rebound: OFF/ON = f(I)/A. Why is the rebound transient? Note equal f(I) inputs.
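The ON-response and OFF-rebound formulas of Figures 13.29-13.30 share the same denominator, so the arousal-sensitivity relation OFF/ON = f(I)/A can be checked directly. A small sketch with a linear signal function (the constants A, B, I, J are illustrative):

```python
# ON = A^2*B*(f(I+J)-f(I)) / ((A+f(I))*(A+f(I+J)))
# OFF = A*B*f(I)*(f(I+J)-f(I)) / ((A+f(I))*(A+f(I+J)))
# so OFF/ON = f(I)/A: arousal I sets the sensitivity of the rebound.
A, B = 1.0, 1.0
f = lambda s: s          # linear signal function, for simplicity

def on_off(I, J):
    denom = (A + f(I)) * (A + f(I + J))
    ON = A * A * B * (f(I + J) - f(I)) / denom
    OFF = A * B * f(I) * (f(I + J) - f(I)) / denom
    return ON, OFF

I, J = 0.5, 1.0
ON, OFF = on_off(I, J)
assert ON > 0 and OFF > 0
assert abs(OFF / ON - f(I) / A) < 1e-12   # rebound scales with arousal
```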
  • image p504fig13.31 Behavioral contrast can occur during reinforcement learning due to decreases in either positive or negative reinforcers. See Figure 13.32 for illustrative operant conditioning data.
    || Behavioral contrast: rebounds! Shock level vs trials. 1. A sudden decrease in frequency or amount of food can act as a negative reinforcer: Frustration. 2. A sudden decrease in frequency or amount of shock can act as a positive reinforcer: Relief.
  • image p505fig13.32 Response suppression and the subsequent antagonist rebounds are both calibrated by the inducing shock levels.
    || Behavioral contrast (Reynolds 1968). Responses per minute (VI schedule) vs Trial shock level.
  • image p505fig13.33 An unexpected event can disconfirm ongoing processing by triggering a burst of nonspecific arousal that causes antagonistic rebounds in currently active gated dipoles, whether cognitive or affective.
    || Novelty reset: rebound to arousal onset. 1. Equilibrate to I and J: S1 = f(I+J); y1 = A*B/(A+S1); S2 = f(I); y2 = A*B/(A+S2). 2. Keep phasic input J fixed; increase arousal I to I* = I + ∆I: (a) OFF reaction if T1 < T2; OFF = T2 - T1 = f(I*)*y2 - f(I*+J)*y1 = { A*B*(f(I*) - f(I*+J)) + B*(f(I*)*f(I+J) - f(I)*f(I*+J)) } / (A+f(I)) / (A + f(I+J)). 3. How to interpret this complicated equation?
  • image p506fig13.34 With a linear signal function, one can prove that the rebound increases with both the previous phasic input intensity J and the unexpectedness of the disconfirming event that caused the burst of nonspecific arousal.
    || Novelty reset: rebound to arousal onset.
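The claim of Figure 13.34 can be sketched numerically. With a linear signal f(s) = s and the standard gated-dipole equilibrium transmitter levels (an assumption of this sketch, consistent with the formulas of Figures 13.29-13.30), the novelty-reset rebound reduces to OFF = B*J*(dI - A)/((A+I)*(A+I+J)): it is positive only when the arousal burst dI exceeds the threshold A, and then grows with both the prior phasic input J and the size dI of the unexpected event.

```python
# Novelty-reset rebound with linear f:
#   OFF = f(I+dI)*y2 - f(I+dI+J)*y1,
#   y1 = A*B/(A+(I+J))  (habituated ON-channel transmitter),
#   y2 = A*B/(A+I)      (fresher OFF-channel transmitter).
A, B = 1.0, 1.0

def rebound(I, J, dI):
    y1 = A * B / (A + (I + J))
    y2 = A * B / (A + I)
    return (I + dI) * y2 - (I + dI + J) * y1

base = rebound(1.0, 1.0, 2.0)
# closed form B*J*(dI - A)/((A+I)*(A+I+J))
assert abs(base - B * 1.0 * (2.0 - A) / ((A + 1.0) * (A + 2.0))) < 1e-12
assert rebound(1.0, 2.0, 2.0) > base   # larger prior phasic input J
assert rebound(1.0, 1.0, 3.0) > base   # more unexpected arousal burst
assert rebound(1.0, 1.0, 0.5) < 0      # no rebound when dI < A
```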
  • image p506fig13.35 A shock, or other reinforcing event, can have multiple cognitive and emotional effects on different brain processes.
    || Multiple functional roles of shock. 1. Reinforcement sign reversal: An isolated shock is a negative reinforcer; In certain contexts, a shock can be a positive reinforcer. 2. STM-LTM interaction: Prior shock levels need to be remembered (LTM) and used to calibrate the effect of the present shock (STM). 3. Discriminative and situational cues: The present shock level is unexpected (novel) with respect to the shock levels that have previously been contingent upon experimental cues: shock as a [1.reinforcer, 2. sensory cue, 3. expectancy].
  • image p509fig13.36 How can life-long learning occur without passive forgetting or associative saturation?
    || Associative learning. 1. Forgetting (eg remember childhood experiences): forgetting [is NOT passive, is Selective]; 2. Selective: larger memory capacity; 3. Problem: why doesn't memory saturate?
  • image p510fig13.37 A disconfirmed expectation can cause an antagonistic rebound that inhibits prior incentive motivational feedback, but by itself is insufficient to prevent associative saturation.
    || Learn on-response. 1. CS-> ON, disconfirmed expectation-> antagonistic rebound, OFF-channel is conditioned 2. CS-> [ON, OFF]-> net, zero net output. What about associative saturation?
  • image p510fig13.38 Dissociation of the read-out of previously learned adaptive weights, or LTM traces, and of the read-in of new weight values enables back-propagating dendritic action potentials to teach the new adaptive weight values.
    || Dissociation of LTM read-out and read-in. Backpropagating dendritic action potentials as teaching signals. 1. LTM Dendritic spines (Rall 1960's)-> Teaching signal - retrograde action potential-> opponent competition. 2. Early predictions: Ca++ currents in learning (Grossberg 1968); role of dendritic spines in learning (Grossberg 1975). Cf. experiments of (Hausser, Markram, Poo, Sakmann, Spruston, etc).
  • image p510fig13.39 Shunting competition and informational noise suppression in affective gated dipoles, plus back-propagating action potentials for teaching signals, enable the net normalized adaptive weights to be learned. They never saturate!
    || Learn net dipole output pattern. Opponent "decision" controls learning. Cf. competitive learning. Learning signal, opponent extinction.
  • image p512fig13.40 A conditioning paradigm that illustrates what it means for conditioned excitators to extinguish.
    || Conditioned excitor extinguishes. 1. Learning phase: CS1 bell-> US, CS1-> Fear(-). 2. Forgetting phase: CS1 bell-> Forgetting. 3. The expectation of shock is disconfirmed.
  • image p513fig13.41 A conditioning paradigm that illustrates what it means for conditioned inhibitors not to extinguish.
    || Conditioned inhibitor does not extinguish. 1. Learning phase: CS1 light-> shock, CS1-> Fear(-); Forgetting phase: n/a. 2. Learning phase: CS1 light + CS2 bell-> no shock; CS2-> relief. Forgetting phase: CS2 bell-> no forgetting. SAME CS could be used! SAME "teacher" in forgetting phase! Something else must be going on, or else causality would be violated!
  • image p513fig13.42 A conditioned excitor extinguishes because the expectation that was learned of a shock during the learning phase is disconfirmed during the forgetting phase.
    || Conditioned excitor extinguishes. Learning phase: CS1 bell-> US; CS1-> Fear(-); CS1-> shock; CS1 is conditioned to an expectation of shock. Forgetting phase: CS1 bell-> forgetting. The expectation of shock is disconfirmed.
  • image p513fig13.43 A conditioned inhibitor does not extinguish because the expectation that was learned of no shock during the learning phase is not disconfirmed during the forgetting phase.
    || Conditioned inhibitor does not extinguish. 1. Learning phase: CS1 light-> Shock; CS1-> Fear(-); Forgetting phase: n/a. 2. Learning phase: CS1 light + CS2 bell-> NO shock; CS2-> relief(+); CS2-> no shock. Forgetting phase: CS2 bell-> no forgetting. The expectation that "no shock" follows CS2 is NOT disconfirmed!
  • image p514fig13.44 Analog of the CogEM model in Figure 6.1 of (Damasio 1999).
    || (a) map of object X-> map of proto-self at inaugural instant-> [, map of proto-self modified]-> assembly of second-order map. (b) map of object X enhanced-> second-order map imaged.
  • image p519fig14.01 Coronal sections of prefrontal cortex. Note particularly the areas 11, 13, 14, and 12o.
    ||
  • image p520fig14.02 Macrocircuit of the main brain regions, and connections between them, that are modelled in the unified predictive Adaptive Resonance Theory (pART) of cognitive-emotional and working memory dynamics. Abbreviations in red denote brain regions used in cognitive-emotional dynamics. Those in green denote brain regions used in working memory dynamics. Black abbreviations denote brain regions that carry out visual perception, learning and recognition of visual object categories, and motion perception, spatial representation and target tracking. Arrows denote non-excitatory synapses. Hemidiscs denote adaptive excitatory synapses. Many adaptive synapses are bidirectional, thereby supporting synchronous resonant dynamics among multiple cortical regions. The output signals from the basal ganglia that regulate reinforcement learning and gating of multiple cortical areas are not shown. Also not shown are output signals from cortical areas to motor responses. V1: striate, or primary, visual cortex; V2 and V4: areas of prestriate visual cortex; MT: Middle Temporal cortex; MST: Medial Superior Temporal area; ITp: posterior InferoTemporal cortex; ITa: anterior InferoTemporal cortex; PPC: Posterior Parietal Cortex; LIP: Lateral IntraParietal area; VPA: Ventral PreArcuate gyrus; FEF: Frontal Eye Fields; PHC: ParaHippocampal Cortex; DLPFC: DorsoLateral PreFrontal Cortex; HIPPO: hippocampus; LH: Lateral Hypothalamus; BG: Basal Ganglia; AMYG: AMYGdala; OFC: OrbitoFrontal Cortex; PRC: PeriRhinal Cortex; VPS: Ventral bank of the Principal Sulcus; VLPFC: VentroLateral PreFrontal Cortex. See the text for further details.
    ||
  • image p523fig14.03 (a) The MOTIVATOR neural model generalizes CogEM by also including the basal ganglia. It can hereby explain and simulate complementary functions of the amygdala and basal ganglia (SNc) during conditioning and learned performance. The basal ganglia generate Now Print signals in response to unexpected rewards. These signals modulate learning of new associations in many brain regions. The amygdala supports motivated attention to trigger actions that are expected to occur in response to conditioned or unconditioned stimuli. Object Categories represent visual or gustatory inputs in anterior inferotemporal (ITA) and rhinal (RHIN) cortices, respectively. Value Categories represent the value of anticipated outcomes on the basis of hunger and satiety inputs, in amygdala (AMYG) and lateral hypothalamus (LH). Object-Value Categories resolve the value of competing perceptual stimuli in medial (MORB) and lateral (ORB) orbitofrontal cortex. The Reward Expectation Filter detects the omission or delivery of rewards using a circuit that spans ventral striatum (VS), ventral pallidum (VP), striosomal delay (SD) cells in the ventral striatum, the pedunculopontine nucleus (PPTN) and midbrain dopaminergic neurons of the substantia nigra pars compacta/ventral tegmental area (SNc/VTA). The circuit that processes CS-related visual information (ITA, AMYG, ORB) operates in parallel with a circuit that processes US-related visual and gustatory information (RHIN, AMYG, MORB). (b) Reciprocal adaptive connections between hypothalamus and amygdala enable amygdala cells to become learned value categories. The bottom region represents hypothalamic cells, which receive converging taste and metabolite inputs whereby they become taste-drive cells. Bottom-up signals from activity patterns across these cells activate competing value categories, or US Value Representations, in the amygdala.
A winning value category learns to respond selectively to specific combinations of taste-drive activity patterns and sends adaptive top-down priming signals back to the taste-drive cells that activated it. CS-activated conditioned reinforcer signals are also associatively linked to value categories. Adaptive connections end in (approximately) hemidiscs. See the text for details.
    ||
  • image p524fig14.04 (a) Model basal ganglia circuit for the control of dopaminergic Now Print signals from the substantia nigra pars compacta, or SNc, in response to unexpected rewards. Cortical inputs (Ii), activated by conditioned stimuli, learn to excite the SNc via a multi-stage pathway from the ventral striatum (S) to the ventral pallidum and then on to the PPTN (P) and the SNc (D). The inputs Ii excite the ventral striatum via adaptive weights W_IS, and the PPTN excites the SNc with strength W_PD. The striosomes, which contain an adaptive spectral timing mechanism [xij, Gij, Yij, Zij], learn to generate adaptively timed signals that inhibit reward-related activation of the SNc. Primary reward signals (I_R) from the lateral hypothalamus both excite the PPTN directly (with strength W_RP) and act as training signals to the ventral striatum S (with strength W_RS) that train the weights W_IS. Arrowheads denote excitatory pathways, circles denote inhibitory pathways, and hemidiscs denote synapses at which learning occurs. Thick pathways denote dopaminergic signals.
    ||
  • image p530fig14.05 Displays used by (Buschman, Miller 2007) in their visual search experiments. See the text for details.
    || Fixation 500 ms-> Sample 1000 ms-> Delay 500 ms-> Visual [pop-out, search]- reaction time.
  • image p531fig14.06 Classification of scenic properties as texture categories by the ARTSCENE model. See the text for details.
    || Image-> Feature extraction (texture principal component rankings)-> Learning feature-to-scene mapping (texture category principal component rankings)<- scene class. Large-to-small attentional shrouds as principal component higher.
  • image p531fig14.07 Voting in the ARTSCENE model achieves even better prediction of scene type. See the text for details.
    || Image-> Feature extraction (texture principal component rankings)-> Learning feature-to-scene mapping (texture category principal component rankings)-> evidence accumulation (sum)-> scene class winner-take-all inference. Large-to-small attentional shrouds as principal component higher.
  • image p532fig14.08 Macrocircuit of the ARTSCENE Search neural model for learning to search for desired objects by using the sequences of already experienced objects and their locations to predict what and where the desired object is. V1 = First visual area or primary visual cortex; V2 = Second visual area; V4 = Fourth visual area; PPC = Posterior Parietal Cortex; ITp = posterior InferoTemporal cortex; ITa = anterior InferoTemporal cortex; MTL = Medial Temporal Lobe; PHC = ParaHippoCampal cortex; PRC = PeriRhinal Cortex; PFC = PreFrontal Cortex; DLPFC = DorsoLateral PreFrontal Cortex; VPFC = Ventral PFC; SC = Superior Colliculus.
    ||
  • image p533fig14.09 Search data and ARTSCENE Search simulations of them in each pair of images from (A) to (F). See the text for details.
    || 6*[data vs simulation], [Response time (ms) versus epoch]. /home/bill/web/Neural nets/Grossberg/Grossbergs list of [figure, table]s.HtmWeb.html:773:
  • image p540fig15.01 The timing of CS and US inputs in the delay and trace conditioning paradigms.
    || Delay and trace conditioning paradigms. [CS, US] vs [Delay, Trace]. To perform an adaptively timed CR, trace conditioning requires a CS memory trace over the Inter-Stimulus Interval (ISI).
  • image p541fig15.02 The neurotrophic Spectrally Timed Adaptive Resonance Theory, or nSTART, model of (Franklin, Grossberg 2017) includes hippocampus to enable adaptively timed learning that can bridge a trace conditioning gap, or other temporal gap between CS and US.
    || Hippocampus can sustain a Cognitive-Emotional resonance that can support "the feeling of what happens" and knowing what event caused that feeling. [CS, US] -> Sensory Cortex (SC) <- motivational attention <-> category learning -> Prefrontal Cortex (PFC). SC conditioned reinforcement learning-> Amygdala (cannot bridge the temporal gap) incentive motivational learning-> PFC. SC adaptively timed learning and BDNF-> Hippocampus (can bridge the temporal gap) BDNF-> PFC. PFC adaptively timed motor learning-> cerebellum.
  • image p541fig15.03 Stages in the processing of adaptively timed conditioning, leading to timed responses in (d) that exhibit both individual Weber laws and an inverted U in conditioning as a function of ISI. See the text for details.
    || Curves of [Response vs ISI].
  • image p542fig15.04 Conditioning data from (Smith 1968; Millenson etal 1977). The former shows the kind of Weber Law and inverted U that were simulated in Figure 15.3. The latter shows that, if there are two ISIs during an experiment, then the animals learn to adaptively time their responses with two properly scaled Weber laws.
    || (left) One ISI (Smith 1968) [mean membrane extension (mm) versus time after CS onset (msec)]. (right) Two ISIs (Millenson etal 1977) [200, 100] msec CS test trials, [mean momentary CS amplitude (mm) vs time after CS onset (msec)]. (bottom) Conditioned eye blinks, made with nictitating membrane and/or eyelid, are adaptively timed: peak closure occurs at expected time(s) of arrival of the US following the CS and obeys a Weber Law.
  • image p543fig15.05 Simulation of conditioning with two ISIs that generate their own Weber Laws, as in the data shown in Figure 15.4.
    || Learning with two ISIs: simulation: R = sum[all: f(xi)*yi*zi] vs msec. Each peak obeys Weber Law! strong evidence for spectral learning.
  • image p543fig15.06 The circuit between dentate granule cells and CA1 hippocampal pyramid cells seems to compute spectrally timed responses. See the text for details.
    || Hippocampal interpretation. 1. Dentate granule cells (Berger, Berry, Thompson 1986): "increasing firing...in the CS period...the latency...was constant". 2. Pyramidal cells: "Temporal model" Dentate granule cells-> CA3 pyramids. 3. Convergence (Squire etal 1989): 1e6 granule cells, 1.6e5 CA3 pyramids. 80-to-1 (ri).
  • image p544fig15.07 In response to a step CS and sustained storage by I_CS of that input, a spectrum of responses xi at different rates ri develops through time.
    || Spectral timing: activation. CS-> I_CS-> All xi. STM sensory representation. Spectral activation d[dt: xi] = ri*[-A*xi + (1 - B*xi)*I_CS].
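The spectral activation law in this caption can be sketched numerically. This is a minimal illustration; the parameter values (A, B, the rate spectrum ri, and the stored input I_CS) are assumptions for demonstration, not values from the book:

```python
import numpy as np

# d/dt x_i = r_i * (-A*x_i + (1 - B*x_i) * I_CS)
# Parameters below are assumed for illustration only.
A, B = 1.0, 1.0
rates = np.array([0.2, 0.5, 1.0, 2.0])  # spectrum of rates r_i
I_CS = 1.0                              # sustained stored CS input
dt, T = 0.01, 20.0

x = np.zeros_like(rates)
for _ in range(int(T / dt)):
    x += dt * rates * (-A * x + (1.0 - B * x) * I_CS)

# All x_i approach the same equilibrium I_CS / (A + B*I_CS) = 0.5;
# the rates r_i only set how fast each cell gets there.
```

The shunting term (1 - B*xi) bounds each activity, so the spectrum differs only in speed, which is what the later gating stages convert into a spectrum of peak times.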
  • image p544fig15.08 The spectral activities xi generate sigmoid signals f(xi) before the signals are, in turn, gated by habituative transmitters yi.
    || Habituative transmitter gate. transmitter.
  • image p544fig15.09 As always, the habituative transmitter gate yi increases in response to accumulation and decreases due to gated inactivation, leading to the kinds of transmitter and output responses in the right hand column.
    || Habituative transmitter gate (Grossberg 1968). 1. d[dt: yi] = C*(1-yi) - D*f(xi)*yi, C-term - accumulation, D-term gated inactivation. 2. Sigmoid signal f(xi) = xi^n / (B^n + xi^n). 3. Gated output signal f(xi)*yi.
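The gate equation and sigmoid above can be sketched directly; all numerical values here are illustrative assumptions:

```python
import numpy as np

# d/dt y_i = C*(1 - y_i) - D*f(x_i)*y_i, with f(x) = x^n / (B^n + x^n)
# Illustrative parameters, not the book's:
C, D, B, n = 0.1, 1.0, 0.5, 2

def f(x):
    return x**n / (B**n + x**n)

x = 1.0    # a sustained spectral activity x_i
y = 1.0    # transmitter starts fully accumulated
dt = 0.01
for _ in range(int(100.0 / dt)):
    y += dt * (C * (1.0 - y) - D * f(x) * y)

# y habituates from 1 toward the equilibrium C / (C + D*f(x)).
y_eq = C / (C + D * f(x))
```

Because the inactivation term is gated by f(xi), transmitter depletes only while its cell is active, which is what turns each cell's sustained activity into a transient sampling interval.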
  • image p545fig15.10 When the activity spectrum xi generates a spectrum of sigmoidal signals f(xi), the corresponding transmitters habituate at different rates. The output signals f(xi)*yi therefore generate a series of unimodal activity profiles that peak at different times, as in Figure 15.3a.
    || A timed spectrum of sampling intervals. [f(xi) activation, yi habituation, f(xi)*yi gated sampling] spectra. gated = sampling intervals.
  • image p545fig15.11 The adaptive weight, or LTM trace, zi learns from the US input I_US at times when the sampling signal f(xi)*yi is on. It then gates the habituative sampling signal f(xi)*yi to generate a doubly gated response f(xi)*yi*zi.
    || Associative learning, gated steepest descent learning (Grossberg 1969). d[dt: zi] = E*f(xi)*yi*[-zi + I_US], E-term read-out of CS gated signal, []-term read-out of US. Output from each population: f(xi)*yi*zi doubly gated signal.
  • image p546fig15.12 The adaptive weights zi in the spectrum whose sampling signals are large when the US occurs learn fastest, as illustrated by the green region in this simulation of (Grossberg, Schmajuk 1989).
    || Computer simulation of spectral learning. (left) fast (right) slow. Constant ISI: 6 cells fast to slow, 4 learning trials, 1 test trial.
  • image p546fig15.13 The total learned response is a sum R of all the doubly gated signals in the spectrum.
    || Adaptive timing is a population property. Total output signal: R = sum[i: f(xi)*yi*zi]. Adaptive timing is a collective property of the circuit. "Random" spectrum of rates achieves good collective timing.
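The stages of Figures 15.7-15.13 compose into one short simulation: a spectrum of rates, a habituative gate per cell, gated steepest-descent learning that leaves zi roughly proportional to the sampling signal at the US time, and the summed population output R. All parameter values are illustrative assumptions; the qualitative point shown is only that training with a later ISI weights slower cells more, so R peaks later:

```python
import numpy as np

# One-trial sketch of the spectral timing circuit; parameters assumed.
A, Bx = 1.0, 1.0            # activation law terms
C, D = 0.0001, 0.008        # gate accumulation / gated inactivation
Bs, n = 0.25, 2             # sigmoid f(x) = x^n / (Bs^n + x^n)
rates = np.geomspace(0.0006, 0.05, 20)   # spectrum of rates r_i
dt, steps = 1.0, 1000

def f(x):
    return x**n / (Bs**n + x**n)

x = np.zeros_like(rates)
y = np.ones_like(rates)
g = np.zeros((steps, rates.size))        # gated sampling signals f(x_i)*y_i
for t in range(steps):
    g[t] = f(x) * y
    x += dt * rates * (-A * x + (1.0 - Bx * x))   # stored CS input I_CS = 1
    y += dt * (C * (1.0 - y) - D * f(x) * y)

# Gated steepest descent drives each z_i toward the US in proportion to how
# active its sampling signal is when the US arrives, so after learning
# z_i is roughly proportional to g_i(ISI). Training with a later ISI thus
# weights slower cells more, and R(t) = sum_i g_i(t)*z_i peaks later.
peaks = [int(np.argmax(g @ g[isi])) for isi in (200, 400)]
```

This is a deliberately reduced sketch (one trial, z set to its steepest-descent target) rather than the book's full multi-trial learning simulation.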
  • image p547fig15.14 An individual's survival depends upon being able to process UNexpected non-occurrences, or disconfirmations, of goals differently from EXPECTED non-occurrences, or disconfirmations. See the text for details.
    || Unexpected non-occurrences of goal: a predictive failure, eg reward that does not occur at the expected time. Leads to Orienting Reactions: Cognitive- STM reset, attention shift, forgetting; Emotional- Frustration; Motor- Exploratory behaviour;. What about an Expected non-occurrence? predictive signal, all other events, expected goal.
  • image p547fig15.15 Expected non-occurrences do not prevent the processing of sensory events and their expectations. Rather, they prevent mismatches of those expectations from triggering orienting reactions.
    || Expected non-occurrence of goal. Some rewards are reliable but delayed in time. Does not lead to orienting reactions: How? Both expected and unexpected non-occurrences are due to mismatch of a sensory event with learned expectations. Expected non-occurrences do not inhibit sensory matching: eg a pigeon can see an earlier-than-usual food pellet. Hypothesis: Expected non-occurrences inhibit the process whereby sensory mismatch activates orienting reactions. Mismatch not-> orient.
  • image p548fig15.16 Homologous recognition learning and reinforcement learning macrocircuits enable adaptively timed conditioning in the reinforcement learning circuit to increase inhibition of the orienting system at times when a mismatch in the recognition system would have reduced inhibition of it.
    || Homolog between ART and CogEM model, complementary systems. [Recognition, Reinforcement] learning vs [Attentional, Orienting] system. Reinforcement: timing, drive representation.
  • image p548fig15.17 The timing paradox asks how inhibition of an orienting response (-) can be spread throughout the ISI, yet accurately timed responding can be excited (+) at the end of the ISI.
    || Timing paradox. [CS light, US shock] vs t. ISI = InterStimulus Interval = expected delay of reinforcer. Want timing to be accurate. Want to inhibit exploratory behaviour throughout ISI.
  • image p549fig15.18 The Weber Law solves the timing paradox by creating an adaptively timed response throughout the ISI that peaks at the ISI. Within the reinforcement learning circuit, this response can maintain inhibition of the orienting system A at the same time as it generates adaptively timed incentive motivation to the orbitofrontal cortex.
    || Weber Law: reconciling accurate and distributed timing. Resolution: Output can inhibit orienting, peak response probability. What about different ISIs? Standard deviation = peak time. Weber law rule.
  • image p549fig15.19 How the adaptively timed hippocampal spectrum T inhibits (red arrow) the orienting system A as motivated attention in orbitofrontal cortex Si(2) peaks at the ISI.
    || Conditioning, Attention, and Timing circuit. Hippocampus spectrum-> Amygdala orienting system-> neocortex motivational attention. Adaptive timing inhibits orienting system and maintains adaptively timed Motivated Attention on the CS.
  • image p550fig15.20 Adaptively timed conditioning of Long Term Depression, or LTD, occurs in the cerebellum at synapses between parallel fibres and Purkinje cells, thereby reducing inhibition of subcortical nucleus cells and enabling them to express their learned movement gains within the learned time interval. Also see Figure 15.21.
    || [CS-Activated input pathways parallel fibres, US-Activated climbing fibres]-> [Subcortical nucleus (gain control), Cerebellar cortex- Purkinje cells (timing)].
  • image p551fig15.21 The most important cell types and circuitry of the cerebellum: Purkinje cells (PC) receive excitatory inputs from the climbing fibres (CF) that originate in the inferior olive (IO) and from parallel fibres (PF), which are the axons of granule cells (GC). GCs, in turn, receive inputs from the mossy fibres (MF) coming from the precerebellar nuclei (PCN). The PF also inhibit PC via basket cells (BC), thereby helping to select the most highly activated PC. The PC generate inhibitory outputs from the cerebellar cortex to the deep cerebellar nuclei (DCN), as in Figure 15.20. Excitatory signals are denoted by (+) and inhibitory signals by (-). Other notations: GL- granular layer; GoC- Golgi cells; ML- molecular layer; PCL- Purkinje cell layer; SC- stellate cell; WM- white matter.
    ||
  • image p551fig15.22 Responses of a retinal cone in the turtle retina to brief flashes of light of increasing intensity.
    || response vs msec.
  • image p552fig15.23 Cerebellar biochemistry that supports the hypothesis of how mGluR supports adaptively timed conditioning at cerebellar Purkinje cells. AMPA, amino-3-hydroxy-5-methyl-4-isoxazole propionic acid-sensitive glutamate receptor; cGMP, cyclic guanosine monophosphate; DAG, diacylglycerol; glu, glutamate; GC, guanylyl cyclase; gK, Ca2+-dependent K+ channel protein; GTP, guanosine triphosphate; IP3, inositol-1,4,5-triphosphate; NO, nitric oxide; NOS, nitric oxide synthase; P, phosphate; PLC, phospholipase C; PKC, protein kinase C; PKG, cGMP-dependent protein kinase; PP-I, protein phosphatase-I.
    || climbing fibre induced depolarization, parallel fibre induced mGluR1 activation. PDE, GTP, 5'GMP, G-substrate, calcineurin, AMPA...
  • image p556fig15.24 (a) Data showing normally timed responding (solid curve) and short latency responses after lesioning cerebellar cortex (dashed curve). (b) computer simulation of short latency response after ablation of model cerebellar cortex.
    ||
  • image p557fig15.25 Computer simulations of (a) adaptively timed long term depression at Purkinje cells, and (b) adaptively timed activation of cerebellar nuclear cells.
    || response vs time (msec)
  • image p557fig15.26 Brain regions and processes that contribute to autistic behavioral symptoms when they become imbalanced in prescribed ways.
    || Basal Ganglia prolonged gate opening <-> { Amygdala emotionally depressed-> [hippocampus- hyperspecific learning; Cerebellum- adaptive timing fails; hypofrontal blocking fails, no Theory of Mind]-> Neocortex; Neocortex- rewards not received-> Amygdala}.
  • image p559fig15.27 Brain regions and processes that contribute to the release of dopaminergic Now Print signals by the substantia nigra pars compacta, or SNc, in response to unexpected reinforcing events. See the text for details.
    || Model of spectrally timed SNc learning (Brown, Bullock, Grossberg 1999). Delayed inhibitory expectations of reward. Dopamine cells signal an error in reward prediction timing or magnitude. Immediate excitatory predictions of reward. Lateral hypothalamus (Primary Reward Input)-> [(+)ventral striatum <-> ventral pallidum (+)-> PPTN(+)-> SNc]. SNc-> [dopamine signal -> ventral striatum, Striosomal cells]. Conditioned Stimuli (CS)(+)-> [ventral striatum, striosomal cells]. Striosomal cells(-)-> SNc.
  • image p559fig15.28 Neurophysiological data (left column) and model simulations (right column) of SNc responses. See the text for details.
    || membrane potential vs time
  • image p560fig15.29 Excitatory pathways that support activation of the SNc by a US and the conditioning of a CS to the US.
    || Excitatory pathway. Primary reward (apple juice) briefly excites lateral hypothalamus. Hypothalamic-PPTN excitation causes SNc dopamine burst. Hypothalamic activity excites ventral striatum for training. Active CS working memory signals learn to excite ventral striatum. Lateral hypothalamus (Primary Reward Input)-> [(+)ventral striatum <-> ventral pallidum(+)-> PPTN(+)-> SNc]. SNc-> dopamine signal-> ventral striatum. Conditioned Stimuli working memory trace (CS)(+)-> ventral striatum.
  • image p560fig15.30 The inhibitory pathway from striosomal cells to the SNc is able to inhibit the SNc when a reward occurs with expected timing and magnitude.
    || Inhibitory pathway. Learning: CS-striosomal LTP occurs due to a three-way coincidence [An active CS working memory input, a Ca2+ spike, a dopamine burst]; Signaling: The delayed Ca2+ spike facilitates striosomal-SNc inhibition;. Striosomal cells learn to predict both timing and magnitude of reward signal to cancel it: reward expectation;. Conditioned stimuli (CS) LTP-> Striosomal cells <- dopamine | (-)-> SNc->.
  • image p561fig15.31 The CS activates a population of striosomal cells that respond with different delays in order to enable adaptively timed inhibition of the SNc.
    || Expectation timing (Fiala, Grossberg, Bullock 1996; Grossberg, Merrill 1992, 1996; Grossberg, Schmajuk 1989). How do cells bridge hundreds of milliseconds? Timing spectrum (msec). 1. CS activates a population of cells with delayed transient signals: mGluR. 2. Each has a different delay, so that the range of delays covers the entire interval. 3. Delayed transients gate both learning and read-out of expectations.
  • image p561fig15.32 The SNc can generate both dopamine bursts and dips in response to rewards whose amplitude is unexpectedly large or small.
    || Inhibitory pathway: expectation magnitude. 1. If reward is greater than expected, a dopamine burst causes striosomal expectation to increase. 2. If reward is less than expected, a dopamine dip causes striosomal expectation to decrease. 3. This is a negative feedback control system for learning. Conditioned stimuli (CS)-> Striosomal cells <- dopamine | (-)-> SNc->.
  • image p563fig15.33 The basal ganglia gate neural processing in many parts of the brain. The feedback loop through the lateral orbitofrontal cortex (blue arrow, lateral orbitofrontal) is the one that MOTIVATOR models.
    || MOTIVATOR models one of several thalamocortical loops through basal ganglia (Adapted from Fundamental Neuroscience. 2002 Copyright Elsevier). [cortex-> striatum-> pallidum S. nigra-> thalamus] vs [motor, oculomotor, dorsolateral prefrontal, lateral orbitofrontal, anterior cingulate]. thalamus-> [striatum, cortex].
  • image p563fig15.34 The colored regions are distinct parts of the basal ganglia in the loops depicted in Figure 15.33.
    || Distinct basal ganglia zones for each loop (Adapted from Fundamental Neuroscience. 2002 Copyright Elsevier).
  • image p564fig15.35 (a) A pair of recurrent shunting on-center off-surround networks for control of the fore limbs and hind limbs. (b) Varying the GO signal to these networks can trigger changes in movement gaits. See the text for details.
    ||
  • image p565fig15.36 (a) The FOVEATE model circuit for the control of saccadic eye movements within the peri-pontine reticular formation. (b) A simulated saccade staircase. See the text for details.
    || [left, right] eye FOVEATE model. [vertical vs horizontal] position (deg).
  • image p566fig15.37 Steps in the FOVEATE model's generation of a saccade. See the text for details.
    || input(+)-> LLBN-> [(-)OPN, (+)EBN], EBN(-)-> LLBN. (A) rest OPN active. (B) charge [input, LLBN, OPN] active. (C) burst [input, LLBN, EBN] active. (D) shutdown [OPN, EBN] active.
  • image p567fig15.38 (a) The Gated Pacemaker model for the control of circadian rhythms is a recurrent shunting on-center off-surround network whose excitatory feedback signals are gated by habituative transmitters. Tonic arousal signals energize the pacemaker. Diurnal (left) and nocturnal (right) pacemakers are determined by whether phasic light signals turn the pacemaker on or off. An activity-dependent fatigue signal prevents the pacemaker from becoming overly active for too long. (b) Two simulations of circadian activity cycles during different schedules of light (L) and dark (D). See the text for details.
    || sourceOn-> on-cells (recurrent) <-(-) (-)> off-cells (recurrent) <-sourceOff. on-cells-> activity-> off-cells. off-cells-> fatigue. Diurnal: sourceOn=[light, arousal]; sourceOff=arousal;. Nocturnal: sourceOn=arousal; sourceOff=[arousal, light];.
  • image p568fig15.39 Circuits of the MOTIVATOR model that show hypothalamic gated dipoles.
    || inputs-> [object, value] categories-> object-value categories-> [reward expectation filter, [FEF, EAT] outputs]. reward expectation filter [DA dip, arousal burst]-> alpha1 non-specific arousal-> value categories. Msi drive inputs-> value categories.
  • image p569fig15.40 The direct and indirect basal ganglia circuits that control GO and STOP movement signals. See the text for details.
    || [Direct path GO(+), Indirect path STOP(+), dopamine from SNc(+-)]-> striatum. GO-> GPi/SNr-> Thalamus (VA/Vlo) <-> frontal cortex. STOP-> GPe <-> STN-> GPi/SNr. NAc-> GPi/SNr.
  • image p573fig16.01 The experimental chamber (A) and neurophysiological recordings from a rat hippocampus (B) that led to the discovery of place cells. See the text for details.
    ||
  • image p574fig16.02 Neurophysiological recordings of 18 different place cell receptive fields. See the text for details.
    ||
  • image p575fig16.03 As a rat navigates in its experimental chamber (black curves), neurophysiological recordings disclose the firing patterns (in red) of (a) a hippocampal place cell and (b) an entorhinal grid cell.
    ||
  • image p578fig16.04 Cross-sections of the hippocampal regions and the inputs to them. See the text for details.
    || EC-> CA1-> CA3-> DG. Layers [V/VI, III, II].
  • image p580fig16.05 Macrocircuit of the GridPlaceMap model, which can learn both 2D grid cells and place cells in response to realistic trajectories of navigating rats using a hierarchy of SOMs with identical equations.
    || GridPlaceMap model: rate-based and spiking (Pilly, Grossberg 2012). Pre-wired 1D stripe cells, learns both 2D grid and place cells! Same laws for both; both select most frequent and energetic inputs. Place cells emerge gradually in response to developing grid cells. [place-> grid-> stripe] cells-> path integration-> vestibular signals.
  • image p581fig16.06 The learning of hexagonal grid cell receptive fields as an animal navigates an open field is a natural consequence of simple trigonometric properties of the positions at which the firing of stripe cells that are tuned to different directions will co-occur.
    || The Trigonometry of spatial navigation. Coactivation of stripe cells.
  • image p582fig16.07 Stripe cells were predicted in (Mhatre, Gorchetchnikov, Grossberg 2012) to convert linear velocity signals into the distances travelled in particular directions. They are modeled by directionally-sensitive ring attractors, which help to explain their periodic activation as an animal continues to move in a given direction. See the text for details.
    || Stripe cells. Stripe cells are predicted to exist in (or no later than) EC layer (III, V/VI). Linear path integrators: represent distance traveled using linear velocity modulated with head direction signal. Ring attractor circuit: the activity bump represents distance traveled, stripe cells with same spatial period and directional preference fire with different spatial phases at different ring positions. Distance is computed directly, it does not require decoding by oscillatory interference. Periodic stripe cell activation due to ring anatomy: periodic boundary conditions. Stripe firing fields with multiple orientations, phases and scales.
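The ring-attractor idea in this caption can be sketched as a phase that integrates the velocity component along the cell's preferred direction, modulo the spatial period; the period, speeds, and trajectory below are illustrative assumptions, not model values:

```python
import numpy as np

# Sketch of a stripe cell as a 1D ring-attractor path integrator: the
# activity bump's position is distance traveled along the preferred
# direction, and the ring's periodic boundary makes firing periodic in
# space, with no oscillatory-interference decoding required.
period = 40.0        # spatial period of this stripe cell (cm), assumed
pref_dir = 0.0       # preferred direction (radians), assumed

def integrate(velocities, headings, dt=0.1, phase=0.0):
    for v, th in zip(velocities, headings):
        phase = (phase + v * np.cos(th - pref_dir) * dt) % period
    return phase

# 100 cm straight along the preferred direction: the bump wraps the ring
# 100/40 = 2.5 times and ends at phase 20 cm.
p1 = integrate(np.full(100, 10.0), np.zeros(100))        # 10 cm/s for 10 s
# Movement perpendicular to the preferred direction leaves the phase alone.
p2 = integrate(np.full(100, 10.0), np.full(100, np.pi / 2))
```

Stripe cells with the same period but different starting phases on the ring would fire at different spatial offsets, giving the multiple phases the caption describes.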
  • image p582fig16.08 Some experimental evidence for stripe-like cell receptive fields has been reported. The band cells posited by Neil Burgess also exhibit the one-dimensional firing symmetry of stripe cells, but are modeled by oscillatory interference. See the text for details.
    || Evidence for stripe-like cells. Entorhinal cortex data (Sargolini, Fyhn, Hafting, McNaughton, Witter, Moser, Moser 2006; Krupic, Burgess, O'Keefe 2012). Similar hypothetical construct used by interference model but position is decoded by grid cell oscillatory interference- Band Cells (Burgess 2008).
  • image p583fig16.09 The GRIDSmap model used algorithmically defined stripe cells to process realistic rat trajectories. The stripe cell outputs then formed inputs to the adaptive filter of a self-organizing map which learned hexagonal grid cell receptive fields.
    || GRIDSmap. Self-organizing map receives inputs from stripe cells and learns to respond to most frequent co-activation patterns. Stripe cells combine speed and head direction to create a periodic 1D position code. Virtual rat navigated using live rat trajectories from Moser Lab. Speed and head direction drive stripe cells.
  • image p583fig16.10 The GRIDSmap model is embedded into a more complete representation of the processing stages from receipt of angular head velocity and linear velocity signals to this learning of place cells.
    || GRIDSmap. Pre-wired 2D stripe cells, learns 2D grid cells. vestibular cells [angular head velocity-> head direction cells, linear velocity]-> stripe cells- small scale 1D periodic spatial code (ECIII)-> SOM grid cells entorhinal cortex- small scale 2D periodic spatial scale-> SOM place cells hippocampal cortex- large scale 2D spatial code (dentate/CA3). Unified hierarchy of SOMs.
  • image p584fig16.11 GRIDSmap simulation of the learning of hexagonal grid fields. See the text for details.
    || Simulation results. Multiple phases per scale. response vs length scale (0.5m+).
  • image p584fig16.12 Temporal development of grid cell receptive fields on successive learning trials (1,3,5,7,25,50,75,100).
    || Temporal development of grid fields. Cells begin to exhibit grid structure by 3rd trial. Orientations of the emergent grid rotate to align with each other over trials.
  • image p585fig16.13 Hexagonal grid cell receptive fields develop if their stripe cell directional preferences are separated by 7, 10, 15, 20, or random numbers of degrees. The number and directional selectivities of stripe cells can thus be chosen within broad limits without undermining grid cell development.
    ||
  • image p585fig16.14 Superimposing firing of stripe cells whose directional preferences differ by 60 degrees supports learning hexagonal grid cell receptive fields in GRIDSmap.
    || GRIDSmap: from stripe cells to grid cells. Grid-cell Regularity from Integrated Distance through Self-organizing map. Superimposing firing of stripe cells oriented at intervals of 60 degrees. Hexagonal grid!
  • image p586fig16.15 Superimposing stripe cells oriented by 45 degrees does not lead to learning of rectangular grids in GRIDSmap, but it does in an oscillatory interference model.
    || Why is a hexagonal grid favored? Superimposing firing of stripe cells oriented at intervals of 45 degrees. Rectangular grid. This and many other possibilities do not happen in vivo. They do happen in the oscillatory interference model. How are they prevented in GRIDSmap?
  • image p586fig16.16 In the place cell learning model of (Gorchetchnikov, Grossberg 2007), three populations of five cells each of entorhinal grid cells (only two are shown) with different spatial periods input to the model's dentate gyrus. The grid cells are one-dimensional and defined algorithmically. A model dentate gyrus granule cell that receives strong projections from all three grid cell scales fires (green cell) and activates a recurrent inhibitory interneuron that inhibits other granule cells. It also generates back-propagating action potentials that trigger learning in the adaptive weights of the projections from the grid cells, thereby causing learning of place cell receptive fields.
    || Grid-to-place Self-Organizing map (Gorchetchnikov, Grossberg 2007). Formation of place cell fields via grid-to-place cell learning. Least common multiple: [grid (cm), place (m)] scales: [40, 50, 60 (cm); 6m], [50, 60, 70 (cm); 21m], [41, 53, 59 (cm); 1.282 km]. Our simulations: [40, 50 (cm); 2m], [44, 52 (cm); 5.72m]. Our SOM: Spiking Hodgkin-Huxley membrane equations; Nonlinear choice by contrast-enhancing recurrent on-center off-surround net;. Choice triggers back-propagating action potentials that induce STDP-modulated learning on cell dendrites.
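The "least common multiple" point in this caption is easy to verify: a downstream cell that requires coincident peaks from several grid periods fires with a spatial period equal to their LCM, which is how small grid scales can represent much larger spaces. A minimal check of the caption's numbers:

```python
from functools import reduce
from math import gcd

# Place-field scale as the least common multiple of grid-cell periods (cm).
def lcm_scale_cm(periods):
    return reduce(lambda a, b: a * b // gcd(a, b), periods)

print(lcm_scale_cm([40, 50, 60]))   # 600 cm = 6 m, as in the caption
print(lcm_scale_cm([50, 60, 70]))   # 2100 cm = 21 m
print(lcm_scale_cm([41, 53, 59]))   # 128207 cm, about 1.282 km
```

The third case shows why nearly coprime periods blow up the representable range: 41, 53, and 59 share no factors, so the coincidence period is their full product.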
  • image p587fig16.17 A finer analysis of the 2D trigonometry of spatial navigation showed that both the frequency and amplitude of coactivations by stripe cells determine the learning of hexagonal grid fields.
    || A refined analysis: SOM amplifies most frequent and energetic coactivations (Pilly, Grossberg 2012). [linear track, 2D environment]. (left) Stripe fields separated by 90°. 25 coactivations by 2 inputs. (right) Stripe fields separated by 60°. 23 coactivations by 3 inputs.
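The 60°-versus-45° argument running through Figures 16.14-16.17 can be checked with cosine "stripe gratings", a simplification of stripe firing fields with all spatial periods equal (set to 1 here, an assumption for illustration): gratings whose preferred directions differ by 60° have their maxima coincide on a hexagonal lattice, while for 45° spacing the gratings' maxima all coincide only at the origin, so coactivations are rarer and less energetic:

```python
import numpy as np

# Each stripe "grating" fires maximally where the position's projection on
# its preferred direction is an integer multiple of the period (here 1).
def coactivation(point, dirs_deg):
    units = [np.array([np.cos(np.radians(d)), np.sin(np.radians(d))])
             for d in dirs_deg]
    return sum(np.cos(2 * np.pi * (u @ point)) for u in units)

hex_dirs = [0, 60, 120]           # stripe directions 60 degrees apart
# A nearest perfect-coincidence point of the three gratings:
p = np.array([0.0, 2 / np.sqrt(3)])
v_hex = coactivation(p, hex_dirs)         # 3 = all three at maximum
# Rotating p by 60 degrees gives another perfect coincidence: six-fold
# symmetry, i.e. a hexagonal lattice of strong coactivations.
rot60 = np.array([[0.5, -np.sqrt(3) / 2], [np.sqrt(3) / 2, 0.5]])
v_rot = coactivation(rot60 @ p, hex_dirs)
# With 45-degree spacing the same point is NOT a full coincidence of all
# four gratings; away from the origin, full coincidences never recur.
v_45 = coactivation(p, [0, 45, 90, 135])
```

This only illustrates the geometry of coincident firing; in GRIDSmap it is the SOM's preference for frequent, energetic coactivations that turns this geometry into learned hexagonal grid fields.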
  • image p588fig16.18 Simulations of coordinated learning of grid cell receptive fields (second row) and unimodal place cell receptive fields (third row) by the hierarchy of SOMs in the GridPlaceMap model. Note the exquisite regularity of the hexagonal grid cell firing fields.
    || [stripe, grid, place] cells vs [spikes on trajectory, unsmoothed rate map, smoothed rate map].
  • image p589fig16.19 Neurophysiological data showing the smaller dorsal grid cell scales and the larger ventral grid cell scales.
    || Spatial scale of grid cells increases along the MEC dorsoventral axis (Hafting etal 2005; Sargolini etal 2006; Brun etal 2008). [dorsal (left), ventral (right)] cart [rate map, autocorrelogram]. How does the spatial scale increase along the MEC dorsoventral axis?
  • image p590fig16.20 Integration rate of grid cells decreases along the dorsoventral gradient of the Medial Entorhinal Cortex, or MEC.
    || Dorsoventral gradient in the rate of synaptic integration of MEC layer II stellate cells (Garden etal 2008). Cross-section of [Hp, CC, LEC, MEC]. (A left column) [dorsal, ventral] mV? vs msec. (B center column) [half width (ms), rise time (ms), amplitude (mV)] vs location (μm). (C right upper) responses. (D right lower) width (ms) vs location (μm).
  • image p590fig16.21 Frequency of membrane potential oscillations in grid cells decreases along the dorsoventral gradient of the MEC.
    || Dorsoventral gradient in the frequency of membrane potential oscillations of MEC layer II stellate cells (Giocomo etal 2007). (C left column) Oscillation (Hz) vs distance from dorsal surface (mm). (D right upper) [dorsal, ventral] oscillations 5mV-500ms. (E right lower) [dorsal, ventral] oscillations 100ms. Both membrane potential oscillation frequency and resonance frequency decrease from the dorsal to ventral end of MEC.
  • image p591fig16.22 Time constants and duration of afterhyperpolarization currents of grid cells increase along the dorsoventral gradient of the MEC.
    || Dorsoventral gradient in afterhyperpolarization (AHP) kinetics of MEC layer II stellate cells (Navratilova etal 2012). [mAHP time constant (ms), Half-width (mm)] vs distance from the dorsal surface (mm), at [-55, -50, -45] mV. Time constants and duration of AHP increase from the dorsal to the ventral end of MEC layer II. Effectively, the relative refractory period is longer for ventral stellate cells in MEC layer II.
  • image p591fig16.23 The Spectral Spacing Model uses a rate gradient to learn a spatial gradient of grid cell receptive field sizes along the dorsoventral gradient of the MEC.
    || Spectral spacing model. Map cells responding to stripe cell inputs of multiple scales. Grid cells: MEC layer II (small scale 2D spatial code). Stripe cells: PaS / MEC deep layer (small scale 1D spatial code). Path Integration. Vestibular signals- linear velocity and angular head velocity. SOM. How do entorhinal cells solve the scale selection problem?
  • image p592fig16.24 Parameter settings in the Spectral Spacing Model that were used in simulations.
    || Simulation settings. Activity vs distance (cm). Learning trials: 40.
  • image p593fig16.25 Spectral Spacing Model STM, MTM, and LTM equations. The rate spectrum that determines the dorsoventral gradient of multiple grid cell properties is defined by μm.
    || Spectral Spacing Model equations. [STM, MTM, LTM]. μm = rate spectrum.
  • image p593fig16.26 Data (left column) and simulations (right column) of the gradient of increasing grid cell spacing along the dorsoventral axis of MEC.
    || Gradient of grid spacing along dorsoventral axis of MEC (Brun etal 2008). data-[Distance (m?), Median grid spacing (m?)] simulations-[Grid spacing (cm), Grid spacing (cm)] vs response rate.
  • image p594fig16.27 Data (left column) and simulations (right column) of the gradient of increasing grid cell field width along the dorsoventral axis of MEC.
    || Gradient of field width along dorsoventral axis of MEC (Brun etal 2008). data-[Distance (m?), Width autocorr peak (m?)] simulations-[Grid field width (cm), Width autocorr peak (cm)] vs response rate.
  • image p595fig16.28 Data (left column) and simulations (right column) about peak and mean grid cell response rates along the dorsoventral axis of MEC.
    || Peak and mean rates at different locations along DV axis of MEC (Brun etal 2008). Peak rate (Hz) vs [data- DV quarter, simulations- Response rate].
  • image p596fig16.29 Data (top row) and simulations (bottom row) showing decreasing frequency of subthreshold membrane potential oscillations along the DV axis of MEC.
    || Subthreshold membrane potential oscillations at different locations along DV axis of MEC (Giocomo etal 2020; Yoshida etal 2011). Data: [oscillations (Hz) vs distance from dorsal surface (mm) @[-50, -45] mV, Frequency (Hz) vs [-58, -54, -50] mV]. Simulations: MPO frequency (Hz) vs [response, habituation] rate.
  • image p596fig16.30 Data (top row) and simulations (bottom row) of spatial phases of learned grid and place cells.
    || Spatial phases of learned grid and place cells (Hafting etal 2005). Data: Cross-correlogram of rate maps of two grid cells; Distribution of phase difference: distance from origin to nearest peak in cross-correlogram. Simulations: Grid cell histogram of spatial correlation coefficients; Place cell histogram of spatial correlation coefficients.
  • image p597fig16.31 Data (a) and simulations (b-d) about multimodal place cell receptive fields in large spaces. The simulations are the result of learned place fields.
    || Multimodal place cell firing in large spaces (Fenton etal 2008; Henriksen etal 2010; Park etal 2011). Number of cells (%) vs Number of place fields. [2, 3] place fields, 100*100 cm space.
  • image p597fig16.32 Data (top row) and simulations (bottom row) about grid cell development in juvenile rats. Grid score increases (a-b and d), whereas grid spacing remains fairly flat (c and e).
    || Model fits data about grid cell development (Wills etal 2010; Langston etal 2010). Data: [Gridness, grid score, inter-field distance (cm)]. Simulations: [Gridness score, Grid spacing (cm)] vs trial.
  • image p598fig16.33 Data (top row) and simulations (bottom row) of changes in place cell properties in juvenile rats, notably about spatial information (a,c) and inter-trial stability (b,d).
    || Model fits data about place cell development (Wills etal 2010). [Data, Simulation] vs [spatial information, inter-trial stability]. x-axis [age (postnatal day), trial].
  • image p598fig16.34 The spiking GridPlaceMap model generates theta-modulated place and grid cell firing, unlike the rate-based model.
    || Theta-modulated cells in spiking model. [place, grid] cell vs [membrane potential (mV vs time), frequency vs inter-spike intervals (s), power spectra (normalized power vs frequency (Hz))].
  • image p599fig16.35 Data (a) and simulations (b,c) about anatomically overlapping grid cell modules. (a) shows the anatomical distribution of grid cells belonging to different modules in one animal. DV location (mm) vs postrhinal border. (b) shows the simulated distribution of learned grid cell spacings from two stripe cell scales. frequency (%) vs grid spacing (cm). mu = [1, 0.6]. (c) shows what happens when half the cells respond with one rate and half another rate. (d) shows the same with three rates. (e-g) show spatial maps and autocorrelograms of grid cells that arise from the different rates in (d). [rate map, autocorrelogram] vs [score [1.07, 0.5, 0.67], spacing (cm) [23.58, 41, 63.64]].
    ||
  • image p600fig16.36 The entorhinal-hippocampal system has properties of an ART spatial category learning system, with hippocampal place cells as the spatial categories. See the text for details.
    || Entorhinal-hippocampal interactions as an ART system. Hippocampal place cells as spatial categories. Angular head velocity-> head direction cells-> stripe cells- small scale 1D periodic code (ECIII) SOM-> grid cells- small scale 2D periodic code (ECII) SOM-> place cells- larger scale spatial map (DG/CA3)-> place cells (CA1)-> conjunctive-coding cells (EC V/VI)-> top-down feedback back to stripe cells- small scale 1D periodic code (ECIII). stripe cells- small scale 1D periodic code (ECIII)-> place cells (CA1).
  • image p602fig16.37 Data showing the effect of hippocampal inactivation by muscimol on grid cell firing before, during, and six hours after the muscimol, reading from left to right.
    || Hippocampal inactivation disrupts grid cells (Bonnevie etal 2013). muscimol inactivation. spikes on trajectory: [before, after min [6-20, 20-40, 40-60, 6h]]. rate map (Hz) [18.6, 11.4, 9.5, 6.7, 10.8]. spatial autocorrelogram g=[1.12, 0.05, -0.34, 0.09, 1.27].
  • image p603fig16.38 Role of hippocampal feedback in maintaining grid fields. (a) Data showing the effect of hippocampal inactivation before and during muscimol inhibition of hippocampal cells, as in Figure 16.37. (b) Model simulation with normal grid fields. (c) Model simulation that emulates the effect of hippocampal inhibition on grid fields.
    || (a) Data: hippocampal inactivation [before, after] cart [spikes on trajectory (p: [18.6, 6.7] Hz), spatial autocorrelogram (g= [1.12, 0.09])]. (b) Model: noise-free path integration, [spikes on trajectory (p: 14.56 Hz), rate map, spatial autocorrelogram (g= 1.41), dynamic autocorrelogram (g=0.6)]. (c) Model: noisy path integration + non-specific tonic inhibition, [spikes on trajectory (p: 11.33 Hz), rate map, spatial autocorrelogram (g= 0.05), dynamic autocorrelogram (g=0.047)].
  • image p605fig16.39 Data showing effects of medial septum (MS) inactivation on grid cells and network theta oscillations in medial entorhinal cortex (MEC). (A) Examples of disruption in the spatial expression of the hexagonal grid structure for two grid cells (Brandon etal 2011). (B) Temporal reduction in the power and frequency of network theta oscillations (Koenig etal 2011). (C) Temporary reduction in the gridness score, mean firing rate, and spatial stability of grid cells (Koenig etal 2011).
    || Disruptive effects of Medial Septum inactivation in Medial Entorhinal Cortex (Brandon etal 2011; Koenig etal 2011). (A) Rate map [rate map, spatial autocorrelations, trajectory] vs [baseline, sub-sampled, medial septum inactivation, 3-6 hour recovery, 24 hour recovery], [rate map (Hz- m, p), spatial autocorrelations (gridness)][ 1.2, 7.2, 1.1; 0.25, 1.7, 0.6; 0.25, 2.5, -0.53; 0.7, 5.1, 0.55; 1.0, 5.3, 1.3; 2.1, 15, 0.19; 1.7, 12, 0.71; 1.7, 3.2, -0.22; 1.8, 9.1, 0.68; 2.5, 13, 0.46]. (B) [normalized power at 7-9 Hz, frequency (Hz)] vs 5-minute periods. (C) [mean gridness score (+-SEM), mean firing rate (% of baseline), mean correlation coeff (+-SEM)] vs 10-minute periods.
  • image p607fig16.40 Effects of medial septum (MS) inactivation on grid cells. (a) Each row shows data and different data-derived measures of grid cell responsiveness, starting from the left with the baseline response to the middle column with maximal inhibition. (b) Data showing the temporary reduction in the gridness scores during MS inactivation, followed by recovery. (c) Simulation of the collapse in gridness, achieved by reduction in cell response rates to mimic reduced cholinergic transmission. (d,e) Simulations of the reduction in gridness scores in (d) by reduction of cell response rates, in (e) by changing the leak conductance. See the text for details.
    ||
  • image p611fig16.41 How back-propagating action potentials, supplemented by recurrent inhibitory interneurons, control both learning within the synapses on the apical dendrites of winning pyramidal cells, and regulate a rhythm by which associative read-out is dissociated from read-in. See the text for details.
    ||
  • image p612fig16.42 Macrocircuit of the main SOVEREIGN subsystems.
    || [reward input, drive input, drive representation (DR), visual working memory and planning system (VWMPS), visual form and motion system (VFMS), motor approach and orienting system (MAOS), visual input (VisIn), motor working memory and planning system (MWMPS), motor approach and orienting system (MAOS), motor plant (MotP), Proprioceptive Input (PropIn), Vestibular Input (VesIn), Environmental feedback (EnvFB). DR [incentive motivational learning-> [VWMPS, MWMPS], -> VFMS, -> MAOS], VWMPS [conditioned reinforcer learning-> DR, MAOS], VFMS [visual object categories-> VWMPS, reactive movement commands-> MAOS], MWMPS [conditioned reinforcer learning-> DR, planned movement commands-> MAOS], MAOS [motor map positions-> MWMPS, motor outflow-> MotP], VisIn-> VFMS, VesIn-> MAOS, EnvFB-> [VisIn, MotP, VesIn].
  • image p613fig16.43 The main visual form and motion processing stream mechanisms of SOVEREIGN, many of them described at length in previous chapters.
    || Render 3-D scene (R3DS), figure-ground separation (FGS), log-polar transform (LPT), Gaussian coarse-coding (GCC), Invariant visual target map (IVTM), What Fuzzy ART (WhatFuzz), body spatial coordinates (BSC), where reactive visual TPV storage (WRVTS), Directional transient cell network (DTCN), Motion direction hemifield map (MDHM), Hemifield left/right scoring (HLRS), reactive visual control signal (RVCS), Parvo/Magno/Erg competition (PMEC), Approach and Orient GOp (AOGp), GOm (GOm). R3DS [parvo-> FGS, magno-> DTCN], FGS-> [LPT, WRVTS], LPT-> GCC-> IVTM-> WhatFuzz, BSC-> [RVTS, PMEC], PMEC-> [gateRVTS-> RVTS, gateRVCS-> RVCS], DTCN-> MDHM-> HLRS, HLRS-> [PMEC, RVCS], AOGp-> gateRVTS, GOm-> gateRVCS.
  • image p613fig16.44 The main target position vector (TPV), difference vector (DV), and volitional GO computations in SOVEREIGN that bring together reactive and planned signals to control decision-making and action. See the text for details.
    || Reactive visual TPV (RVT), NETs (NETs), S-MV mismatch (SMVM), NETmv (NETmv), reactive visual TPV storage (RVTS), reactive DV1 (RD1), NET (NET), motivated what and where decisions (MWWD), Planned DV1 (PD1), tonic (Tonic), top-down readout mismatch (TDRM), Parvo gate (tonic) (PG), Orienting GOp offset (OGpO). RVT-> [NETs, RVTS], NETs-> [SMVM, NET], SMVM-> NET, NETmv-> SMVM, RVTS-> [NETs, RD1], NET-> [RD1, PD1, TDRM], MWWD-> PD1, PD1-> Tonic-> TDRMPG-> NETs, OGpO-> [NETmv, PD1].
  • image p614fig16.45 The main distance (d) and angle (a) computations that bring together and learn dimensionally-consistent visual and motor information whereby to make the currently best decisions and actions. See the text for details.
    || Reactive Visual TPV [m storage], NETm S-MV mismatch, MV mismatch, NETmv, PPVv, PPVm, Vestibular feedback, motor copy.
  • image p615fig16.46 SOVEREIGN uses homologous processing stages to model the (a) What cortical stream and the (b) Where cortical stream, including their cognitive working memories and chunking networks, and their modulation by motivational mechanisms. See the text for details.
    ||
  • image p615fig16.47 SOVEREIGN models how multiple READ circuits, operating in parallel in response to multiple internal drive sources, can be coordinated to realize a sensory-drive heterarchy that can maximally amplify the motivationally most currently favored option.
    ||
  • image p616fig16.48 SOVEREIGN was tested using a virtual reality 3D rendering of a cross maze (a) with different visual cues at the end of each corridor.
    ||
  • image p616fig16.49 The animat learned to convert (a) inefficient exploration of the maze into (b) an efficient direct learned path to the goal.
    ||
  • image p617fig16.50 The perirhinal and parahippocampal cortices enable adaptively timed reinforcement learning and spatial navigational processes that are modeled by Spectral Spacing models in the What and Where cortical streams, respectively, to be fused in the hippocampus.
    || What and Where inputs to the hippocampus (Diana, Yonelinas, Ranganath 2007). Adaptively timed conditioning and spatial navigation. Hippocampus <-> Entorhinal Cortex <-> [Perirhinal Cortex <-> what, Parahippocampal Cortex <-> where].
  • image p627tbl17.01 Homologs between reaction-diffusion and recurrent shunting cellular network models of development.
    || byRows: (reaction-diffusion, recurrent shunting net) (activator, excitatory activity) (inhibitor, inhibitory activity) (morphogenic source density, inputs) (firing of morphogen gradient, contrast enhancement) (maintenance of morphogen gradient, short-term memory) (power or sigmoidal signal functions, power or sigmoidal signal functions) (on-center off-surround interactions via diffusion, on-center off-surround interactions via signals) (self-stabilizing distributions of morphogens if inhibitors equilibrate rapidly, short-term memory pattern if inhibitors equilibrate rapidly) (periodic pulses if inhibitors equilibrate slowly, periodic pulses if inhibitors equilibrate slowly) (regulation, adaptation).
  • image p628fig17.01 A hydra
    ||
  • image p628fig17.02 Schematics of how different cuts and grafts of the normal Hydra in (a) may (*) or may not lead to the growth of a new head. See the text for details.
    ||
  • image p629fig17.03 How an initial morphogenetic gradient may be contrast enhanced to exceed the threshold for head formation in its most active region.
    || head formation threshold, final gradient, initial gradient.
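The contrast enhancement in this figure can be caricatured in a few lines (my simplification, not Grossberg's differential equations: a faster-than-linear signal function followed by the shunting net's normalization of total activity, iterated; the gradient values are arbitrary):

```python
import numpy as np

x = np.linspace(0.3, 0.9, 8)          # shallow initial morphogen-like gradient
initial_peak_share = x.max() / x.sum()

for _ in range(20):
    s = x ** 2                         # faster-than-linear recurrent signal
    x = s / s.sum()                    # off-surround normalizes total activity

peak_share = x.max() / x.sum()
```

The most active cell ends up holding nearly all the activity, so only the peak of the initial gradient can exceed a fixed head-formation threshold, as the figure depicts.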
  • image p630fig17.04 Morphogenesis: more ratios (Wolpert 1969). Shape preserved as size increases. French flag problem. Use cellular models! (Grossberg 1976, 1978) vs chemical or fluid reaction-diffusion models (Turing 1952; Gierer, Meinhardt 1972).
    ||
  • image p631fig17.05 How a blastula develops into a gastrula. See the text for details.
    || 1. The vegetal pole of the blastula flattens, [Animal, vegetal] hemisphere, blastocoel. 2. Some cells change shape and move inward to form the archenteron, Blastopore. 3. Other cells break free, becoming mesenchyme. 4. Then extensions of mesenchyme cells attach to the overlying ectoderm, Archenteron. 5. The archenteron elongates, assisted by the contraction of mesenchyme cells. 6. The mouth will form, where the archenteron meets ectoderm. 7. The blastopore will form the anus of the mature animal. [Mesenchyme, Ectoderm, Endoderm, Blastocoel, Archenteron, Mesenchyme]. Concept 38.3, www.macmillanhighered.com
  • image p634fig17.06 Summing over a population of cells with binary output signals whose firing thresholds are Gaussianly distributed (left image) generates a total output signal that grows in a sigmoidal fashion with increasing input size (dashed vertical line).
    || How binary cells with a Gaussian distribution of output thresholds generate a sigmoidal population signal. [# of binary cells with threshold T, Total output signal] vs cell firing thresholds T. Cell population with firing thresholds Gaussianly distributed around a mean value. As the input increases (dashed line), more cells in the population fire with binary signals. Total population output obeys a sigmoid signal function f.
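The figure's claim is easy to verify numerically (the population size and the Gaussian's mean and spread below are my choices):

```python
import numpy as np

rng = np.random.default_rng(1)
# Binary cells whose firing thresholds are Gaussianly distributed.
thresholds = rng.normal(loc=5.0, scale=1.5, size=10_000)

def population_output(inp):
    """Fraction of binary cells whose threshold the input exceeds."""
    return float(np.mean(thresholds < inp))

inputs = np.linspace(0.0, 10.0, 21)
outputs = [population_output(i) for i in inputs]
# outputs rises from ~0 to ~1 along an S-shaped (cumulative Gaussian) curve.
```

The summed output is exactly the empirical cumulative distribution of the thresholds, which for a Gaussian is the sigmoidal erf-shaped curve the figure describes.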
  • As described on the Introduction webPage, questions driving this "webSite" (collection of webPages, defined by the menu above) are:
  • How difficult would it be to augment "Transformer Neural Networks" (TrNNs) with Grossberg's [concept, architecture]s, including the emergent systems for consciousness? Perhaps this would combine the scalability of the former with the [robust, extendable] foundations of the latter, which are supported by [broad, diverse, deep] data from [neuroscience, psychology], as well as success in real-world advanced [science, engineering] applications.
  • Grossberg 2021 p081c2h0.66 sub-section: How does one evolve a computational brain?
    The above discussion illustrates that no single step of theoretical derivation can derive a whole brain. One needs a method for deriving a brain in stages, or cycles, much as evolution has incrementally discovered ever more complex brains over many thousands of years. The following theoretical method has been successfully applied many times since I first used it in 1957. It embodies a kind of conceptual evolutionary process for deriving a brain.

    Because "brain evolution needs to achieve behavioural success", we need to start with data that embody indices of behavioral success. That is why, as illustrated in
    Figure 2.37 Modelling method and cycle, one starts with Behavioral Data from scores or hundreds of psychological experiments. These data are analyzed as the result of an individual adapting autonomously in real time to a changing world. This is the Art of Modeling. It requires that one be able to infer from static data curves the dynamical processes that control individual behaviors occurring in real time. One of the hardest things that I teach my students is "how to think in real time" to be able to carry out this speculative leap.

    Properly carried out, this analysis leads to the discovery of new Design Principles that are embodied by these behavioral processes. The Design Principles highlight the functional meaning of the data, and clarify how individual behaviors occurring in real time give rise to these static data curves.

    These principles are then converted into the simplest Mathematical Model using a method of minimal anatomies, which is a form of Occam's Razor, or principle of parsimony. Such a mathematical model embodies the psychological principles using the simplest possible differential equations. By "simplest" I mean that, if any part of the derived model is removed, then a significant fraction of the targeted data could no longer be explained. One then analyzes the model mathematically and simulates it on the computer, showing along the way how variations on the minimal anatomy can realize the design principles in different individuals or species.

    This analysis has always provided functional explanations and Behavioral Predictions for much larger behavioral data bases than those used to discover the Design Principles. The most remarkable fact is, however, that the behaviorally derived model always looks like part of a brain, thereby explaining a body of challenging Neural Data and making novel Brain Predictions.

    The derivation hereby links mind to brain via psychological organizational principles and their mechanistic realization as a mathematically defined neural network. This startling fact is what I first experienced as a college Freshman taking Introductory Psychology, and it changed my life forever.

    I conclude from having had this experience scores of times since 1957 that brains look the way they do because they embody a natural computational realization for controlling autonomous adaptation in real-time to a changing world. Moreover, the Behavior -> Principles -> Model -> Neural derivation predicts new functional roles for both known and unknown brain mechanisms by linking the brain data to how it helps to ensure behavioral success. As I noted above, the power of this method is illustrated by the fact that scores of these predictions about brain and behavior have been supported by experimental data 5-30 years after they were first published.

    Having made the link from behavior to brain, one can then "burn the candle from both ends" by pressing both top-down from Behavioral Data and bottom-up from Brain Data to clarify what the model can and cannot explain at its current stage of derivation. No model can explain everything. At each stage of development, the model can cope with certain environmental challenges but not others. An important part of the mathematical and computational analysis is to characterize the boundary between the known and unknown; that is, which challenges the model can cope with and which it cannot. The shape of this boundary between the known and unknown helps to direct the theorist's attention to new design principles that have been omitted from previous analysis.

    The next step is to show how these new design principles can be incorporated into the evolved model in a self-consistent way, without undermining its previous mechanisms, thereby leading to a progressively more realistic model, one that can explain and predict ever more behavioral and neural data. In this way, the model undergoes a type of evolutionary development, as it becomes able to cope behaviorally with environmental constraints of ever increasing subtlety and complexity. The Method of Minimal Anatomies may hereby be viewed as a way to functionally understand how increasingly demanding combinations of environmental pressures were incorporated into brains during the evolutionary process.

    If such an Embedding Principle cannot be carried out - that is, if the model cannot be unlumped or refined in a self-consistent way - then the previous model was, put simply, wrong, and one needs to figure out which parts must be discarded. Such a model is, as it were, an evolutionary dead end. Fortunately, this has not happened to me since I began my work in 1957 because the theoretical method is so conservative. No theoretical addition is made unless it is supported by multiple experiments that cannot be explained in its absence. Where multiple mechanistic instantiations of some Design Principles were possible, they were all developed in models to better understand their explanatory implications. Not all of these instantiations could survive the pressure of the evolutionary method, but some always could. As a happy result, all earlier models have been capable of incremental refinement and expansion.

    The cycle of model evolution has been carried out many times since 1957, leading today to increasing numbers of models that individually can explain and predict psychological, neurophysiological, anatomical, biophysical, and even biochemical data. In this specific sense, the classical mind-body problem is being incrementally solved.

    Howell: bold added for emphasis.
    (keys : Principles-Principia, behavior-mind-brain link, brain evolution, cycle of model evolution)
    see also quotes: Charles William Lucas "Universal Force" and others (not retyped yet).
  • Grossberg 2021 p229c2h0.60 SMART computer simulations demonstrate that a good enough match of a top-down expectation with a bottom-up feature pattern generates an attentive resonance during which the spikes of active cells synchronize in the gamma frequency range of 20-70 Hz (Figure 5.40). Many labs have reported a link between attention and gamma oscillations in the brain, including two articles published in 2001, one from the laboratory of Robert Desimone when he was at the National Institute of Mental Health in Bethesda (Fries, Reynolds, Rorie, Desimone 2001), and the other from the laboratory of Wolf Singer in Frankfurt (Engel, Fries, Singer 2001). You'll note that Pascal Fries participated in both studies, and is an acknowledged leader in neurobiological studies of gamma oscillations; eg (Fries 2009). ..."
  • [definitions, models] of consciousness.html
  • What is consciousness: from historical to Grossberg

    see incorporate reader questions into theme webPage
    see Navigation: [menu, link, directory]s
  • p153 Howell: grepStr 'uncertainty' "multiple conflicting hypotheses" - a self-imposed practice to avoid becoming a [believer, tool] of a concept. But this was intended for [long-term, well-established, mainstream] theories, as well as new ideas that excite me. Does Grossberg's "uncertainty" concept also allow for "multiple conflicting hypotheses" to sit there and brew?
  • p190 Howell: [neural microcircuits, modal architectures] used in ART -
    bottom-up filters | top-down expectations | purpose
    instar learning | outstar learning | p200fig05.13 Expectations focus attention: [in, out]star often used to learn the adaptive weights. top-down expectations to select, amplify, and synchronize expected patterns of critical features, while suppressing unexpected features
    LGN Lateral Geniculate Nucleus | V1 cortical area | p192fig05.06 focus attention on expected critical features
    EC-III entorhinal stripe cells | CA1 hippocampal place cells | p600fig16.36 entorhinal-hippocampal system has properties of an ART spatial category learning system, with hippocampal place cells as the spatial categories
    auditory orienting arousal | auditory category | p215fig05.28 How a mismatch between bottom-up and top-down patterns can trigger activation of the orienting system A and, with it, a burst of nonspecific arousal to the category level. ART Matching Rule: TD mismatch can suppress a part of F1 STM pattern, F2 is reset if degree of match < vigilance
    auditory stream with/without [silence, noise] gaps | perceived sound continues? | p419fig12.17 The auditory continuity illusion: Backwards in time - How does a future sound let past sound continue through noise? Resonance!
    visual perception, learning and recognition of visual object categories | motion perception, spatial representation and target tracking | p520fig14.02 pART predictive Adaptive Resonance Theory. Many adaptive synapses are bidirectional, thereby supporting synchronous resonant dynamics among multiple cortical regions. The output signals from the basal ganglia that regulate reinforcement learning and gating of multiple cortical areas are not shown.
        red - cognitive-emotional dynamics
        green - working memory dynamics
        black - see [bottom-up, top-down] lists
    EEG with mismatch, arousal, and STM reset events | expected [P120, N200, P300] event-related potentials (ERPs) | p211fig05.21 Sequences of P120, N200, and P300 event-related potentials (ERPs) occur during oddball learning EEG experiments under conditions that ART predicted should occur during sequences of mismatch, arousal, and STM reset events, respectively.
    Cognitive | Emotional | p541fig15.02 nSTART neurotrophic Spectrally Timed Adaptive Resonance Theory: Hippocampus can sustain a Cognitive-Emotional resonance: that can support "the feeling of what happens" and knowing what event caused that feeling. Hippocampus enables adaptively timed learning that can bridge a trace conditioning gap, or other temporal gap between Conditioned Stimulus (CS) and Unconditioned Stimulus (US).

    background colours in the table signify :
    white | general microcircuit : a possible component of ART architecture
    lime green | sensory perception [attention, expectation, learn]. Table includes [see, hear, !!*must add touch example*!!], no Grossberg [smell, taste] yet?
    light blue | post-perceptual cognition?
    pink | "the feeling of what happens" and knowing what event caused that feeling
    Note that a separate webPage lists a very small portion of Stephen Grossberg's publications.
  • J.E. Kaal, A. Otte, J.A. Sorensen, J.G. Emming 2021 "The nature of the atom" www.Curtis-Press.com, 268pp ISBN 978-1-8381280-2-9 https://StructuredAtom.org/
  • rationalwiki.org "Quantum consciousness" (last update 07Nov2022, viewed 16Jul2023)
    also critiques of the article above
  • Terrence J. Sejnowski 21Aug2023 "Large Language Models and the Reverse Turing Test", Neural Computation (2023) 35 (3): 309–342 (33 pages) https://direct.mit.edu/neco/issue (also copy in case original link fails)
  • Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin 12Jun2017 "Attention Is All You Need" [v5] Wed, 6 Dec 2017 03:30:32 UTC https://arxiv.org/abs/1706.03762
  • Wikipedia Consciousness
  • image p038fig01.25 The ART Matching Rule stabilizes real time learning using a [top-down, modulatory on-center, off-surround] network. Object attention is realized by such a network. See text for additional discussion.
    || ART Matching Rule [volition, categories, features]. [one, two] against one.
  • image p483fig13.02 The object-value categories in the orbitofrontal cortex require converging specific inputs from the sensory cortex and nonspecific incentive motivational inputs from the amygdala in order to fire. When the orbitofrontal cortex fires, it can deliver top-down ART Matching Rule priming signals to the sensory cortical area by which it was activated, thereby helping to choose the active recognition categories there that have the most emotional support, while suppressing others, leading to attentional blocking of irrelevant cues.
    || Cognitive-Emotional-Motor (CogEM) model. Drive-> amygdala incentive motivational learning-> orbitofrontal cortex- need converging cue and incentive inputs to fire <-> sensory cortex- conditioned reinforcer learning-> amygdala. CS-> sensory cortex. Motivated attention closes the cognitive-emotional feedback loop, focuses on relevant cues, and causes blocking of irrelevant cues.
  • image p484fig13.04 The top-down feedback from the orbitofrontal cortex closes a feedback loop that supports a cognitive-emotional resonance. If this resonance can be sustained long enough, it enables us to have feelings at the same time that we experience the categories that caused them.
    || Cognitive-Emotional resonance. Basis of "core consciousness" and "the feeling of what happens". (Damasio 1999) derives heuristic version of CogEM model from his clinical data. Drive-> amygdala-> prefrontal cortex-> sensory cortex, resonance around the latter 3. How is this resonance maintained long enough to become conscious?
  • First, what is the 'hard problem of consciousness'? Wikipedia says: '... The hard problem of consciousness is the problem of explaining how and why we have qualia or phenomenal experiences - how sensations acquire characteristics, such as colors and tastes'. David Chalmers, who introduced the term 'hard problem' of consciousness, contrasts this with the 'easy problems' of explaining the ability to discriminate, integrate information, report mental states, focus attention, etc. As (Chalmers 1995) has noted: 'The really hard problem of consciousness is the problem of experience. When we think and perceive, there is a whir of information-processing, but there is also a subjective aspect. As (Nagel 1974) has put it, there is something it is like to be a conscious organism. This subjective aspect is experience. When we see, for example, we experience visual sensations: the felt quality of redness, the experience of dark and light, the quality of depth in a visual field. Other experiences go along with perception in different modalities: the sound of a clarinet, the smell of mothballs. Then there are bodily sensations, from pains to orgasms; mental images that are conjured up internally; the felt quality of emotion, and the feeling of a stream of conscious thought. What unites all these states is that there is something it is like to be in them. All of them are states of experience.'
    "... The Internet Encyclopedia of Philosophy goes on to say: 'The hard problem of consciousness is the problem of explaining why any physical state is conscious rather than nonconscious. It is the problem of explaining why there is something it is like for a subject in conscious experience, why conscious mental states light up and directly appear to the subject.' ..."
  • image p541fig15.02 The neurotrophic Spectrally Timed Adaptive Resonance Theory, or nSTART, model of (Franklin, Grossberg 2017) includes hippocampus to enable adaptively timed learning that can bridge a trace conditioning gap, or other temporal gap between CS and US.
    || Hippocampus can sustain a Cognitive-Emotional resonance: that can support "the feeling of what happens" and knowing what event caused that feeling. [CS, US] -> Sensory Cortex (SC) <- motivational attention <-> category learning -> Prefrontal Cortex (PFC). SC conditioned reinforcement learning-> Amygdala (cannot bridge the temporal gap) incentive motivational learning-> PFC. SC adaptively timed learning and BDNF-> Hippocampus (can bridge the temporal gap) BDNF-> PFC. PFC adaptively timed motor learning-> cerebellum.
  • p190 Howell: [neural microcircuits, modal architectures] used in ART -
    bottom-up filters | top-down expectations | purpose
    instar learning | outstar learning | p200fig05.13 Expectations focus attention: [in, out]star often used to learn the adaptive weights. top-down expectations to select, amplify, and synchronize expected patterns of critical features, while suppressing unexpected features
    LGN Lateral Geniculate Nucleus | V1 cortical area | p192fig05.06 focus attention on expected critical features
    EC-III entorhinal stripe cells | CA1 hippocampal place cells | p600fig16.36 entorhinal-hippocampal system has properties of an ART spatial category learning system, with hippocampal place cells as the spatial categories
    auditory orienting arousal | auditory category | p215fig05.28 How a mismatch between bottom-up and top-down patterns can trigger activation of the orienting system A and, with it, a burst of nonspecific arousal to the category level. ART Matching Rule: TD mismatch can suppress a part of F1 STM pattern, F2 is reset if degree of match < vigilance
    auditory stream with/without [silence, noise] gaps | perceived sound continues? | p419fig12.17 The auditory continuity illusion: Backwards in time - How does a future sound let past sound continue through noise? Resonance!
    visual perception, learning and recognition of visual object categories | motion perception, spatial representation and target tracking | p520fig14.02 pART predictive Adaptive Resonance Theory. Many adaptive synapses are bidirectional, thereby supporting synchronous resonant dynamics among multiple cortical regions. The output signals from the basal ganglia that regulate reinforcement learning and gating of multiple cortical areas are not shown.
        red - cognitive-emotional dynamics
        green - working memory dynamics
        black - see [bottom-up, top-down] lists
    EEG with mismatch, arousal, and STM reset events | expected [P120, N200, P300] event-related potentials (ERPs) | p211fig05.21 Sequences of P120, N200, and P300 event-related potentials (ERPs) occur during oddball learning EEG experiments under conditions that ART predicted should occur during sequences of mismatch, arousal, and STM reset events, respectively.
    Cognitive | Emotional | p541fig15.02 nSTART neurotrophic Spectrally Timed Adaptive Resonance Theory: Hippocampus can sustain a Cognitive-Emotional resonance: that can support "the feeling of what happens" and knowing what event caused that feeling. Hippocampus enables adaptively timed learning that can bridge a trace conditioning gap, or other temporal gap between Conditioned Stimulus (CS) and Unconditioned Stimulus (US).

    background colours in the table signify :
    white | general microcircuit : a possible component of ART architecture
    lime green | sensory perception [attention, expectation, learn]. Table includes [see, hear, !!*must add touch example*!!], no Grossberg [smell, taste] yet? some are conscious (decision-quality? or must interact with conscious cognitive?), others not
    light blue | post-perceptual cognition?
    pink | "the feeling of what happens" and knowing what event caused that feeling
  • image p038fig01.25 The ART Matching Rule stabilizes real time learning using a [top-down, modulatory on-center, off-surround] network. Object attention is realized by such a network. See text for additional discussion.
    || ART Matching Rule [volition, categories, features]. [one, two] against one.
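    The "[one, two] against one" property of the ART Matching Rule can be sketched numerically. In this toy model (the function, threshold, and patterns below are my own illustrative assumptions, not Grossberg's equations), a top-down expectation by itself is merely modulatory, because its on-center excitation is roughly cancelled by its own nonspecific off-surround inhibition ("one against one"), while a bottom-up input plus a matching top-down prime exceeds threshold ("two against one"):

    ```python
    # Toy sketch of the ART Matching Rule's modulatory on-center,
    # off-surround matching (illustrative values, not the book's equations).

    def matched_activity(bottom_up, top_down, threshold=0.5):
        """Suprathreshold activity per feature cell.

        Each feature receives on-center excitation from bottom_up and
        top_down, while the expectation also drives nonspecific
        off-surround inhibition, so top-down input alone cancels itself."""
        inhibition = max(top_down)  # nonspecific off-surround from the expectation
        out = []
        for b, t in zip(bottom_up, top_down):
            net = b + t - inhibition  # "two" (b + t) against "one" (inhibition)
            out.append(max(net - threshold, 0.0))
        return out

    bu = [1.0, 1.0, 0.0]   # bottom-up feature pattern
    td = [1.0, 0.0, 1.0]   # top-down expectation (primes features 0 and 2)

    print(matched_activity(bu, td))              # -> [0.5, 0.0, 0.0]: matched feature survives
    print(matched_activity([0.0, 0.0, 0.0], td)) # -> [0.0, 0.0, 0.0]: expectation alone is modulatory
    ```

    Note that with no top-down expectation (all-zero td), bottom-up input can still fire its cells, which is the sense in which the expectation modulates rather than creates activity.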
  • image p192fig05.06 Bottom-up and top-down circuits between the LGN and cortical area V1. The top-down circuits obey the ART Matching Rule for matching with bottom-up input patterns and focussing attention on expected critical features.
    || Model V1-LGN circuits, version [1, 2]. retina -> LGN relay cells -> interneurons -> cortex [simple, endstopped] cells -> cortex complex cells
  • image p200fig05.13 Instar and outstar learning are often used to learn the adaptive weights in the bottom-up filters and top-down expectations that occur in ART. The ART Matching Rule for object attention enables top-down expectations to select, amplify, and synchronize expected patterns of critical features, while suppressing unexpected features.
    || Expectations focus attention: feature pattern (STM), Bottom-Up adaptive filter (LTM), Category (STM), competition, Top-Down expectation (LTM); ART Matching Rule: STM before top-down matching, STM after top-down matching (attention!)
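    Both instar and outstar learning can be sketched as gated "tracking" of a pattern by adaptive weights; what differs is which cell's activity opens the learning gate. A minimal discrete-time sketch (the learning rate eta and the patterns are illustrative assumptions, not values from the book):

    ```python
    # Instar and outstar learning as gated steepest descent (toy sketch).

    def gated_track(w, x, gate, eta=0.5):
        """Move weights w toward pattern x only while gate > 0.

        Instar (bottom-up filter): gate = activity of the target category
        cell, so only an active category learns its input pattern.
        Outstar (top-down expectation): gate = activity of the source cell,
        so an active category reads out and refines its learned expectation."""
        return [wi + eta * gate * (xi - wi) for wi, xi in zip(w, x)]

    w = [0.0, 0.0, 0.0]
    x = [1.0, 0.5, 0.0]          # feature pattern being sampled
    for _ in range(10):
        w = gated_track(w, x, gate=1.0)   # gate open: w converges toward x

    w_frozen = gated_track(w, x, gate=0.0)  # gate closed: no learning occurs
    ```

    The gate is what keeps learning match-triggered: weights change only while the sampling cell is active, which is one reason ART learning can remain stable.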
  • image p520fig14.02 Macrocircuit of the main brain regions, and connections between them, that are modelled in the unified predictive Adaptive Resonance Theory (pART) of cognitive-emotional and working memory dynamics. Abbreviations in red denote brain regions used in cognitive-emotional dynamics. Those in green denote brain regions used in working memory dynamics. Black abbreviations denote brain regions that carry out visual perception, learning and recognition of visual object categories, and motion perception, spatial representation and target tracking. Arrows denote non-excitatory synapses. Hemidiscs denote adaptive excitatory synapses. Many adaptive synapses are bidirectional, thereby supporting synchronous resonant dynamics among multiple cortical regions. The output signals from the basal ganglia that regulate reinforcement learning and gating of multiple cortical areas are not shown. Also not shown are output signals from cortical areas to motor responses. V1: striate, or primary, visual cortex; V2 and V4: areas of prestriate visual cortex; MT: Middle Temporal cortex; MST: Medial Superior Temporal area; ITp: posterior InferoTemporal cortex; ITa: anterior InferoTemporal cortex; PPC: Posterior Parietal Cortex; LIP: Lateral InterParietal area; VPA: Ventral PreArcuate gyrus; FEF: Frontal Eye Fields; PHC: ParaHippocampal Cortex; DLPFC: DorsoLateral PreFrontal Cortex; HIPPO: hippocampus; LH: Lateral Hypothalamus; BG: Basal Ganglia; AMYG: AMYGdala; OFC: OrbitoFrontal Cortex; PRC: PeriRhinal Cortex; VPS: Ventral bank of the Principal Sulcus; VLPFC: VentroLateral PreFrontal Cortex. See the text for further details.
    ||
  • image p240fig05.44 When an algebraic exemplar model is realized using only local computations, it starts looking like an ART prototype model.
    || How does the model know which exemplars are in category A? BU-TD learning. How does a NOVEL test item access category A?
  • image p207fig05.19 The ART hypothesis testing and learning cycle. See the text for details about how the attentional system and orienting system interact in order to incorporate learning of novel categories into the corpus of already learned categories without causing catastrophic forgetting.
    ||
  • image p215fig05.28 How a mismatch between bottom-up and top-down input patterns can trigger activation of the orienting system A and, with it, a burst of nonspecific arousal to the category level.
    || Mismatch -> inhibition -> arousal -> reset. BU input orienting arousal, BU+TD mismatch arousal and reset. ART Matching Rule: TD mismatch can suppress a part of F1 STM pattern, F2 is reset if degree of match < vigilance
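    The reset condition above - "F2 is reset if degree of match < vigilance" - can be sketched as an ART 1-style binary match and search cycle. The category weights and vigilance values below are toy assumptions for illustration, not data from the book:

    ```python
    # Sketch of the ART vigilance test driving mismatch reset (ART 1-style
    # binary patterns; categories and vigilance values are toy assumptions).

    def match_degree(input_pattern, weights):
        """|I AND w| / |I|: fraction of the bottom-up pattern confirmed by
        the top-down expectation read out from the active category."""
        overlap = sum(i & w for i, w in zip(input_pattern, weights))
        return overlap / sum(input_pattern)

    def search(input_pattern, categories, vigilance):
        """Return the index of the first category whose match survives the
        vigilance test; each failed test models orienting-system arousal
        and reset of the active F2 category."""
        for j, w in enumerate(categories):
            if match_degree(input_pattern, w) >= vigilance:
                return j      # resonance: the match is good enough
            # else: mismatch -> nonspecific arousal -> reset category j, try next
        return None           # in full ART, an uncommitted node would be chosen

    I = [1, 1, 0, 1]
    cats = [[0, 1, 1, 0],     # overlap 1/3: reset at vigilance 0.6
            [1, 1, 0, 0]]     # overlap 2/3: passes at vigilance 0.6

    print(search(I, cats, vigilance=0.6))   # -> 1
    print(search(I, cats, vigilance=0.9))   # -> None (every category is reset)
    ```

    Raising vigilance therefore forces finer categories: matches that resonate under low vigilance trigger reset and further search under high vigilance.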
  • image p226fig05.35 I had shown in 1976 how a competitive learning or self-organizing map model could undergo catastrophic forgetting if the input environment was sufficiently dense and nonstationary, as illustrated by Figure 5.18. Later work with Gail Carpenter showed how, if the ART Matching Rule was shut off, repeating just four input patterns in the correct order could also cause catastrophic forgetting by causing superset recoding, as illustrated in Figure 5.36.
    || Code instability input sequences. D C A; B A; B C = ; |D|<|B|<|C|; where |E| is the number of features in the set E. Any set of input vectors that satisfy the above conditions will lead to unstable coding if they are periodically presented in the order ABCAD and the top-down ART Matching Rule is shut off.
  • image p226fig05.36 Column (a) shows catastrophic forgetting when the ART Matching Rule is not operative. It is due to superset recoding. Column (b) shows how category learning quickly stabilizes when the ART Matching Rule is restored.
    || Stable and unstable learning, superset recoding
  • image p241fig05.45 The 5-4 category structure is one example of how an ART network learns the same kinds of categories as human learners. See the text for details.
    || 5-4 Category structure. A1-A5: closer to the (1 1 1 1) prototype; B1-B4: closer to the (0 0 0 0) prototype
  • image p419fig12.17 The auditory continuity illusion illustrates the ART Matching Rule at the level of auditory streaming. Its "backwards in time" effect of future context on past conscious perception is a signature of resonance.
    || Auditory continuity illusion. input, percept. Backwards in time - How does a future sound let past sound continue through noise? Resonance! - It takes a while to kick in. After it starts, a future tone can maintain it much more quickly. Why does this not happen if there is no noise? - ART Matching Rule! TD harmonic filter is modulatory without BU input. It cannot create something out of nothing.
  • image p600fig16.36 The entorhinal-hippocampal system has properties of an ART spatial category learning system, with hippocampal place cells as the spatial categories. See the text for details.
    || Entorhinal-hippocampal interactions as an ART system. Hippocampal place cells as spatial categories. Angular head velocity-> head direction cells-> stripe cells- small scale 1D periodic code (ECIII) SOM-> grid cells- small scale 2D periodic code (ECII) SOM-> place cells- larger scale spatial map (DG/CA3)-> place cells (CA1)-> conjunctive-coding cells (EC V/VI)-> top-down feedback back to stripe cells- small scale 1D periodic code (ECIII). stripe cells- small scale 1D periodic code (ECIII)-> place cells (CA1).
  • image p211fig05.21 Sequences of P120, N200, and P300 event-related potentials occur during oddball learning EEG experiments under conditions that ART predicted should occur during sequences of mismatch, arousal, and STM reset events, respectively.
    || ERP support for mismatch-mediated reset: event-related potentials: human scalp potentials. ART predicted correlated sequences of P120-N200-P300 Event Related Potentials during oddball learning. P120 mismatch; N200 arousal/novelty; P300 STM reset. Confirmed in (Banquet and Grossberg 1987)
  • image p541fig15.02 The neurotrophic Spectrally Timed Adaptive Resonance Theory, or nSTART, model of (Franklin, Grossberg 2017) includes hippocampus to enable adaptively timed learning that can bridge a trace conditioning gap, or other temporal gap between CS and US.
    || Hippocampus can sustain a Cognitive-Emotional resonance: that can support "the feeling of what happens" and knowing what event caused that feeling. [CS, US] -> Sensory Cortex (SC) <- motivational attention <-> category learning -> Prefrontal Cortex (PFC). SC conditioned reinforcement learning-> Amygdala (cannot bridge the temporal gap) incentive motivational learning-> PFC. SC adaptively timed learning and BDNF-> Hippocampus (can bridge the temporal gap) BDNF-> PFC. PFC adaptively timed motor learning-> cerebellum.
  • image p163fig04.39 A schematic of the LAMINART model that explains key aspects of laminar visual cortical anatomy and dynamics. LGN -> V1 [6, 4, 2/3] -> V2 [6, 4, 2/3]
    || p163c1h0.6 "... The first article about laminar computing ... proposed how the laminar cortical model could process 2D pictures using bottom-up filtering and horizontal bipole grouping interactions (Grossberg, Mingolla, Ross 1997). In 1999, I was able to extend the model to also include top-down circuits for expectation and attention (Grossberg 1999) (right panel). Such a synthesis of laminar bottom-up, horizontal, and top-down circuits is characteristic of the cerebral cortex (left panel). I called it LAMINART because it began to show how properties of Adaptive Resonance Theory, or ART, notably the ART prediction about how top-down expectations and attention work, are realized by identified cortical cells and circuits. You can immediately see from the schematic laminar circuit diagram ... (right panel) that circuits in V2 seem to repeat circuits in V1, albeit with a larger spatial scale, despite the fact that V1 and V2 carry out different functions. How this anatomical similarity can coexist with functional diversity will be clarified in subsequent sections and chapters. It enables different kinds of biological intelligence to communicate seamlessly while carrying out their different psychological functions. ..."
  • use a bash script, for example, to automatically play through a sequence of selected segments

Viewers may list their own comments in files (one or more files from different people, for example), to include in the file listings of [chapter, section, figure, table, selected Grossberg quotes, my comments]s. These files of lists are my basis for providing much more detailed information. While this is FAR LESS HELPFUL than the text of the book or its index alone, it can complement the book index, and it has the advantages that :
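A minimal sketch of the segment-playing idea, assuming mpv as the player; the file name and segment times are hypothetical, and the command is echoed rather than executed so the loop can be checked without a video file:

```shell
#!/usr/bin/env bash
# Sketch: step through "start end" pairs (seconds) and hand each segment
# to a player. mpv and the times below are assumptions for illustration;
# drop the 'echo' to actually play the segments.
video="lecture.mp4"   # hypothetical file name
while read -r start end; do
  echo mpv --start="$start" --end="$end" "$video"
done <<'EOF'
0 30
120 150
EOF
```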
  • text extractions of simple searches or "themes" are greatly facilitated, so readers can download the files, copy the bash scripts (or use another text-extraction program), and set up their own "themes".

Rather than just watch this video, you can follow it by reading the script and following its links, once I write it...

What is consciousness? I will start with a simple definition concentrated on how our [awareness of [environment, situation, self, others], expectations, feelings about a situation] arise from essentially non-conscious cognitive, emotional, and motor processes, including muscle control. "Awareness", "Expectations", and "Emotions" lead to "Actions". "Actions" include muscle actions, language communications, striving towards a goal, reactions to the current situation, directing [perception, cognition], and other processes. "Learning" in a robust, stable, and flexible manner is an essential part of this, given that the environment forces us to learn and adapt to new situations and to modify our [conscious, sub-conscious] understanding where it is wrong or insufficient. Some other components of consciousness are provided in the remainder of this video, but there are many, many more in the literature. Of interest to philosophers such as David Chalmers are qualia and phenomenal experiences.
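As a concrete illustration of such a grep-based "theme" extraction (the directory layout, file names, and theme word below are made up for this sketch):

```shell
#!/usr/bin/env bash
# Sketch of a grep-based "theme" extraction over plain-text note files.
# All names here are hypothetical; real notes would live in real files.
notesDir="$(mktemp -d)"
printf '%s\n' 'p163fig04.39 LAMINART schematic' \
              'unrelated caption' \
              'LAMINART top-down attention circuit' > "$notesDir/captions.txt"

theme='LAMINART'
# Collect every line mentioning the theme into one theme file.
grep -h "$theme" "$notesDir"/*.txt > "$notesDir/theme_${theme}.txt"
cat "$notesDir/theme_${theme}.txt"
rm -r "$notesDir"
```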
  • Navigation: [menu, link, directory]s
  • Theme webPage generation by bash script
  • Notation for [chapter, section, figure, table, index, note]s
  • incorporate reader questions into theme webPages
Celebrating 20 years of neural networks!

Izhikevich Nov2003 Known types of neurons
Probably the most important section of this webPage is "Computations with multiple RNA strands". Most other sections provide context.
  • extracellular - callerID-SNNs (Spiking Neural Networks), as introduced in another webPage
  • 4-value logic (Colin James' short commentaries): when looking at RNA strands (of double-strand DNA sequences), [A,T,G,C(U)] does look like a 4-value encoding. This brings up the subject of 4-value logic, which is not [complete, optimal] in a normal boolean sense. But since ?date?, logicians have worked away from the limelight on this subject, and other forms of logic. Fuzzy logic is well-known, and has its own "Fuzzy Systems" area. Fuzzy Systems are one of the three main original pillars of Computational Intelligence (CI), along with [evolutionary computation, neural networks]. (?? other logic approaches I have looked at very briefly, then have forgotten)
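To make the "4-value encoding" observation concrete: each of the four letters fits in 2 bits, so a short strand packs into an ordinary integer. The A/T/G/C-to-number mapping below is an arbitrary assumption for illustration, not a biological standard:

```shell
#!/usr/bin/env bash
# Sketch: pack an [A,T,G,C] string into an integer, 2 bits per symbol.
# The mapping A=0, T=1, G=2, C=3 is arbitrary (illustration only).
encode() {
  local s="$1" out=0 i c
  for (( i=0; i<${#s}; i++ )); do
    case "${s:i:1}" in A) c=0 ;; T) c=1 ;; G) c=2 ;; C) c=3 ;; esac
    out=$(( (out << 2) | c ))     # shift history left, append new symbol
  done
  echo "$out"
}
encode "ATGC"   # 0b00011011 = 27
```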
  • bitShifts (like hexadecimal microprocessor machine code) for time series following. This is considered in the callerID-SNNs project.

13Dec2023 (https://en.wikipedia.org/wiki/Transfer_RNA)
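A minimal sketch of the bitShift idea for time-series following, under the assumption that incoming samples are already quantized to 2-bit symbols (0..3): shift a small history register left and OR in the newest symbol, exactly as hexadecimal machine code would:

```shell
#!/usr/bin/env bash
# Sketch: an 8-bit "history register" holding the last four 2-bit symbols.
# The newest symbol enters on the right; the oldest falls off the left.
# The sample values are hypothetical quantized time-series readings.
register=0
for sym in 2 0 3 1; do
  register=$(( ((register << 2) | sym) & 0xFF ))
done
printf '0x%02X\n' "$register"   # prints 0x8D
```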
  • 2006 MindCode WCCI2006 Vancouver "Howell 060215 Genetic specification of neural networks" :
  • 2015 - 2020 MindCode
  • voice musings of [scattered, random] thoughts - This really is just me, "arm waving and yapping", trying to identify [missing, lost] items.
Does biology have "[relative, absolute] addressing" (relative - local proximity on the same DNA or RNA strand; absolute - the address may even be on a different chromosome)? I don't remember any references mentioning that possibility. In a previous sub-section of this webPage, I have provided a few (incomplete) points on addressing. I have a lot of [read, program]ing to do here.
While I have long been a fan of the work of Stephen Grossberg and his colleagues, I was very surprised by his 2021 book "Conscious Mind, Resonant Brain". (This link shows a menu that lists details of many themes from his book, plus commentary on well-known concepts of consciousness.) His book went far beyond my awareness of his work (obviously I was horribly out of date). [Right, wrong, true, false], it also goes far beyond any other [concept, work] that I am aware of in explaining how [neurons, the brain] work. The results are not simple, nor are they amenable to the normal "yap, wave your arms" that we all like so much. Maybe that's why it's not so [popular, well known]. To me, concepts of consciousness that do not [emerge from, work with] non-conscious processes, and which do not elucidate mechanisms, are not satisfying, even though they can still be pragmatically useful.
In any case, I will work on Grossberg's concepts in the future, and apart from providing a simple "figure-captions-based" thematic overview of his work, my only other comment is on the subject of consciousness (below).
There are only two concepts of consciousness with which I am comfortable: biologically based concepts from Grossberg and colleagues, and the late John Taylor's "advanced control theory" concepts for consciousness (linked webPage not built yet 07Nov2023). But the latter is not amenable at the present time to the directions of MindCode, with its special emphasis on genetics. I did do a very [quick, incomplete] commentary on consciousness concepts, and a simple overview of [definitions, models] of consciousness.
  • Glenn Borchardt's concept of infinity (one example application), with a few voice comments on how to avoid one trap of self-limiting thinking.
  • Of possible interest to geologists: Puetz, Borchardt 150925 Quasi-periodic fractal patterns in geomagnetic reversals, geological activity, and astronomical events.pdf
  • Howell 2006 "Genetic specification of recurrent neural networks" (draft version of my WCCI2006 conference paper)
  • MindCode 2023 description
  • MindCode 2023 program coding (QNial programming language) - a simple one-line listing of each operator for each file
  • callerID-SNNs Introduction (this webPage)
  • callerID-SNNs program coding (QNial programming language)
  • bash library: file operations used extensively, sometimes hybridized with the QNial programming language

All of these are very incomplete, but the lists are a handy back-reference so that I don't forget ideas. They are LibreOffice documents in .odt file format, in their original form in the directory Mind2020; while I intend to convert them to html and may update them, I have not done so as of 20Nov2023.

  • Introduction - Conceptual pseudo-basis for MindCode 2020 (old description of MindCode)
  • MindCode components
  • Historical [DNA, Protein, Evolutionary Computing, ANN] hybrid basis for epiDNA-NNs
  • MindCode - arbitrary selections from Multiple Conflicting Hypotheses
  • Assumed rules of the game
  • Questions, not answers
  • Static epiDNA-NN
  • Dynamic epiDNA-NN coding
  • [Neurological, biological] basis for epiDNA coding
  • Ontogeny
  • Specialized epiDNA-NNs for MindCode
  • Hybrids of [algorithms, conventional computing, ANNs, MindCode]
  • genetic mechanisms for [protein, program] code
  • see example context: Ben Davidson, Suspicious0bservers.org: Catastrophe Evidence
  • Gravity and Magnetism... One and the same?
  • Is gravity weak negative electric charge?
  • Derivation and Experimental Proof of the Universal Force, by Dr. Bill Lucas
    /home/bill/web/Neural nets/Transformer NNs/LLMs for Conceptual Predators/gravity is an Electro-Magnetic (EM) effect.HtmWeb.html:248: /home/bill/web/Neural nets/Transformer NNs/LLMs for Conceptual Predators/gravity is an Electro-Magnetic (EM) effect.HtmWeb.html:263:see /home/bill/web/Neural nets/Transformer NNs/LLMs for Conceptual Predators/gravity is an Electro-Magnetic (EM) effect.HtmWeb.html:32:
  • /home/bill/web/Neural nets/Transformer NNs/LLMs for Conceptual Predators/gravity is an Electro-Magnetic (EM) effect.HtmWeb.html:34:
  • /home/bill/web/Neural nets/Transformer NNs/LLMs for Conceptual Predators/gravity is an Electro-Magnetic (EM) effect.HtmWeb.html:36:
  • /home/bill/web/Neural nets/Transformer NNs/LLMs for Conceptual Predators/gravity is an Electro-Magnetic (EM) effect.HtmWeb.html:38:
  • /home/bill/web/Neural nets/Transformer NNs/LLMs for Conceptual Predators/LLMs for Conceptual Predators.HtmWeb.html:101:
  • /home/bill/web/Neural nets/Transformer NNs/LLMs for Conceptual Predators/LLMs for Conceptual Predators.HtmWeb.html:103: see example context: Ben Davidson, Suspicious0bservers.org: Catastrophe Evidence /home/bill/web/Neural nets/Transformer NNs/LLMs for Conceptual Predators/LLMs for Conceptual Predators.HtmWeb.html:104:
  • /home/bill/web/Neural nets/Transformer NNs/LLMs for Conceptual Predators/LLMs for Conceptual Predators.HtmWeb.html:111:
  • /home/bill/web/Neural nets/Transformer NNs/LLMs for Conceptual Predators/LLMs for Conceptual Predators.HtmWeb.html:114:
  • LLMs for Conceptual Predators (Neural nets/Transformer NNs)
  • Solar micronova and catastrophism
  • page blogs
  • page crazy themes and stories
  • Andrew Halls electric geology [thunderblog, video]s
  • Introduction
  • Andrew Hall's work
  • Mark Boslough and team at Sandia National Laboratory - Exploding asteroids, 1,950 views, Dec 27, 2010
  • CraterHunter (?Dennis Cox?) - A Catastrophe of Comets, The geophysical world according to me, and a few folks I happen to agree with, ~23Dec2010?
  • EU2015 speakers Bruce Leybourne and Ben Davidson - explain theories of our electromagnetic environment and the hot spots of current welling inside the Earth. 2015 (Ben Davidson video 24Feb2016)
    https://www.youtube.com/watch?v=mPcF40vBqzs
  • Surface Conductive Faults, 11Mar2016
  • Arc Blast, Part 1 Thunderblog 11May2016
    https://www.thunderbolts.info/wp/2016/05/11/arc-blast-part-1/
  • Arc Blast, Part 2 Thunderblog 21May2016
    https://www.thunderbolts.info/wp/2016/05/21/arc-blast-part-two/
  • Arc Blast, Part 3 Thunderblog 28May2016
    https://www.thunderbolts.info/wp/2016/05/28/arc-blast-part-three/
  • The Monocline, Thunderblog 06Oct2016
    https://www.thunderbolts.info/wp/2016/10/06/the-monocline/
  • The Maars of Pinacate, Part 1 Thunderblog 20Jan2017
    https://www.thunderbolts.info/wp/2017/01/20/the-maars-of-pinacate-part-one/
  • The Maars of Pinacate, Part 2 Thunderblog 16Feb2017
    https://www.thunderbolts.info/wp/2017/02/16/the-maars-of-pinacate-part-two/
  • Nature's Electrode, Thunderblog 22Apr2017
    https://www.thunderbolts.info/wp/2017/04/22/natures-electrode/
  • The Summer Thermopile, Thunderblog 21May2017
    https://www.thunderbolts.info/wp/2017/05/21/the-summer-thermopile/
  • Tornado - The Electric Model, Thunderblog 13Jun2017
    https://www.thunderbolts.info/wp/2017/06/13/tornado-the-electric-model/
  • Lightning-Scarred Earth, Part 1 Thunderblog 10Dec2017
    https://www.thunderbolts.info/wp/2017/12/10/lightning-scarred-earth-part-1/
  • Lightning-Scarred Earth, Part 2 Thunderblog 17Dec2017
    https://www.thunderbolts.info/wp/2017/12/17/lightning-scarred-earth-part-2/
  • Sputtering Canyons, Part 1 Arches National Monument, Thunderblog 12Feb2018
    https://www.thunderbolts.info/wp/2018/02/12/sputtering-canyons-part-1/
  • Sputtering Canyons, Part 2 Colorado Plateau, Thunderblog 12Feb2018
    https://www.thunderbolts.info/wp/2018/02/12/sputtering-canyons-part-2/
  • Sputtering Canyons, Part 3 Secondary effects from electrical deposition, Thunderblog 31Mar2018
    https://www.thunderbolts.info/wp/2018/03/31/sputtering-canyons-part-3/
  • Eye of the Storm, Part 1 Thunderblog 31Mar2019
    https://www.thunderbolts.info/wp/2019/03/31/the-eye-of-the-storm-part-1/
  • Eye of the Storm, Part 2 The Electric Winds of Jupiter, Thunderblog 05May2019
    https://www.thunderbolts.info/wp/2019/05/05/the-eye-of-the-storm-part-2/
  • Eye of the Storm, Part 3 Some storms suck and others blow, Thunderblog 24May2019
    https://www.thunderbolts.info/wp/2019/05/24/eye-of-the-storm-part-3/
  • Eye of the Storm, Part 4 Wind Map, Thunderblog 20Jun2019
    https://www.thunderbolts.info/wp/2019/06/20/eye-of-the-storm-part-4-2/
  • Eye of the Storm, Part 5 Large Scale Wind Structures, Thunderblog 19Mar2020
    https://www.thunderbolts.info/wp/2020/03/19/47212/
  • Eye of the Storm, Part 6 Jupiter's Great Red Spot, Thunderblog 04Apr2020
    https://www.thunderbolts.info/wp/2020/04/04/the-great-red-spot/
  • Eye of the Storm, Part 7, 1 of 2 Electric Earth & the Cosmic Dragon, Thunderblog and video 24Sep2020
    https://www.thunderbolts.info/wp/2020/09/24/48437/
    https://www.youtube.com/watch?v=DgNTKrjpiiI&t=0s
  • Eye of the Storm, Part 7, 2 of 2 Electric Earth & the Cosmic Dragon, Thunderblog and video 24Sep2020
    https://www.thunderbolts.info/wp/2020/09/24/48437/
    https://youtu.be/_3ITTdl_QRY
  • Eye of the Storm, Part 8 Proving the Passage of the Dragon, Thunderblog and video 31Oct2020
    https://www.thunderbolts.info/wp/2020/10/31/eye-of-the-storm-part-8/
    https://youtu.be/2WS0vsVB4Tw
  • Eye of the Storm, Part 9, 1 of 2 San Andreas Fault - A Dragon in Action? Thunderblog and video 18Dec2020
    https://www.thunderbolts.info/wp/2020/12/25/eye-of-the-storm-part-9/
    https://youtu.be/LwbsA-QDBFY
  • Eye of the Storm, Part 9, 2 of 2 Ground Currents and Subsurface Birkeland Currents - How the Earth Thinks? Thunderblog and video 25Dec2020
    https://www.thunderbolts.info/wp/2020/12/25/eye-of-the-storm-part-9/
    https://youtu.be/-KoJ9wpvD_g
  • Eye of the Storm, Part 10 Reverse Engineering the Earth, Thunderblog and video 28Jan2021
    https://www.thunderbolts.info/wp/2021/01/28/eye-of-the-storm-part-10-2/
    https://www.youtube.com/watch?v=hW4kCP-ascw
  • Easter Egg Hunt, Part 1, video 22May2021
    https://www.patreon.com/posts/andrew-hall-egg-51555997?utm_medium=post_notification_email&utm_source=post_link&utm_campaign=patron_engagement
  • Easter Egg Hunt, Part 2 The Cross from the Laramie Mountains, video 29May2021
    https://thunderbolts.us7.list-manage.com/track/click?u=1b8e5fc5ffab70f95805dea12&id=f6b8bab8a7&e=54f3bc9169
  • The Shocking Truth, Thunderblog and video 20Aug2021
    https://www.thunderbolts.info/wp/2021/08/20/the-shocking-truth/
    https://www.youtube.com/watch?v=Pt6NscQ2qS8
  • Cracks in Theory, 20Nov2021 (Thunderblog source article)
    https://www.youtube.com/watch?v=ISfuOZgaN3c
  • Electricity in Ancient Egypt, video 26Aug2023
    https://www.youtube.com/watch?v=i4jWPfNJ0rM&t=1s
  • Shine On You Crazy Diamond
  • Other electric geology concepts
  • Immanuel Velikovsky - a primary inspiration for a great deal of breakthrough thinking across many subjects! He was not liked by establishment science, but over time most of his [idea, prediction]s have proven [right, insightful], and mainstream scientists wrong! That thing about Venus sprouting from [Saturn, Mars, something] (I forget which) in historical times is a bit much for me, but given his track record I am afraid to say that he was wrong.
  • Paul Anderson - a US Army research chemist, Paul was also a core team member for the SAFIRE experiment on an electrical model of the sun, which broke conventional physics theories and is leading to [development, commercialisation] efforts for [energy, de-radioisotope processes, ???]. (see also Howell 120903 Paul Anderson's Electric scarring of the Earth.pdf)
  • Rens Van Der Sluijs "Theories on the Rocks - In a Flash (Part Two)" 27Aug2021
  • Expanding Earth (EE) hypothesis [?Hildebrand?, Neal Adams (Batman artist), James Maxlow, ??? Hurrell?] - entirely subsumes plate tectonics and takes it to an entirely new level, both for geology and evolution.
  • Petroglyphs [David Talbot, Wal Thornhill, Anthony Peratt] - Mythology backed by space plasma science helps explain what some [mythology, petroglyphic images] may represent. This is far superior to any other explanations that I have seen (including ?Joseph Campbell's archetypes).
  • Jupp instant fossilisation notes
  • 240115 emto Steve: Sierpinski [triangles, tertrahedra]: Johannes Kepler, Edo Kaal, Pyramids [Bosnia, Egypt, Mesopotamia, Central America, China (dirt, not stone)]
    Bannink, Buhrman 19Nov2018 "Quantum Pascal's Triangle and Sierpinski's carpet"
    Johanna L. Miller 19Nov2018 "Quantum mechanics in fractal geometry (<Sierpinski triangle>)", Physics Today
    Xie, HH., Zeng, GM. 15Jul2021 "Quantum walks on Sierpinski gasket and Sierpinski tetrahedron", Quantum Inf Process 20, 240 (2021). https://doi.org/10.1007/s11128-021-03171-4
    Shajesh, Parashar, Cavero-Peláez, Kocik, Brevik 13Nov2017 "Casimir energy of Sierpinski triangles", Phys. Rev. D 96, 105010
    Krecmar, Zelenayova, Caha, Rapcan, Nishino, Gendiar 24Mar2020 "Quantum Potts Models on the Sierpinski Pyramid"
    etsy.com second generation Sierpinski tetrahedron VI, beaded art
    Johannes Kepler
    Great Pyramid of Giza can focus electromagnetic energy through its hidden chambers
  • 240124 emto [Kaal, Childs, Samuel] for permissions: Kaal Structured Atom Model vs Quantum Mechanics
  • 240527 emto Niki Hunchuk: Scottish balls, philosophers stones, Edo Kaals atomic nucleus model
    wikipedia: Platonic solids
  • Kaal SAM vs QM: deactivation
    eg [Am, Cm, etc]: Bromley papers
    see SAFIRE & Aureon.ca for actual experimental results
    see above: "Nuclear [material, process, deactivate]s"
    Institute for Energy and Environmental Research (viewed 05Jan2024) "Fissile Material Basics" https://ieer.org/resource/factsheets/fissile-material-basics/
  • WebSite: https://StructuredAtom.org/
  • "Atom Viewer" 3D rotatable online model to see element structures according to SAM
  • transmutation of fission wastes - Howell's [question, thought, my error]s regarding a series of papers on [Th, transmutation] by Blair Bromley and colleagues, focussed on "Pressure Tube - Heavy Water Reactors" (PT-HWR)
  • From the SAM webPage :
    "Atom Viewer" 3D rotatable online model to see nucleus structures for each element according to SAM
    J.E. Kaal, A. Otte, J.A. Sorensen, J.G. Emming 2021 "The nature of the atom" www.Curtis-Press.com, 268pp, ISBN 978-1-8381280-2-9, https://StructuredAtom.org/
    available on YouTube

    The SAFIRE reactor (from the video "The Walkthrough")
  • Croatian
  • nkodama, lenr-forum.com: Electron Deep Orbit (EDO) fine structure by quarks
    [time, length] scales are related in many subject areas. An extremely simple example is that of the 1872-2020 SP500 index: a [time, mind]-bending perspective:
  • This table has been MOVED.
    Big printable:
    fun ideas I've seen over the last ~15 years from the Electric Universe and John Chappell Natural Philosophy Society groups. To me, in the self-imposed context of "Multiple Conflicting Hypotheses", the point isn't [right, wrong, true, false], but rather the pleasure of seeing the ideas of people who have done some [observation, imagination, often huge work] in a very [imaginative, creative] way. I tend to favor a hard math basis, but I've learned that observation may be a powerful key independent of math.
  • I was stunned to see comments driving this question in Scherer, Veizer, Shaviv et al. 2006 "Interstellar-Terrestrial relations".
    I also joked about this possibly being a driver of dark [energy, matter] pseudo-theory, and the scramble to "solve" the main (politically-correct) problem with General Relativity (GR) (but my cheap mpeg video anim doesn't seem to work now).
  • see above: "Nuclear [material, process, deactivate]s"
  • Institute for Energy and Environmental Research (viewed 05Jan2024) "Fissile Material Basics" https://ieer.org/resource/factsheets/fissile-material-basics/