/media/bill/HOWELL_BASE/Projects/ACM Facility Safety presentation 01Sep2016/0_acm notes.txt
**********
List of known errors in video :
1. Frank Lewis of UofTX Austin - didn't address robust control, just adaptive & optimal
2. Games - Kasparov's doubts were in film 6 years after the match, not 10 to 20, although I don't know if his later thinking may have changed
3. Deep Learning History - How could I have dropped out a key basis - Paul Werbos's 1974 "ordered derivatives"? (as more [general, complete, correct, deeper & broader] than "back propagation")
4. Games - AlphaGo came 20 years after Deep Blue, not 30.
5. Corporate reports, Confabulation's "GhostWriter" - missing the issue that the machine questions and guides the human, as much or more than the other way around
Missing :
1. People - Paul Werbos, Danil Prokhorov, Vladimir Vapnik, ? de Jager, Derong Liu, Jinde Cao,
2. Themes
a. Safety - tremendous safety advantages of CI!!!
# recordmydesktop - must be run from Lenovo desktop as Toshiba laptop doesn't
# have a digital audio capability!!
***************
***********
03Nov2016
[Shawn the sheep, Bell curve cover] - resize problem - work today (??!?!??)
Background & Title page disappear - no frigging idea why!!!!
***********
01Nov2016 Remaining problems
background & Title slide are killed at end of intro - why? (1Nov2016 can't see problem yet)
image_next - doesn't properly cut refs_Lists (ignore for now - still works sort of)
German power - Cat Stevens music cut short (1Nov2016 can't see problem yet)
Sociology - no heading - add Shawn the sheep!! up to Question the Questions
Driverless cars - kills refs in previous scene (I added "host 'sleep 2s'")
*****
15Oct2016
Isabelle Guyon, Amir Saffari, Gideon Dror, Gavin Cawley 2007 "Agnostic Learning vs.
Prior Knowledge Challenge" Proceedings of International Joint Conference on Neural Networks, Orlando, Florida, USA, August 12-17, 2007
Guyon etal 2007 Agnostic Learning vs Prior Knowledge Challenge
**********
Computational Intelligence (CI) Overview
CI intro
+-----+
Image - Multi-layer perceptron : Michael Nielsen, http://neuralnetworksanddeeplearning.com/images/tikz11.png ; Image - RNN structure : AP O’Riordan, http://www.cs.ucc.ie/~adrian/cs5201/NeuralComputingI.htm
+-----+
Images - Genetic Algorithm : http://www.ewh.ieee.org/soc/es/May2001/14/Begin.htm
Ying-Hong Liao and Chuen-Tsai Sun, National Chiao Tung University, Member, IEEE "An Educational Genetic Algorithms Learning Tool" ©2001 IEEE
Figure 1. Evolution flow of genetic algorithm (click graphic for animation demo)
Figure 2. Crossover (click for animation demo)
Liao & Sun 2001 Evolution flow of genetic algorithm.GIF
Liao & Sun 2001 Crossover.GIF
+-----+
Image - fuzzification, solution - de-fuzzification : https://www.researchgate.net/figure/273023655_fig11_Fig-16-Example-of-fuzzification-and-defuzzification-in-the-proposed-fuzzy-control
Jianhui Wong, Yun Seng Lim, Ezra Morris Oct2014 "Novel Fuzzy Controlled Energy Storage for Low-Voltage Distribution Networks with Photovoltaic Systems under Highly Cloudy Conditions" Journal of Energy Engineering
Fig. 16. Example of fuzzification and defuzzification in the proposed fuzzy control method
Wong, Lim, Morris Oct2014 Fuzzification, solution, defuzzification.png
**********
Big Data - Patterns versus rare anomalies
&&&&&
Example - Ford misfire detection (CAVD)
&&&&&
http://popelka.ms.mff.cuni.cz/cerno/structure_and_recognition/files/kukacka_clustering_referat.pdf
"Clustering: A neural network approach" Marek Kukačka, October 14, 2010
Good description of types of clustering
https://www.microsoft.com/en-us/research/wp-content/uploads/1996/01/neural_networks_pattern_recognition.pdf
"Neural networks: A pattern recognition perspective" Christopher M.
Bishop, Jan1996
http://yann.lecun.com/exdb/publis/pdf/lecun-bengio-95b.pdf
Yann LeCun, Yoshua Bengio 1995? "Pattern recognition and neural networks" Not so good
&&&&&
https://en.wikipedia.org/wiki/Anomaly_detection
Supervised, Semi-supervised, Unsupervised
"... the key difference to many other statistical classification problems is the inherent unbalanced nature of outlier detection ..."
Popular techniques
Several anomaly detection techniques have been proposed in the literature. Some of the popular techniques are:
- Density-based techniques (k-nearest neighbor,[6][7][8] local outlier factor,[9] and many more variations of this concept[10])
- Subspace-[11] and correlation-based[12] outlier detection for high-dimensional data[13]
- One-class support vector machines[14]
- Replicator neural networks[15]
- Cluster analysis-based outlier detection[16][17]
- Deviations from association rules and frequent itemsets
- Fuzzy logic based outlier detection
- Ensemble techniques, using feature bagging,[18][19] score normalization[20][21] and different sources of diversity[22][23]
The performance of different methods depends a lot on the data set and parameters, and methods have little systematic advantage over one another when compared across many data sets and parameters.[24][25]
&&&&&
HOWELL - Key difference is the rare-event needle-in-the-haystack: evolving, targeted, camouflaged
HOWELL - Evolutionary computation!!! Be your own worst enemy (grey hat crackers - not hackers, who are the good guys), so treat safety anomalies as "enemy actions"; David Fogel - True learning REQUIRES evolution!!!
&&&&&
http://neuro.bstu.by/ai/To-dom/My_research/Papers-0/For-research/D-mining/Anomaly-D/KDD-cup-99/NN/dawak02.pdf
"Outlier Detection Using Replicator Neural Networks" Simon Hawkins, Hongxing He, Graham Williams and Rohan Baxter
In this paper we employ multi-layer perceptron neural networks with three hidden layers, and the same number of output neurons and input neurons, to model the data.
These neural networks are known as replicator neural networks (RNNs). In the RNN model the input variables are also the output variables so that the RNN forms an implicit, compressed model of the data during training. A measure of outlyingness of individuals is then developed as the reconstruction error of individual data points. The RNN approach has linear analogues in Principal Components Analysis [10].
HOWELL - i.e. Replicator NNs (NOT Recurrent NNs!!) = autoencoders
&&&&&
https://www.sans.org/reading-room/whitepapers/detection/application-neural-networks-intrusion-detection-336
"Application of Neural Networks to Intrusion Detection" Jean-Philippe PLANQUART
Intrusion Detection Systems (IDS) are now mainly employed to secure company networks. Ideally, an IDS has the capacity to detect in real-time all (attempted) intrusions, and to execute work to stop the attack (for example, modifying firewall rules). We present in this paper a "state of the art" of Intrusion Detection Systems, developing commercial and research tools, and a new way to improve false-alarm detection using a Neural Network approach. This approach is still in development, nevertheless it seems to be v..
&&&&&
http://csrc.nist.gov/nissc/1998/proceedings/paperF13.pdf
"Artificial Neural Networks for Misuse Detection" James Cannady >=1997
Abstract - Misuse detection is the process of attempting to identify instances of network attacks by comparing current activity against the expected actions of an intruder. Most current approaches to misuse detection involve the use of rule-based expert systems to identify indications of known attacks. However, these techniques are less successful in identifying attacks which vary from expected patterns. Artificial neural networks provide the potential to identify and classify network activity based on limited, incomplete, and nonlinear data sources.
We present an approach to the process of misuse detection that utilizes the analytical strengths of neural networks, and we provide the results from our preliminary analysis of this approach.
&&&&&
http://www.ncbi.nlm.nih.gov/pubmed/16761810
IEEE Trans Syst Man Cybern B Cybern. 2006 Jun;36(3):559-70.
"Evolutionary neural networks for anomaly detection based on the behavior of a program" Han SJ, Cho SB.
Abstract - The process of learning the behavior of a given program by using machine-learning techniques (based on system-call audit data) is effective to detect intrusions. Rule learning, neural networks, statistics, and hidden Markov models (HMMs) are some of the kinds of representative methods for intrusion detection. Among them, neural networks are known for good performance in learning system-call sequences. In order to apply this knowledge to real-world problems successfully, it is important to determine the structures and weights of these call sequences. However, finding the appropriate structures requires very long time periods because there are no suitable analytical solutions. In this paper, a novel intrusion-detection technique based on evolutionary neural networks (ENNs) is proposed. One advantage of using ENNs is that it takes less time to obtain superior neural networks than when using conventional approaches. This is because they discover the structures and weights of the neural networks simultaneously. Experimental results with the 1999 Defense Advanced Research Projects Agency (DARPA) Intrusion Detection Evaluation (IDEVAL) data confirm that ENNs are promising tools for intrusion detection.
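HOWELL - sketch (mine, not from the papers above) of the replicator-NN idea in its linear form, which Hawkins etal note is just PCA: keep the top principal components, and score each point by its reconstruction error (distance from the learned subspace). The synthetic data, subspace dimension, and robust threshold are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: the bulk lies near a 2-D plane in 5-D space,
# plus 5 planted anomalies well off that plane (indices 500..504).
n, d, k = 500, 5, 2
basis = rng.normal(size=(k, d))
inliers = rng.normal(size=(n, k)) @ basis + 0.05 * rng.normal(size=(n, d))
outliers = 3.0 * rng.normal(size=(5, d))
X = np.vstack([inliers, outliers])

# "Train": centre the data, keep the top-k principal directions.
mu = X.mean(axis=0)
Xc = X - mu
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
P = Vt[:k]                                  # (k, d) basis of learned subspace

# Score: reconstruction error = distance from the learned subspace.
errors = np.linalg.norm(Xc - (Xc @ P.T) @ P, axis=1)

# Flag points far above the typical error level (robust median + MAD rule).
med = np.median(errors)
mad = np.median(np.abs(errors - med))
flagged = np.where(errors > med + 10 * mad)[0]
print(flagged)
```

This is exactly the rare-event setting of the notes above: the model is fitted to the overwhelming bulk, and the needles in the haystack reveal themselves by refusing to compress.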
&&&&&
https://www.elen.ucl.ac.be/Proceedings/esann/esannpdf/es2015-56.pdf
"Long Short Term Memory Networks for Anomaly Detection in Time Series" Pankaj Malhotra, Lovekesh Vig, Gautam Shroff, Puneet Agarwal
Abstract - Long Short Term Memory (LSTM) networks have been demonstrated to be particularly useful for learning sequences containing longer term patterns of unknown length, due to their ability to maintain long term memory. Stacking recurrent hidden layers in such networks also enables the learning of higher level temporal features, for faster learning with sparser representations. In this paper, we use stacked LSTM networks for anomaly/fault detection in time series. A network is trained on non-anomalous data and used as a predictor over a number of time steps. The resulting prediction errors are modeled as a multivariate Gaussian distribution, which is used to assess the likelihood of anomalous behavior. The efficacy of this approach is demonstrated on four datasets: ECG, space shuttle, power demand, and multi-sensor engine dataset.
Space Shuttle Marotta valve time series
This dataset has both short time-period patterns and long time-period patterns lasting 100s of time-steps. There are three anomalous regions in the dataset, marked a1, a2, and a3 in Fig. 2(b). Region a3 is a more easily discernible anomaly, whereas regions a1 and a2 correspond to more subtle anomalies that are not easily discernible at this resolution.
**********
Big Data - German power grid
Image - Reinforcement learning from animal studies, p18 : Paul Werbos 2004 "ADP: Goals, opportunities and principles", in J. Si, A.G. Barto, W.B. Powell, D. Wunsch (eds) "Handbook of learning and approximate dynamic programming". New Jersey, USA: IEEE Press and Wiley-Interscience, 2004.
Image - Hamilton-Jacobi-Bellman equation : http://images.slideplayer.com/14/4316064/slides/slide_4.jpg
Image - ADP actor & critic roles : K.G.
Vamvoudakis, http://www.ece.ucsb.edu/~kyriakos/res.html
Image - ADP model types : Marcin Szuster, Piotr Gierlak 04Feb2016 "Approximate Dynamic Programming in Tracking Control of a Robotic Manipulator" International Journal of Advanced Robotic Systems, ISSN 1729-8806, Published: February 4, 2016, DOI: 10.5772/62129, http://www.intechopen.com/books/international_journal_of_advanced_robotic_systems/approximate-dynamic-programming-in-tracking-control-of-a-robotic-manipulator
Image - Fluctuating wind & solar power, Germany : 23Jan2013, http://instituteforenergyresearch.org/analysis/germanys-green-energy-destabilizing-electric-grids/
Image - European wind&solar mislocation : Siemens 1Oct2014, http://www.siemens.com/innovation/en/home/pictures-of-the-future/energy-and-efficiency/power-transmission-facts-and-forecasts.html
Image - Siemens NN research : Hans-Georg Zimmermann of Siemens, presentation on 27Aug2016 at WCCI2016 Vancouver
+-----+
Extras
Image - Approximate dynamic programming, Vamvoudakis
Image - ADP actor & critic roles, Vamvoudakis
http://www.ece.ucsb.edu/~kyriakos/res.html
Kyriakos G. Vamvoudakis - Research Projects
Our research draws from the areas of control theory, game theory and computational intelligence. Our recent research interests lie in the design of robust and secure multi-agent networked systems, such as smart grids, and unmanned aerial and ground vehicles. Below is a brief description of some recent research projects.
Image - Fluctuating wind & solar power, Germany
http://instituteforenergyresearch.org/analysis/germanys-green-energy-destabilizing-electric-grids/ 23Jan2013
Germany is phasing out its nuclear plants in favor of wind and solar energy backed-up by coal power. The government’s transition to these intermittent green energy technologies is causing havoc with its electric grid and that of its neighbors–countries that are now building switches to turn off their connection with Germany at their borders.
The intermittent power is causing destabilization of the electric grids, causing potential blackouts, weakening voltage and causing damage to industrial equipment. The instability of the electric grid is just one of many issues that the German government is facing regarding its move to intermittent renewable technologies. As we have previously reported, residential electricity prices in Germany are some of the highest in Europe and are increasing dramatically (currently Germans pay 34 cents a kilowatt hour compared to an average of 12 cents in the United States). This year German electricity rates are about to increase by over 10 percent due mainly to a surcharge for using more renewable energy, and a further 30 to 50 percent price increase is expected in the next ten years. These changes in the electricity generation market have caused about 800,000 German households to no longer be able to afford their energy bills.
The Destabilization Problem
More than one third of Germany’s wind turbines are located in the eastern part of the nation, where this large concentration of generating capacity regularly overloads the region’s electricity grid, threatening blackouts. The situation tends to be particularly critical on public holidays, when residents and companies consume significantly less electricity than usual, with the wind blowing regardless of the demand and supplying electricity that isn’t needed. In some extreme cases, the region produces three to four times the total amount of electricity actually being consumed, placing a strain on the eastern German electric grid. System engineers have to intervene every other day to maintain network stability.
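HOWELL - toy sketch of the ADP actor/critic split shown in the images above (the two-state "balanced/stressed grid" MDP, rewards, and learning rates are all invented by me - not from Werbos or Vamvoudakis). The critic learns state values by one-step temporal-difference updates; the actor nudges its action preferences using the critic's TD error:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy MDP: state 0 = grid balanced, state 1 = grid stressed.
# Actions: 0 = do nothing, 1 = curtail/redispatch (costly but stabilizing).
def step(s, a):
    if s == 0:
        s2 = 1 if rng.random() < 0.3 else 0   # stress arrives at random
        return s2, 1.0 - 0.2 * a              # acting while balanced wastes money
    s2 = 0 if (a == 1 or rng.random() < 0.1) else 1
    return s2, -1.0 - 0.2 * a                 # stress itself is expensive

V = np.zeros(2)                # critic: state-value estimates
prefs = np.zeros((2, 2))       # actor: action preferences per state
alpha_c, alpha_a, gamma = 0.1, 0.05, 0.95

def policy(s):                 # softmax over the actor's preferences
    p = np.exp(prefs[s] - prefs[s].max())
    p /= p.sum()
    return rng.choice(2, p=p)

s = 0
for _ in range(30000):
    a = policy(s)
    s2, r = step(s, a)
    td = r + gamma * V[s2] - V[s]   # TD error: the critic's "surprise"
    V[s] += alpha_c * td            # critic update
    prefs[s, a] += alpha_a * td     # actor update: reinforce if better than expected
    s = s2

print(np.argmax(prefs[1]))   # preferred action in the stressed state
```

Run long enough, the actor should end up preferring the costly stabilizing action in the stressed state only - the same division of labour (critic evaluates, actor decides) that ADP scales up, via neural approximators of the HJB value function, to problems like grid dispatch.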
Image - European wind&solar mislocation, 1 October 2014
http://www.siemens.com/innovation/en/home/pictures-of-the-future/energy-and-efficiency/power-transmission-facts-and-forecasts.html
A Siemens study reveals that renewable sources of energy could be made much more efficient in Europe if wind turbines and solar panels were set up at optimum locations. This could produce savings of up to €45 billion. However, such a sustainable power supply would not only require stable grids, but smart transmission systems as well.
Stabilizing Power Grids
Although the total length of Europe’s power network (11.1 million kilometers) is comparable to that of China or the United States, the above-mentioned Siemens study regards it as one of the most reliable and efficient grids in the world. The data show that the grid in Europe suffers much lower transmission losses than the power networks in other regions. Whereas 6.5 percent of transmitted electricity is lost in Europe, the comparable figure for the U.S. is 6.3 percent and for China is 6.9 percent. And at 12.4 percent, power losses are almost twice as high in Russia as in Europe. However, a comparison of the individual European countries shows that power transmission can still be optimized in Europe as a whole. For example, the Siemens study reveals that Germany has transmission and distribution losses of only four percent. Despite these strengths, the VDE study cites experts from the German Ministry for Economic Affairs and Technology who state that the German transmission network reached its capacity limits on “a relevant number of occasions”. In fact, German power companies currently have to intervene in the country’s network more than 1,000 times each year to keep the grid stable.
Image - Wind power control challenge in Germany
https://www.cleanenergywire.org/sites/default/files/styles/lightbox_image/public/images/factsheet/16022016-redispatch-grafik-neu.png?itok=gZvBOaV3
Image - Approximate Dynamic Programming, p21 : Paul Werbos 2004 "ADP: Goals, opportunities and principles", in J. Si, A.G. Barto, W.B. Powell, D. Wunsch (eds) "Handbook of learning and approximate dynamic programming". New Jersey, USA: IEEE Press and Wiley-Interscience, 2004.
********************
Big Data : Design --->> This was NOT included in the presentation!!!
https://cloud.google.com/products/machine-learning/?utm_source=google&utm_medium=cpc&utm_campaign=2016-q1-cloud-na-ML-skws-freetrial&gclid=CjwKEAjwxeq9BRDDh4_MheOnvAESJABZ4VTqxW8-8SRcVdMWdq3_lVDcRyuRfOm0CFVOD9tuNfucfRoCksTw_wcB
Teach Your Apps New Tricks
Google Cloud Machine Learning provides modern machine learning services, with pre-trained models and a platform to generate your own tailored models. Our neural net-based ML platform has better training performance and increased accuracy compared to other large scale deep learning systems. Our services are fast, scalable and easy to use. Major Google applications use Cloud Machine Learning, including Photos (image search), the Google app (voice search), Translate, and Inbox (Smart Reply). Our platform is now available as a cloud service to bring unmatched scale and speed to your business applications.
https://cloud.google.com/solutions/big-data/
Big Data Analytics at Google Scale
Today's applications are generating an unprecedented amount of data from diverse sources within your enterprise, extending out into the physical world where any device is capable of capturing important signals for analysis. The volume of data being generated is increasing at dizzying rates. Highly unstructured, raw data can tell a story of your operations environment and your customers in a way that we can now tap efficiently at scale.
Analytics and machine intelligence at web-scale have been in Google’s founding DNA since the very early days. Google Cloud Platform surfaces the same analytical engines invented and used by Google for nearly two decades to help unearth insight in your business and operational environment.
********************
Big Data - Sociology
WCCI2016 Vancouver Keynote & Plenary talk
Charles Ragin is Chancellor’s Professor of Sociology at the University of California, Irvine ... he has developed software packages for set-theoretic analysis of social data, Qualitative Comparative Analysis (QCA) and Fuzzy-Set/Qualitative Comparative Analysis (fsQCA)
Today quantitative social science is dominated by analytic methods that are heavily slanted toward “variables”. The key focus of analysis is the assessment of the relative importance of “independent” variables on a dependent variable, and researchers view their central task as estimating “net effects”. Many scholars find the dominance of variable-oriented approaches deplorable and argue that the proper remedy is to drop the variable altogether from the lexicon of social research. I argue, however, that the notion of the variable should be reformulated in ways that enhance the interplay and integration of cross-case and within-case analysis. Central to this reformulation are set-theoretic methods such as truth table analysis. I show that set-theoretic methods not only provide a better way for researchers to study “connections” between aspects of cases, they also offer a better bridge to conceptual discourse. I argue further that the extensions and elaborations of set-theoretic methods that are afforded by the use of fuzzy sets are especially valuable for social research.
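HOWELL - Ragin's set-theoretic idea in miniature: a causal "recipe" is a fuzzy intersection (minimum) of condition memberships, and the standard fsQCA measures score how nearly the recipe is a subset of the outcome (consistency = sum(min(x,y))/sum(x)) and how much of the outcome it accounts for (coverage = sum(min(x,y))/sum(y)). The membership scores below are invented for six hypothetical cases; condition names just echo the Bell Curve example:

```python
import numpy as np

# Invented fuzzy membership scores for 6 hypothetical cases.
educated = np.array([0.9, 0.8, 0.2, 0.6, 0.1, 0.7])
married  = np.array([0.8, 0.3, 0.9, 0.7, 0.2, 0.9])
outcome  = np.array([0.9, 0.2, 0.3, 0.8, 0.1, 0.8])   # e.g. "not in poverty"

# A recipe is a configuration, not a net effect: fuzzy AND = minimum.
recipe = np.minimum(educated, married)

def consistency(x, y):
    """Degree to which fuzzy set x is a subset of y."""
    return np.minimum(x, y).sum() / x.sum()

def coverage(x, y):
    """Share of the outcome accounted for by x."""
    return np.minimum(x, y).sum() / y.sum()

print(round(consistency(recipe, outcome), 3))
print(round(coverage(recipe, outcome), 3))
```

A high-consistency, partial-coverage recipe is exactly Ragin's point against net-effects thinking: the combination matters, and several distinct recipes can each cover part of the outcome.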
http://www.socsci.uci.edu/~cragin/cragin/
In a recent review article in Contemporary Sociology entitled "The Ragin Revolution" (Vaisey review), sociologist Stephen Vaisey describes Ragin's work as a "principled alternative" to quantitative analysis (which assumes away causal complexity) and qualitative case-based methods (which lack tools for generalizing across cases). Many who have adopted Ragin's methods believe that these techniques combine the strengths of both quantitative and qualitative methods, while transcending their limits.
http://www.socsci.uci.edu/~cragin/cragin/Vaisey_RDSI.pdf
Review opening by STEPHEN VAISEY, University of California, Berkeley
Since publishing The Comparative Method (TCM) more than two decades ago, Charles Ragin has become sociological methodology’s George Wallace, Ross Perot, and Ralph Nader rolled into one. Defying the doctrines of the two major parties — quantitative and qualitative — Ragin has run an insurgent campaign dedicated to a principled alternative. In his latest book, Redesigning Social Inquiry: Fuzzy Sets and Beyond (RSI), Ragin continues his quest, claiming no less than to offer a “real alternative to conventional practices” that “is not a compromise between qualitative and quantitative” but rather “transcends many of their respective limitations” (p.6). Though some specifics have changed over twenty years, Ragin’s overarching goal remains the same: to reshape the way sociologists think about and practice their research.
http://www.compasss.org/wpseries/Lee2008.pdf
Seungyoon Sophia Lee >=2007 "A Critique of the Fuzzy-set Methods in Comparative Social Policy, A Critical Introduction and Review of the Applications of Fuzzy-set Methods" Social Policy and Social Work Department, University of Oxford
Abstract - This article critiques the Fuzzy-set Qualitative Analysis (fs/QCA) methodology by examining its applicability in three studies in the field of comparative social policy.
In each of these three test cases, I focus on the validity of fuzzy-set’s claimed function – its ability to combine theoretic discourse and evidence analysis. All three studies investigate welfare state reform in the late twentieth century and apply fs/QCA: (1) “Welfare Reform in the Nordic Countries in the 1990s: Using Fuzzy-set Theory to Assess Conformity to Ideal Types,” (2) “States of Welfare or States of Workfare? Welfare State Restructuring in 16 Capitalist Democracies, 1985-2002,” and (3) “The Diversity and Causality of Welfare State Reforms Explored with Fuzzy-sets.” This article begins by discussing the ontology and epistemology of comparative social policy. The fuzzy set logic and set-theoretic nature of social science theory is then discussed to align the ontology with fuzzy set methodology. Next, a more detailed introduction of the fuzzy-set method (fs/QCA) follows. This study suggests that fs/QCA is a unique and useful method for comparative social policy. It advances quantitative comparative analysis by interpreting attributes as a configuration. By applying fuzzy set logic and the principle of calibration, it advances qualitative analysis by permitting theoretically-informed concepts to be quantified.
+-----+
Example
http://www.u.arizona.edu/~cragin/fsQCA/download/Chapter5_CCA.pdf
Charles Ragin 2007 "Qualitative comparative analysis using fuzzy sets (fsQCA)" in Benoit Rihoux and Charles Ragin (editors), Configurational Comparative Analysis, Sage Publications, 2007
causal conditions relevant to the breakdown/survival of democracy in interwar Europe
Image - Table 5.1 Crisp versus fuzzy sets
Image - Table 5.2 Data matrix showing original variables and fuzzy-set membership scores
Image - Table 5.7 The correspondence between truth table rows and vector space corners
Image - Figure 5.1: Plot of degree of membership in BREAKDOWN against degree of membership in DUL case
+-----+
Charles C.
Ragin "The limitations of net-effects thinking" Chapter 2 in Editors: Benoît Rihoux, Heike Grimm, 2006 "Innovative Comparative Methods for Policy Analysis" Springer pp 13-41, Online ISBN 978-0-387-28829-1, University of Arizona
+-----+
Example - The Bell Curve
&&&&&
http://www-bcf.usc.edu/~fiss/Ragin%20Fiss.pdf
Charles Ragin, Peer Fiss ?2005? "Net Effects Analysis Versus Configurational Analysis"
&&&&&
http://www.uv.uio.no/ils/om/aktuelt/arrangementer/2015/invitation_open_lecture_charles_ragin_sept_23-25.pdf
Professor Charles C. Ragin, Department of Sociology, University of California, is visiting the Faculty of Educational Sciences on September 23rd–25th 2015.
Inequality is a key feature of human social organization — some would say the key feature. In almost all known societies, inequalities coincide. Those at the top of social hierarchies do their best to fortify their advantages, while those at the bottom struggle to gain leverage.
&&&&&
http://press.uchicago.edu/ucp/books/book/chicago/I/bo24957423.html
Charles C. Ragin and Peer C. Fiss Nov2016 "Intersectional Inequality - Race, Class, Test Scores, and Poverty", University of Chicago Press
For over twenty-five years, Charles C. Ragin has developed Qualitative Comparative Analysis and related set-analytic techniques as a means of bridging qualitative and quantitative methods of research. Now, with Peer C. Fiss, Ragin uses these impressive new tools to unravel the varied conditions affecting life chances. Ragin and Fiss begin by taking up the controversy regarding the relative importance of test scores versus socioeconomic background on life chances, a debate that has raged since the 1994 publication of Richard Herrnstein and Charles Murray’s The Bell Curve. In contrast to prior work, Ragin and Fiss bring an intersectional approach to the evidence, analyzing the different ways that advantages and disadvantages combine in their impact on life chances.
Moving beyond controversy and fixed policy positions, the authors propose sophisticated new methods of analysis to underscore the importance of attending to configurations of race, gender, family background, educational achievement, and related conditions when addressing social inequality in America today.
&&&&&
https://www.google.com/url?q=http://www.springer.com/cda/content/document/cda_downloaddocument/9780387288284-c2.pdf%3FSGWID%3D0-0-45-315613-p88091008&sa=U&ved=0ahUKEwia_K_Iu9_OAhVS0GMKHbxKAaMQFggVMAc&client=internal-uds-cse&usg=AFQjCNFGri86gm19OI9NTKhhE94GTRTtuA
Charles Ragin date "The limitations of net-effects thinking"
[Image - Bell Curve, Herrnstein & Murray analysis]
[Image - Bell Curve, Logistic regression analysis]
[Image - Bell Curve, Ragin Fuzzy sets analysis]
combined height = 254 + 318 + 300 = 872
max width = max(867, 862, 599) = 867
[Image - Bell Curve, Herrnstein & Murray, Logistic regression, Ragin Fuzzy sets analysis]
When cases are viewed configurationally, it is possible to identify the different combinations of conditions linked to an outcome. The results of the configurational analyses reported in this contribution show that there are several recipes for staying out of poverty. The recipes all include not having low AFQT scores, a favorable household composition - especially marriage for those with liabilities in other spheres (e.g., lacking college education or having low parental income) - and educational qualifications. Not having low parental income is also an important ingredient in two of the three recipes. Herrnstein and Murray dramatize the implications of their research by claiming that if one could choose at birth between having a high AFQT score and having a high parental SES (or high parental income), the better choice would be to select having a high AFQT score. The fuzzy-set results underscore the fact that the choice is really about combinations of conditions, about recipes, not about individual variables.
In short, choosing to not have a low AFQT score, by itself, does not offer protection from poverty. It must be combined with other resources.
&&&&&
Benoit Mandelbrot, Richard Hudson 2004 "The (Mis)Behaviour of Markets - a fractal view of risk, ruin, and reward" Basic Books, 328pp ISBN 0-465-04355-0
"... [Pareto] was fascinated by the problems of power and wealth. ... He gathered reams of data on wealth and income through different centuries, through different countries: the tax records of Basel, Switzerland, from 1454 and from Augsburg, Germany, in 1471, 1498, and 1512; contemporary rental income from Paris; personal income from Britain, Prussia, Saxony, Ireland, Italy, Peru. What he found - or thought he found - was striking. When he plotted the data on graph paper, with income level on one axis and number of people with that income on the other, he saw the same picture nearly everywhere in every era. Society was not a "social pyramid" with the proportion of rich to poor sloping gently from one class to the next. Instead, it was more of a "social arrow" - very fat on the bottom where the mass of men live, and very thin at the top where sit the wealthy elite. ... There is no progress in human history. Democracy is a fraud. Human nature is primitive, emotional, unyielding. ..."
&&&&&
Benoit Mandelbrot May1960 "The Pareto-Levy Law and the distribution of income" International Economic Review, v1 n2, https://www.jstor.org/stable/2525289?seq=1#page_scan_tab_contents
"... over a certain range of values of income U, its distribution is not markedly influenced either by the socio-economic structure of the community of the study, or by the definition chosen for "income". ..."
&&&&&
http://www.webofstories.com/play/benoit.mandelbrot/117;jsessionid=56F9BAF68D1A77B85C9EDC5F6833EFA6
"Pareto law and inequality in income distribution" Benoît Mandelbrot, Mathematician
&&&&&
Eugene F. Fama Oct1963 "Mandelbrot and the Stable Paretian Hypothesis" Journal of Business, Vol. 36, No.
4 (Oct., 1963), pp. 420-429, http://www.jstor.org/stable/2350971
https://web.williams.edu/Mathematics/sjmiller/public_html/341Fa09/handouts/Fama_MandelbroitAndStableParetianHypothesis.pdf
[Image - Global asymptotical w-periodicity of a fractional-order non-autonomous neural networks]
+-----+
Benoit Mandelbrot - maybe they are all wrong? Multi-fractals & fractional order calculus
[Image - Mandelbrot, Pareto wealth distribution p154]
********************
Deep Learning description
[Image - Deep history]
[Image - Prominent researchers] (from Jürgen's review)
https://research.facebook.com/blog/fair-open-sources-deep-learning-modules-for-torch
Soumith Chintala 16Jan2015 "Facebook AI Research (FAIR) open sources deep-learning modules for Torch"
https://research.googleblog.com/2016/04/deepmind-moves-to-tensorflow.html
Google Research blog logo.png
Google Deep Mind Tensor flow open source.png
https://deepmind.com/
Deep Mind website: Solve intelligence. Use it to make the world a better place.
&&&&&
https://www.youtube.com/watch?v=l2dVjADTEDU
Google's AI Chief Geoffrey Hinton - How Neural Networks Really Work, 31May2016, <06:15 into 20:47 presentation
[Video - Breakthrough with bigger data, faster/bigger processors]
&&&&&
https://www.youtube.com/watch?v=XkltShNd6XE
Prof. Jürgen Schmidhuber - True Artificial Intelligence Will Change Everything, 07Jul2016
&&&&&
https://www.youtube.com/watch?v=Q9Z20HCPnww
Deep Learning Demystified, Brandon Rohrer 23May2016
********************
Deep Learning - Driverless Cars
https://www.ted.com/talks/sebastian_thrun_google_s_driverless_car#t-239094
Great 4 min video 2011
https://www.youtube.com/watch?v=tq_OTcncPH0
A Real-Time Commute on Autopilot, 47 min duration
Great video and user commentary!!
+-----+
http://www.driverless-future.com/?page_id=384
Driverless car market watch : Gearing up to save lives, reduce costs, resource consumption (only latest comments by companies are listed)
GM: Autonomous cars could be deployed by 2020 or sooner (Source: Wall Street Journal, 2016-05-10)
BMW to launch autonomous iNext in 2021 (Source: Elektrek, 2016-05-12)
Ford’s head of product development: autonomous vehicle on the market by 2020 (Source: autonews, 2016-02-27)
Baidu’s Chief Scientist expects large number of self-driving cars on the road by 2019 (Source: Quora, 2016-01-29)
Elon Musk now expects first fully autonomous Tesla by 2018, approved by 2021 (Source: Borsen Interview on youtube, timeline: 8:06-8:29, recorded on 2015-9-23)
Jaguar and Land-Rover to provide fully autonomous cars by 2024 says Director of Research and Technology (Source: Drive.com.au, 2014-10-03)
Nissan to provide fully autonomous vehicles by 2020 (Source: Nissan Motors, 2013-08-27)
Truly autonomous cars to populate roads by 2028-2032 estimates insurance think tank executive (Source: The Detroit News, 2013-02-14)
Sergey Brin plans to have Google driverless car in the market by 2018 (Source: Driverless car market watch, 2012-10-02)
IEEE predicts up to 75% of vehicles will be autonomous in 2040 (Source: IEEE, 2012-09-05)
+-----+
https://en.wikipedia.org/wiki/Autonomous_car
Autonomous car
Some demonstrative systems, precursory to autonomous cars, date back to the 1920s and 30s. The first self-sufficient (and therefore, truly autonomous) cars appeared in the 1980s, with Carnegie Mellon University's Navlab and ALV projects in 1984 and Mercedes-Benz and Bundeswehr University Munich's Eureka Prometheus Project in 1987. Since then, numerous major companies and research organizations have developed working prototype autonomous vehicles.
Tesla Autopilot
Tesla's autonomous driving features are ahead of others in the industry, and can be classified as somewhere between level 2 and level 3 under the U.S. Department of Transportation’s National Highway Traffic Safety Administration (NHTSA) five levels of vehicle automation. At this level the car can act autonomously but requires the full attention of the driver, who must be prepared to take control at a moment's notice.[84][85][86] Autopilot should be used only on limited-access highways, and sometimes it will fail to detect lane markings and disengage itself. In urban driving the system will not read traffic signals or obey stop signs. The system also does not detect pedestrians or cyclists.[87]
The first known fatal accident involving a vehicle being driven by itself took place in Williston, Florida on 7 May 2016 while a Tesla Model S electric car was engaged in Autopilot mode. The driver was killed in a crash with a large 18-wheel tractor-trailer. Tesla also stated that this was Tesla’s first known Autopilot death in over 130 million miles (208 million km) driven by its customers with Autopilot engaged.
According to Tesla there is a fatality every 94 million miles (150 million km) among all types of vehicles in the U.S.[88][89][93]
Google self-driving car
As of March 2016, Google had test driven their fleet of driverless cars in autonomous mode a total of 1,498,214 mi (2,411,142 km).[99]
Autonomous driving functions
Measurement of Assured Clear Distance Ahead
Autonomous cruise control system
Automatic parking
Death by GPS
Electronic stability control
Lane Keep Assist
Precrash system
Automated platooning
+-----+
Traffic sign recognition
German traffic sign competition, IJCNN 2013, Dallas TX
&&&&&
Sebastian Houben, Johannes Stallkamp, Jan Salmen, Marc Schlipsing, Christian Igel 04Aug2013 "Detection of Traffic Signs in Real-World Images: The German Traffic Sign Detection Benchmark" Proceedings of International Joint Conference on Neural Networks, Dallas, Texas, USA, August 4-9, 2013
Houben etal 04Aug2013 Detection of Traffic Signs in Real-World Images, The German Traffic Sign Detection Benchmark
Abstract — Real-time detection of traffic signs, the task of pinpointing a traffic sign’s location in natural images, is a challenging computer vision task of high industrial relevance. Various algorithms have been proposed, and advanced driver assistance systems supporting detection and recognition of traffic signs have reached the market. Despite the many competing approaches, there is no clear consensus on what the state-of-the-art in this field is. This can be accounted to the lack of comprehensive, unbiased comparisons of those methods. We aim at closing this gap by the ”German Traffic Sign Detection Benchmark” presented as a competition at IJCNN 2013 (International Joint Conference on Neural Networks). We introduce a real-world benchmark data set for traffic sign detection together with carefully chosen evaluation metrics, baseline results, and a web-interface for comparing approaches.
In our evaluation, we separate sign detection from classification, but still measure the performance on relevant categories of signs to allow for benchmarking specialized solutions. The considered baseline algorithms represent some of the most popular detection approaches such as the Viola-Jones detector based on Haar features and a linear classifier relying on HOG descriptors. Further, a recently proposed problem-specific algorithm exploiting shape and color in a model-based Hough-like voting scheme is evaluated. Finally, we present the best-performing algorithms of the IJCNN competition. ... Our final dataset comprises 900 full images containing 1206 traffic signs. We split the dataset randomly into a training (600 images, 846 traffic signs) and an evaluation set (300 images, 360 traffic signs). ... Image - GTSDB example photos Image - GTSDB sign classifications Image - GTSDB Basic types of Haar wavelet features used for the Viola-Jones detector Image - GTSDB Five most significant Haar features selected for the first stage of each of the 3 trained detectors Image - GTSDB A choice of false positive results of prohibitive (top row) and danger sign detection (bottom row) Image - GTSDB Competition ranking by area-under-curve (average overlap) Winning team wgy@HIT501 The features used in the two filterings are both HOG, and the classifiers used are LDA and IK-SVM respectively. The baseline is enough to give high recall and precision for prohibitory signs, while some extra steps are needed for the other two categories. For danger signs, we perform projective adjustment to the ROIs and re-classify them with HOG and SVM. For mandatory signs, we train a class-specific SVM for each class of mandatory sign, and if any of the SVMs outputs positive response for a ROI, then the ROI is determined to be a true positive. 
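The winning entry's two-stage filtering (a cheap, high-recall first pass, then a more selective second classifier on the survivors) is a general cascade pattern. A minimal sketch, with hypothetical scoring functions standing in for the HOG/LDA and IK-SVM stages used by the team:

```python
def two_stage_detect(candidates, cheap_score, precise_score,
                     recall_threshold=0.2, precision_threshold=0.8):
    """Generic two-stage cascade: a permissive first stage keeps recall high,
    a stricter (typically costlier) second stage restores precision."""
    stage1 = [c for c in candidates if cheap_score(c) >= recall_threshold]
    return [c for c in stage1 if precise_score(c) >= precision_threshold]

# Toy stand-ins: each candidate ROI is just a number, and the "scores" are
# hypothetical confidence functions, not actual HOG/LDA or IK-SVM outputs.
candidates = [0.1, 0.3, 0.5, 0.9]
detections = two_stage_detect(candidates,
                              cheap_score=lambda c: c,
                              precise_score=lambda c: c)
```

The point of the pattern is that the first stage discards most candidate windows cheaply, so the expensive second-stage classifier only runs on a few survivors.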
Experimental results show the proposed method gives very high recalls and precisions for all three categories, but the processing time for one image is several seconds, not enough for real-time application.
&&&&&
Pierre Sermanet, Yann LeCun 2011 "Traffic Sign Recognition with Multi-Scale Convolutional Networks" http://yann.lecun.com/exdb/publis/pdf/sermanet-ijcnn-11.pdf
Zhe Zhu, Dun Liang, Songhai Zhang, Xiaolei Huang, Baoli Li, Shimin Hu 2016 "Traffic-Sign Detection and Classification in the Wild" http://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Zhu_Traffic-Sign_Detection_and_CVPR_2016_paper.pdf
Abstract - Although promising results have been achieved in the areas of traffic-sign detection and classification, few works have provided simultaneous solutions to these two tasks for realistic real world images. We make two contributions to this problem. Firstly, we have created a large traffic-sign benchmark from 100000 Tencent Street View panoramas, going beyond previous benchmarks. It provides 100000 images containing 30000 traffic-sign instances. These images cover large variations in illuminance and weather conditions. Each traffic-sign in the benchmark is annotated with a class label, its bounding box and pixel mask. We call this benchmark Tsinghua-Tencent 100K. Secondly, we demonstrate how a robust end-to-end convolutional neural network (CNN) can simultaneously detect and classify traffic-signs. Most previous CNN image processing solutions target objects that occupy a large proportion of an image, and such networks do not work well for target objects occupying only a small fraction of an image like the traffic-signs here. Experimental results show the robustness of our network and its superiority to alternatives. The benchmark, source code and the CNN model introduced in this paper are publicly available ...
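The detection benchmarks above rank entries by average overlap between predicted and ground-truth boxes. The standard overlap measure is the Jaccard index (intersection over union) of two axis-aligned boxes; a minimal sketch:

```python
def iou(a, b):
    """Jaccard overlap of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Intersection rectangle (may be empty).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

A detection is typically counted as correct when this overlap with a ground-truth box reaches a threshold such as 0.5.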
Our network achieved 84% accuracy and 94% recall at a Jaccard similarity coefficient of 0.5, without carefully tuning its parameters, which significantly outperforms the results obtained by previous object detection methods ...
... We also tested our detector on the 90000 panoramas that contained no traffic-signs, and the network perfectly identified them all as only containing background. ...
... We compare the results provided by our network and the state-of-the-art Fast R-CNN method [9].
Image - Zhu etal Chinese traffic-sign classes
&&&&&
Yujun Zeng, Xin Xu, Yuqiang Fang, Kun Zhao >=2015 "Traffic Sign Recognition Using Extreme Learning Classifier with Deep Convolutional Features" http://www.ntu.edu.sg/home/egbhuang/pdf/Traffic-Sign-Recognition-Using-ELM-CNN.pdf
Abstract - Traffic sign recognition is an important but challenging task, especially for automated driving and driver assistance. Its accuracy depends on two aspects: feature extractor and classifier. Current popular algorithms mainly use convolutional neural networks (CNN) to execute both feature extraction and classification. Such methods could achieve impressive results but usually on the basis of an extremely huge and complex network. What’s more, since the fully-connected layers in CNN form a classical neural network classifier, which is trained by gradient descent-based implementations, the generalization ability is limited and sub-optimal. The performance could be further improved if other favorable classifiers are used and extreme learning machine (ELM) classifier is just the candidate. In this paper, ELM classifier equipped with deep convolutional features is utilized, which integrates the terrific discriminative capability of deep convolutional features learnt by CNN with the outstanding generalization performance of ELM classifier. Firstly CNN learns deep and robust features, followed by the removing of the fully-connected layers, which turns CNN to be the feature extractor.
Then ELM fed with the CNN features is used as the classifier to conduct an excellent classification. Experiments on German traffic sign recognition benchmark (GTSRB) demonstrate that the proposed method can obtain competitive results with state-of-the-art algorithms with much less complexity. Image - Zeng etal CNN feature extractor architecture Image - Zeng etal Recognition accuracy comparison of MCDNN, CNNaug/MLP, Multi-scale CNN, Plain CNN, CNN-SVM, and the proposed architecture &&&&& Mrinal Haloi, IIT Guwahati 17Jul2016 "Traffic Sign Classification Using Deep Inception Based Convolutional Networks" arXiv:1511.02992v2 http://arxiv.org/pdf/1511.02992.pdf Abstract — In this work, we propose a novel deep network for traffic sign classification that achieves outstanding performance on GTSRB surpassing all previous methods. Our deep network consists of spatial transformer layers and a modified version of inception module specifically designed for capturing local and global features together. This features adoption allows our network to classify precisely intraclass samples even under deformations. Use of spatial transformer layer makes this network more robust to deformations such as translation, rotation, scaling of input images. Unlike existing approaches that are developed with hand-crafted features, multiple deep networks with huge parameters and data augmentations, our method addresses the concern of exploding parameters and augmentations. 
We have achieved the state-of-the-art performance of 99.81% on the GTSRB dataset
Image - Haloi, Comparison of accuracy (top-1) with the state of the art
+-----+
Sound, vibration, feel recognition
Google Android - speechnotes
Jordan Novet 28May2015 "Google says its speech recognition technology now has only an 8% word error rate (Down from 23% in 2013)" http://venturebeat.com/2015/05/28/google-says-its-speech-recognition-technology-now-has-only-an-8-word-error-rate/
Google today announced its advancements in deep learning, a type of artificial intelligence, for key processes like image recognition and speech recognition. When it comes to accurately recognizing words in speech, Google now has just an 8 percent error rate. Compare that to 23 percent in 2013, Sundar Pichai, senior vice president of Android, Chrome, and Apps at Google, said at the company’s annual I/O developer conference in San Francisco. Pichai boasted, “We have the best investments in machine learning over the past many years.” Indeed, Google has acquired several deep learning companies over the years, including DeepMind, DNNresearch, and Jetpac. Deep learning involves ingesting lots of data to train systems called neural networks, and then feeding new data to those systems and receiving predictions in response. The company’s current neural networks are now more than 30 layers deep, Pichai said. Google uses deep learning across many types of services, including object recognition in YouTube videos and even optimization of its vast data centers. Meanwhile, Baidu, Facebook, and Microsoft are also beefing up their deep learning capabilities. Earlier-stage companies like Flipboard, Pinterest, and Snapchat have also been doing research in the area — but none have the computing power that Google does. So Google’s achievements in real production apps are a pretty big deal.
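Word error rate figures like the 8% and 23% quoted above come from the word-level edit distance between the recognizer's output and a reference transcript, divided by the reference length. A minimal sketch using the standard dynamic-programming Levenshtein distance:

```python
def word_error_rate(reference, hypothesis):
    """WER = (substitutions + insertions + deletions) / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between the first i ref words and first j hyp words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution or match
    return dp[len(ref)][len(hyp)] / len(ref)
```

Note that production systems report WER over large evaluation corpora; this sketch just shows the metric itself.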
+-----+
http://www.smh.com.au/comment/the-future-is-nearly-here-with-driverless-cars--and-its-bright-20160818-gqw97l.html Peter FitzSimons 21Aug2016 The future is nearly here with driverless cars - and it's bright, Sydney Morning Herald
But in Pittsburgh on Thursday, Uber and Volvo announced a $US300 million partnership that will see, in a matter of weeks, people in that city able to order up a driverless car to take them where they want to go. Stunning, yes?
https://en.wikipedia.org/wiki/Google_self-driving_car
Google Self-Driving Car is any in a range of autonomous cars, developed by Google X as part of its project to develop technology for mainly electric cars.[1] Lettering on the side of each car identifies it as a "self-driving car". The project was formerly led by Sebastian Thrun, former director of the Stanford Artificial Intelligence Laboratory and co-inventor of Google Street View. Thrun's team at Stanford created the robotic vehicle Stanley which won the 2005 DARPA Grand Challenge and its US$2 million prize from the United States Department of Defense.[2] The team developing the system consisted of 15 engineers working for Google, including Chris Urmson, Mike Montemerlo, and Anthony Levandowski who had worked on the DARPA Grand and Urban Challenges.[3]
Technology
The project team has equipped a number of different types of cars with the self-driving equipment, including the Toyota Prius, Audi TT, and Lexus RX450h.[14] Google has also developed their own custom vehicle, which is assembled by Roush Enterprises and uses equipment from Bosch, ZF Lenksysteme, LG, and Continental.[15][16] Google's robotic cars have about $150,000 in equipment including a $70,000 LIDAR system.[17] The range finder mounted on the top is a Velodyne 64-beam laser. This laser allows the vehicle to generate a detailed 3D map of its environment.
The car then takes these generated maps and combines them with high-resolution maps of the world, producing different types of data models that allow it to drive itself.[18] As of June 2014, the system works with a very high definition inch-precision map of the area the vehicle is expected to use, including how high the traffic lights are; in addition to on-board systems, some computation is performed on remote computer farms.[19]
+-----+
Plamen Angelov's Android app
Howell - I sold voice dictation systems mid-1990's to doctors and lawyers
********************
Deep Learning - Checkers, Chess, Go
Image - Blondie24 architecture : https://www.amazon.ca/Blondie24-Playing-at-Edge-AI/dp/1558607838 David B. Fogel 26Oct2001 "Blondie24: Playing at the Edge of AI" Morgan Kaufmann, 406pp ISBN-13: 978-1558607835 Blondie24 architecture.png
https://www.youtube.com/watch?v=QSNs-PYv7co David Fogel interview 01Nov2010
homepages.gac.edu/~wolfe/385/2003S/talks/jhill2/Blondie24good.ppt Powerpoint details
http://www.usacheckers.com/blondierebuttal.php Jim Loy's criticism of Blondie24's play
http://deeplearning4j.org/compare-dl4j-torch7-pylearn.html DL4J vs. Torch vs. Theano vs. Caffe vs.
TensorFlow Interesting comparison of platforms
Bill Howell 30Dec2011 "Social graphs, social sets, and social media" from project at Natural Resources Canada, Spine project, 68pp http://www.billhowell.ca/Social%20media/Howell%20111230%20%E2%80%93%20Social%20graphs,%20social%20sets,%20and%20social%20media.pdf
Blondie24 kills the opponent.png
Bill Howell 30Dec2011 "Systems design issues for social media" 18pp http://www.billhowell.ca/Social%20media/Howell%20110902%20%E2%80%93%20Systems%20design%20issues%20for%20social%20media.pdf
Bill Howell 06Oct2011 "SPINE – Semantics beyond search" 31pp http://www.billhowell.ca/Social%20media/Howell%20111006%20-%20SPINE,%20Semantics%20beyond%20search.pdf
Bill Howell 30Dec2011 "Data-mining including Social Media: Set-up and use" 15pp http://www.billhowell.ca/Social%20media/Howell%20111117%20-%20How%20to%20set%20up%20&%20use%20data%20mining%20with%20Social%20media.pdf
&&&&&
https://en.wikipedia.org/wiki/Deep_Blue_versus_Garry_Kasparov
Deep Blue versus Garry Kasparov was a pair of six-game chess matches between world chess champion Garry Kasparov and an IBM supercomputer called Deep Blue. The first match was played in Philadelphia in 1996 and won by Kasparov. The second was played in New York City in 1997 and won by Deep Blue. The 1997 match was the first defeat of a reigning world chess champion by a computer under tournament conditions.
&&&&&
https://en.wikipedia.org/wiki/Google_Brain
https://en.wikipedia.org/wiki/Google_DeepMind Google DeepMind
https://en.wikipedia.org/wiki/AlphaGo AlphaGo
In October 2015, it became the first Computer Go program to beat a professional human Go player without handicaps on a full-sized 19×19 board.[2][3] In March 2016, it beat Lee Sedol in a five-game match, the first time a computer Go program has beaten a 9-dan professional without handicaps.[4] Although it lost to Lee Sedol in the fourth game, Lee resigned the final game, giving a final score of 4 games to 1 in favour of AlphaGo.
In recognition of beating Lee Sedol, AlphaGo was awarded an honorary 9-dan by the Korea Baduk Association. Algorithm As of 2016, AlphaGo's algorithm uses a combination of machine learning and tree search techniques, combined with extensive training, both from human and computer play. It uses Monte Carlo tree search, guided by a "value network" and a "policy network," both implemented using deep neural network technology.[2][6] A limited amount of game-specific feature detection pre-processing (for example, to highlight whether a move matches a nakade pattern) is applied to the input before it is sent to the neural networks.[6] The system's neural networks were initially bootstrapped from human gameplay expertise. AlphaGo was initially trained to mimic human play by attempting to match the moves of expert players from recorded historical games, using a database of around 30 million moves.[13] Once it had reached a certain degree of proficiency, it was trained further by being set to play large numbers of games against other instances of itself, using reinforcement learning to improve its play.[2] To avoid "disrespectfully" wasting its opponent's time, the program is specifically programmed to resign if its assessment of win probability falls beneath a certain threshold; for the March 2016 match against Lee, the resignation threshold was set to 20%.[35] China's Ke Jie, an 18-year-old generally recognized as the world's best Go player,[23][52] initially claimed that he would be able to beat AlphaGo, but declined to play against it for fear that it would "copy my style".[52] As the matches progressed, Ke Jie went back and forth, stating that "it is highly likely that I (could) lose" after analyzing the first three matches,[53] but regaining confidence after AlphaGo displayed flaws in the fourth match.[54] Image - Deep AlphaGo (black) vs Fan Hui, Game 4 (8 October 2015) Video - Deep Blue versus Kasparov, couldn't accept machine worked alone ******************** Deep 
Learning - Social Media
Table - Bill Howell - Social media thoughts from 2011 (random, scattered, incomplete) Howell - Social Media further thoughts.png
+-----+
http://www.wired.com/2013/03/google_hinton/ Robert McMillan "Google Hires Brains that Helped Supercharge Machine Learning" Business magazine
Google has hired the man who showed how to make computers learn much like the human brain. His name is Geoffrey Hinton, and on Tuesday, Google said that it had hired him along with two of his University of Toronto graduate students — Alex Krizhevsky and Ilya Sutskever. Their job: to help Google make sense of the growing mountains of data it is indexing and to improve products that already use machine learning — products such as Android voice search. Google paid an undisclosed sum to buy Hinton’s company, DNNresearch. It’s a bit of a best-of-both-worlds deal for the researcher. He gets to stay in Toronto, splitting his time between Google and his teaching duties at the University of Toronto, while Krizhevsky and Sutskever fly south to work at Google’s Mountain View, California campus.
+-----+
https://www.facebook.com/yann.lecun/posts/10151728212367143 Yann LeCun 09Dec2013 FaceBook page
Big news today! Facebook has created a new research laboratory with the ambitious, long-term goal of bringing about major advances in Artificial Intelligence. I am thrilled to announce that I have accepted the position of director of this new lab. I will remain a professor at New York University on a part-time basis, and will maintain research and teaching activities at NYU. Simultaneously, Facebook and New York University's Center for Data Science are entering a partnership to carry out research in data science, machine learning, and AI. The new AI Group at Facebook will have locations in Menlo Park, CA, in London, UK, and at Facebook's new facility in New York City, one block away from NYU's main campus.
Facebook CEO Mark Zuckerberg, CTO Michael Schroepfer and I are at the Neural Information Processing Systems Conference in Lake Tahoe today. Mark will announce the news during his presentation at the NIPS Workshop on Deep Learning later today. And we are hiring! +-----+ https://blog.hootsuite.com/artificial-intelligence-in-social-media/ Artificial Intelligence in Social Media: What AI Knows About You, and What You Need to Know By Olsy Sorokina, 13 March 2015 Social The truth is, artificial intelligence is what helps everyday user experience on Facebook get better. Developing deep learning technologies to sort through large databases helps adjust Facebook’s suggestions, News Feed filters, figure out trending topics, and tag the appropriate friends in photos—all without spending too much manpower on data analysis. With over 800 million users logging in and generating massive amounts of data every day, advanced deep learning technology is the best way for the network to make that information work in users’ advantage. If you’re having trouble picturing all the implications of deep learning algorithms on Facebook, then imagine that tripled for Google. Google’s acquisition of British artificial intelligence startup DeepMind in January 2014 made a splash in both tech and AI academic communities. Reportedly costing Google over $400 million to acquire, the DeepMind project now recruits a large portion of the world’s leading researchers in deep learning. Peter Norvig, Google’s director of research, perfectly summed up his company’s ace in the deck when it comes to employing AI experts in an interview with MIT Technology Review: “We said to Geoff [Hinton, one of the world’s leading deep learning researchers], ‘We like your stuff. Would you like to run models that are 100 times bigger than anyone else’s?’ That was attractive to him.” Google DeepMind recently made headlines after one of its programs beat thirty Atari games, outperforming a human player in at least one of them. 
According to a New Yorker article on the topic, the DeepMind team now claims that the program is a “novel artificial agent” that combines two existing forms of brain-inspired machine intelligence: a deep neural network and a reinforcement-learning algorithm. If that sentence doesn’t immediately sound revolutionary to you, let’s break it down: a program combining these two forms of intelligence can not only analyze and extrapolate patterns from existing data, but also learn from these patterns in order to achieve a highly desired objective—winning the game. This new form of learning has huge implications for the future of artificial intelligence programs: faster, more efficient learning can save a lot of operational memory and accomplish tasks more efficiently.
Labeled data scarcity
Written language, despite the variations mentioned above, has a lot of structure that can be extracted from unlabeled text using unsupervised learning and captured in embeddings. Deep learning offers a good framework to leverage these embeddings and refine them further using small labeled data sets. This is a significant advantage over traditional methods, which often require large amounts of human-labeled data that are inefficient to generate and difficult to adapt to new tasks. In many cases, this combination of unsupervised learning and supervised learning significantly improves performance, as it compensates for the scarcity of labeled data sets.
Current developments
Better understanding people's interests
Joint understanding of textual and visual content
+-----+
https://code.facebook.com/posts/181565595577955/introducing-deeptext-facebook-s-text-understanding-engine/ Introducing DeepText: Facebook's text understanding engine
Text understanding on Facebook requires solving tricky scaling and language challenges where traditional NLP techniques are not effective.
Using deep learning, we are able to understand text better across multiple languages and use labeled data much more efficiently than traditional NLP techniques. DeepText has built on and extended ideas in deep learning that were originally developed in papers by Ronan Collobert and Yann LeCun from Facebook AI Research. Understanding more languages faster The community on Facebook is truly global, so it's important for DeepText to understand as many languages as possible. Traditional NLP techniques require extensive preprocessing logic built on intricate engineering and language knowledge. There are also variations within each language, as people use slang and different spellings to communicate the same idea. Using deep learning, we can reduce the reliance on language-dependent knowledge, as the system can learn from text with no or little preprocessing. This helps us span multiple languages quickly, with minimal engineering effort. Deeper understanding In traditional NLP approaches, words are converted into a format that a computer algorithm can learn. The word “brother” might be assigned an integer ID such as 4598, while the word “bro” becomes another integer, like 986665. This representation requires each word to be seen with exact spellings in the training data to be understood. With deep learning, we can instead use “word embeddings,” a mathematical concept that preserves the semantic relationship among words. So, when calculated properly, we can see that the word embeddings of “brother” and “bro” are close in space. This type of representation allows us to capture the deeper semantic meaning of words. Using word embeddings, we can also understand the same semantics across multiple languages, despite differences in the surface form. As an example, for English and Spanish, “happy birthday” and “feliz cumpleaños” should be very close to each other in the common embedding space. 
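The closeness of embeddings such as “brother” and “bro” is typically measured by cosine similarity. A toy sketch with made-up 4-dimensional vectors (real embeddings are learned from text and have hundreds of dimensions):

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Hypothetical embeddings, invented for illustration only: "brother" and
# "bro" are given nearby vectors, "stock" a distant one.
embeddings = {
    "brother": [0.90, 0.10, 0.80, 0.20],
    "bro":     [0.85, 0.15, 0.75, 0.25],
    "stock":   [0.10, 0.90, 0.05, 0.70],
}
```

With properly trained embeddings, `cosine_similarity(embeddings["brother"], embeddings["bro"])` is high while the similarity to an unrelated word like "stock" is low, which is the "close in space" property described above.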
By mapping words and phrases into a common embedding space, DeepText is capable of building models that are language-agnostic.
+-----+
http://torch.ch/ Torch Website
Torch is constantly evolving: it is already used within Facebook, Google, Twitter, NYU, IDIAP, Purdue and several other companies and research labs.
+-----+
http://www.businessinsider.com/social-medias-big-data-future-from-deep-learning-to-predictive-marketing-2014-2 Reinventing Social Media: Deep Learning, Predictive Marketing, And Image Recognition Will Change Everything Cooper Smith, Feb. 16, 2014, 6:02 PM
Here are some of the key takeaways from the report:
Thanks to deep learning, social media has the potential to become far more personalized. New marketing fields are quickly emerging, too: audience clustering, predictive marketing, and sophisticated brand sentiment analysis.
Seventy-one percent of chief marketing officers around the globe feel their organization is unprepared to deal with the explosion of big data over the next few years, according to an IBM survey. They cited it as their top challenge.
Targeted and personalized marketing using social data is expected to be the business area that benefits the most from mining big data — 61% of data professionals say big data will overhaul the practice for the better, according to Booz & Company.
Facebook ingests approximately 500 times more data each day than the New York Stock Exchange (NYSE). Twitter is storing at least 12 times more data each day than the NYSE.
By deciphering image and video-based data, marketers will be more effective and comprehensive in their "social listening" efforts. Large companies spend a great deal of money monitoring people's attitudes toward a specific brand or product, and despite all the photo- and video-sharing happening on social media, these mediums were formerly mostly invisible to their analytics tools.
Image - Social Media volume of digital data stored in databases : http://www.businessinsider.com/social-medias-big-data-future-from-deep-learning-to-predictive-marketing-2014-2 Cooper Smith 16Feb2014 "Reinventing Social Media: Deep Learning, Predictive Marketing, And Image Recognition Will Change Everything"
+-----+
http://www.socialmediatoday.com/technology-data/deep-learning-end-seo-we-know-it Deep Learning - The End of SEO (Search Engine Optimization) as We Know It February 16, 2016 Sam Kerzhnerman
The news that Google's head of search, Amit Singhal, is leaving the company he spent 15 years with had a shocking effect on the SEO community. And what is more surprising - his successor, John Giannandrea, is the one who has worked on artificial intelligence at Google (including RankBrain - the part of the search algorithm which uses AI to handle queries the search engine was not able to understand before). With this change of executives, we may be on the verge of a new era - the transition from algorithm-based search to AI-based search. To power its artificial intelligence, Google uses deep learning (also known as neural networks) - a machine learning method which uses a mathematical model to mimic the way human brain neurons work. Amit Singhal was against using machine learning inside Google Search because it is not clear how a neural net ranks the results, and it is thus more difficult to tweak its behavior. This resistance cost him his career.
How Deep Learning Will Change SEO
Threat #1: No control over search algorithm
Amit Singhal was correct - with neural networks using unsupervised learning, it is very hard to define which factors the machine uses to rank websites in search, and how these factors are related to each other.
The factors which are considered less efficient for now (for example, working under HTTPS or having valid W3C markup) can obtain higher importance for an AI-based ranking algorithm - because the machine uses a different approach when it creates its own concepts from input data. Moreover, the AI may even start using factors which Google doesn’t use to rank websites. And neither engineers nor users will know that.
Threat #2: Potential errors due to the nature of the deep learning method
Do you remember how Google Translate, which is also based on machine learning, converted “Russian Federation” to “Mordor” in its Ukrainian-to-Russian version? Other errors included “Russians” translated as “occupiers” and the name of the Russian minister Sergey Lavrov translated as “sad little horse.” This happened due to how the neural net works with data. And while this particular error has been noticed and fixed, imagine how many errors will go unnoticed (and unfixed).
Threat #3: Heavier personalization
With AI technology already used to deliver Google ads, it is clear that search results will be personalized more heavily over time. Thus, each visitor will have search results based on his/her previous search queries, age, gender, income and all other information collected by Google. So rankings will be based on the user’s persona, and not on how relevant the search results are to the particular query.
The Post-Algorithmic SEO
In this post-algorithmic world, it will be impossible to build links or optimize pages in order to manipulate search results. Even the term “SEO-friendly” may disappear.
Instead, the only thing to focus on will be “user-friendly.” +-----+ Image - Social Media Marketing Priorities : Image - Social media Top photo sharing sites : Image - Social Media Ad spending : http://www.businessinsider.com.au/social-medias-new-big-data-frontiers-artificial-intelligence-deep-learning-and-predictive-marketing-2014-2 Cooper Smith 08Feb2014 "Social Media's New Big Data Frontiers -- Artificial Intelligence, Deep Learning, And Predictive Marketing" KEY POINTS - Seventy-one per cent of chief marketing officers around the globe say their organisation is unprepared to deal with the explosion of big data over the next few years, according to an IBM survey. They cited it as their top challenge, ahead of device fragmentation and shifting demographics. - The data tidal wave shows no signs of abating. By 2015, research firm IDC predicts there will be more than 5,300 exabytes of unstructured digital consumer data stored in databases, and we expect a large share of that to be generated by social networks. For context, one exabyte equals 1 million terabytes, and Facebook’s databases ingest approximately 500 terabytes of data each day. Facebook ingests approximately 500 times more data each day than the New York Stock Exchange (NYSE). Twitter is storing at least 12 times more data each day than the NYSE. - “Unstructured” big data means data that is spontaneously generated and not easily captured and classified. (“Structured” data is more akin to data entered into a form, like a user name might be, or generated as part of a pre-classified series, like the time stamp on a tweet.) - Machine learning or artificial intelligence (AI) — the study of how computer systems can be programmed to exhibit problem-solving and decision-making capabilities that emulate human intelligence — is helping marketers and advertisers glean insights from this vast ocean of unstructured consumer data collected by the world’s largest social networks.
- Advances in “deep learning,” cutting-edge AI research that attempts to program machines to perform high-level thought and abstractions, are allowing marketers to extract information from the billions of photos, videos, and messages uploaded and shared on social networks each day. Image recognition technology is now advanced enough to identify brand logos in photos. - Audience targeting and personalised predictive marketing using social data are expected to be some of the business areas that benefit the most from mining big data — 61% of data professionals say big data will overhaul marketing for the better, according to Booz & Company. . Unlocking Image And Video Data Social networking experiences are becoming increasingly centered around photos and videos, but it is extremely difficult to extract information from visual content. Because of this, image and video recognition are two of the more exciting disciplines being worked on in the field of AI and deep learning. Facebook users upload 350 million photos each day. Snapchat users share 400 million “snaps” (Snapchat’s term for photos and videos shared over the network) each day. Instagram users upload 55 million photos each day. . Image - Social media Top photo sharing sites Text mining is another field that’s rapidly evolving. A team of Belgian computer science researchers developed what they call an opinion mining algorithm that can identify positive, negative, and neutral sentiment in Web content with 83% accuracy for English text. Accuracy fluctuates depending on the language of the text because of the variety of linguistic expressions. The more complex a language is, the more training a machine-learning system requires. 
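The opinion-mining idea above can be illustrated with a minimal lexicon-based sentiment sketch in Python. This is not the Belgian team's algorithm (which is not specified here); the tiny word lists and scoring rule below are illustrative assumptions only:

```python
# Minimal lexicon-based sentiment scoring sketch.
# The tiny lexicons and the simple counting rule are invented for
# illustration; real opinion-mining systems learn their features.

POSITIVE = {"good", "great", "excellent", "love", "happy"}
NEGATIVE = {"bad", "poor", "terrible", "hate", "sad"}

def sentiment(text):
    """Return 'positive', 'negative', or 'neutral' for a piece of text."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("I love this great product"))   # positive
print(sentiment("terrible service very bad"))   # negative
```

A hand-built lexicon like this is exactly what breaks down across languages, which is one way of seeing why the reported accuracy fluctuates with linguistic complexity and why more complex languages need more training data.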
Clustering Like-Minded Consumers “The first level is understanding who are these people as individuals — are they real, are they spam bots, and what do they talk about?” “The second level is grouping similar individuals together so that you have a network of say, 20,000 people, and then a machine-learning system will have enough raw data to accurately draw some conclusions about those people. At that point, we begin to move away from studying graph analytics and into a new study of understanding how large networks of people interact with each other,” said Jim Hendler. Predicting Consumer Actions But what if social media becomes good enough at predicting what products or services customers might become interested in purchasing in the future? Then the ads shown to users will be personalised and predictive, rather than trying to grab a consumer’s attention once a purchase process has already begun, when the consumer’s mind might already be made up. +-----+ https://en.wikipedia.org/wiki/X_%28company%29 Google X (company) X is an American semi-secret research and development facility founded by Google in January 2010 as Google[x][1][2] and operated as a subsidiary of Alphabet Inc.[3] X is located about a half mile from Google's corporate headquarters, the Googleplex, in Mountain View, California.[4][5] Work at X is overseen by entrepreneur and scientist Astro Teller, as CEO and "Captain of Moonshots".[6][7][8] The lab started with the development of Google's self-driving car.[8] On 2 October 2015, after the complete restructuring of Google into Alphabet, the company was renamed X. Self-driving car & several others Google Brain is now a deep learning research project at Google which started as an X project.
Astro Teller has said that this one project, considered one of the biggest successes,[29] has produced enough value for Google to more than cover the total costs of X.[30] On 26 January 2014, multiple news outlets stated that Google had purchased DeepMind Technologies for an undisclosed amount. Analysts later announced that the company was purchased for £400 Million ($650M USD / €486M), although later reports estimated the acquisition was valued at over £500 Million.[9][10][11][12][13][14][15] The acquisition reportedly took place after Facebook ended negotiations with DeepMind Technologies in 2013, which resulted in no agreement or purchase of the company.[16] +-----+ https://en.wikipedia.org/wiki/Calico_%28company%29 Calico LLC is an independent research and development biotech company established in 2013 by Google Inc. and Arthur D. Levinson with the goal of combating aging and associated diseases.[2] In Google's 2013 Founders' Letter, Larry Page described Calico as a company focused on "health, well-being, and longevity." The company's name is an acronym for "California Life Company".[3][4] ******************** Howell questions about Deep Learning Don't [see, hear] about RNN's!! Is backpropagation being properly applied? (Paul Werbos -> Don Wunsch, credit assignment) Cubature Kalman Filters Simon Haykin McMaster (?A? now at Ford Research in Canada) What about marrying ADP & Deep Learning? Evolutionary Computational contributions ******************** Safety of CI itself Video - Low-speed control for LoFLYTE® Mach 5 Waverider Remotely Piloted Vehicle (RPV), http://www.accurate-automation.com/content/LoFlyte_Flight, Accurate Automation Corporation - LoFLYTE® is a Mach 5 Waverider Remotely Piloted Vehicle (RPV) testbed vehicle used for AAC Neural Net Controller development. It has been used at Edwards AFB by the 419th.
google safety and "neural networks" &&&&& https://www-users.cs.york.ac.uk/tpk/kes2003.pdf Establishing Safety Criteria for Artificial Neural Networks Zeshan Kurd, Tim Kelly, 2003 Abstract - Artificial neural networks are employed in many areas of industry such as medicine and defence. There are many techniques that aim to improve the performance of neural networks for safety-critical systems. However, there is a complete absence of analytical certification methods for neural network paradigms. Consequently, their role in safety-critical applications, if any, is typically restricted to advisory systems. It is therefore desirable to enable neural networks for highly-dependable roles. This paper defines the safety criteria which, if enforced, would contribute to justifying the safety of neural networks. The criteria are a set of safety requirements for the behaviour of neural networks. The paper also highlights the challenge of maintaining performance in terms of adaptability and generalisation whilst providing acceptable safety arguments. One of the main tools for determining requirements is the use of a safety case to encapsulate all safety arguments for the software. A safety case is defined in Defence Standard 00-55 [8] as: “The software safety case shall present a well-organised and reasoned justification based on objective evidence, that the software does or will satisfy the safety aspects of the Statement of Technical Requirements and the Software Requirements specification.” [Image - Preliminary Safety Criteria for Artificial Neural Networks] &&&&& https://www-users.cs.york.ac.uk/tpk/eunite03.pdf Safety Criteria and Safety Lifecycle for Artificial Neural Networks Zeshan Kurd, Tim Kelly and Jim Austin, 31Jan2016 ABSTRACT: There are many performance based techniques that aim to improve the safety of neural networks for safety critical applications. However, many of these techniques provide inadequate forms of safety arguments required for safety assurance.
As a result, neural networks are typically restricted to advisory roles in safety-related applications. Neural networks are appealing to use given their ability to operate in unpredictable and changing environments. Therefore, it is desirable to certify them for highly-dependable roles in safety critical systems. This paper outlines the safety criteria which, if enforced, would contribute to justifying the safety of neural networks. The criteria are a set of safety requirements for the behaviour of neural networks. A potential neural network model is also outlined and is based upon representing knowledge in symbolic form. The paper also presents a safety lifecycle for artificial neural networks. This lifecycle focuses on managing behaviour represented by neural networks and contributes to providing acceptable forms of safety assurance. The software safety lifecycle specifies where certain safety processes should be performed throughout development of software systems. Within the software context, a hazard is a software level condition that could give rise to a system level hazard. The following is an outline of some of the major processes performed during the software safety lifecycle: • Hazard Identification: a major activity at the start of the software lifecycle. This requires an element of in-depth knowledge about the system and inquisitively explores possible hazards. This may require consultation of a checklist of known hazards specific to the type of application (possibly from an initial hazard list). • Functional Hazard Analysis (FHA): analyses the risk, or the severity and probability, of potential accidents for each identified hazard. This is performed during the specification and design stages. • Preliminary System Safety Analysis (PSSA): The purpose of this phase is twofold: to ensure that the proposed design will adhere to safety requirements, and to refine the safety requirements and help guide the design process.
• System Safety Analysis (SSA): This process is performed at implementation, testing and other stages of development. Its main purpose is to gain evidence from these development stages for assurance that safety requirements have been achieved. • Safety Case: This final phase delivers a comprehensible and defensible argument that the software is acceptably safe to use in a given context. This is presented with the delivery and commissioning of the final software system. The intentions of these safety processes are well established. They can be found in current safety standards such as ARP 4761 [17]. Image - Safety Lifecycle for Hybrid Neural Networks Preliminary hazard identification (PHI) &&&&& Memristor stability ******************** Safety - NASA failures, Apollo 1&13, Challenger 2004 film series "From the Earth to the Moon: Apollo One" Home Box Office (HBO) presents a Clavius Base/Imagine Entertainment Production, Disk 1, Part 2, 98853, 720 minutes total series 2004 film series "From the Earth to the Moon: We interrupt this program (Apollo 13)" Home Box Office (HBO) presents a Clavius Base/Imagine Entertainment Production, Disk 3, Part 8, 98853, 720 minutes total series +-----+ Apollo 1 fire killed crew http://history.nasa.gov/Apollo204/ Apollo-1 (204) Crew: Virgil "Gus" Ivan Grissom, Lieutenant Colonel, USAF Edward Higgins White, II, Lieutenant Colonel, USAF Roger Bruce Chaffee, Lieutenant Commander, USN On January 27, 1967, tragedy struck the Apollo program when a flash fire occurred in command module 012 during a launch pad test of the Apollo/Saturn space vehicle being prepared for the first piloted flight, the AS-204 mission. Three astronauts, Lt. Col. Virgil I. Grissom, a veteran of Mercury and Gemini missions; Lt. Col. Edward H. White, the astronaut who had performed the first United States extra-vehicular activity during the Gemini program; and Roger B. Chaffee, an astronaut preparing for his first space flight, died in this tragic accident.
&&&&& http://history.nasa.gov/as204_senate_956.pdf Senate report Preface ... No single person bears all of the responsibility for the Apollo 204 accident. It happened because many people made the mistake of failing to recognize a hazardous situation. ... Introduction A seven-member board, under the direction of the NASA Langley Research Center Director, Dr. Floyd L. Thompson, conducted a comprehensive investigation to pinpoint the cause of the fire. The final report, completed in April 1967, was subsequently submitted to the NASA Administrator. The report presented the results of the investigation and made specific recommendations that led to major design and engineering modifications, and revisions to test planning, test discipline, manufacturing processes and procedures, and quality control. With these changes, the overall safety of the command and service module and the lunar module was increased substantially. The AS-204 mission was redesignated Apollo 1 in honor of the crew. Senate report (25 pages only ?!?!?!?) ... The fire lasted only about 25.5 seconds - less than one-half minute - before consuming all of the oxygen in the command module ... Failure to identify test as hazardous Spacecraft hatch Ground safety procedures Operational test procedures Communications (quick verbal decisions, not circulated for feedback -> CI out of the loop) Control of combustible material Engineering, workmanship, and quality control deficiencies &&&&& Video "From the Earth to the Moon" Frank Borman's comments Tom Hanks narration ?North American? corporate dissent +-----+ Apollo 13 rescue https://www.nasa.gov/mission_pages/apollo/missions/apollo13.html#.V7yLFR63TOo At 5 1/2 minutes after liftoff, John Swigert, Fred Haise and James Lovell felt a little vibration. Then the center engine of the S-II stage shut down two minutes early.
This caused the remaining four engines to burn 34 seconds longer than planned, and the S-IVB third stage had to burn nine seconds longer to put Apollo 13 in orbit. The No. 2 oxygen tank, serial number 10024X-TA0009, had been previously installed in the service module of Apollo 10, but was removed for modification and damaged in the process. The tank was fixed, tested at the factory, installed in the Apollo 13 service module and tested again during the Countdown Demonstration Test at NASA's Kennedy Space Center beginning March 16, 1970. The tanks normally are emptied to about half full. No. 1 behaved all right, but No. 2 dropped to only 92 percent of capacity. Gaseous oxygen at 80 pounds per square inch was applied through the vent line to expel the liquid oxygen, but to no avail. An interim discrepancy report was written, and on March 27, two weeks before launch, detanking operations resumed. No. 1 again emptied normally, but No. 2 did not. After a conference with contractor and NASA personnel, the test director decided to "boil off" the remaining oxygen in No. 2 by using the electrical heater within the tank. The technique worked, but it took eight hours of 65-volt DC power from the ground support equipment to dissipate the oxygen. Due to an oversight in replacing an underrated component during a design modification, this turned out to severely damage the internal heating elements of the tank. After an intensive investigation, the Apollo 13 Accident Review Board identified the cause of the explosion. In 1965, the CM had undergone many improvements that included raising the permissible voltage to the heaters in the oxygen tanks from 28 to 65 volts DC. Unfortunately, the thermostatic switches on these heaters weren't modified to suit the change. During one final test on the launch pad, the heaters were on for a long period of time. 
This subjected the wiring in the vicinity of the heaters to very high temperatures (1000 F), which have been subsequently shown to severely degrade teflon insulation. The thermostatic switches started to open while powered by 65 volts DC and were probably welded shut. Furthermore, other warning signs during testing went unheeded and the tank, damaged from eight hours of overheating, was a potential bomb the next time it was filled with oxygen. That bomb exploded on April 13, 1970 - 200,000 miles from Earth. Video "Apollo 13" +-----+ Challenger accident Image - Challenger O-ring : http://pics-about-space.com/nasa-challenger-o-ring-failure?p=1 "NASA Challenger O-ring Failure" Challenger O-ring.jpg Image - Challenger rockets out of round : http://www.onlineethics.org/Resources/thiokolshuttle/shuttle_pre.aspx Roger M. Boisjoly 15May2006 "Ethical Decisions - Morton Thiokol and the Space Shuttle Challenger Disaster - Index" The Online Ethics Center (OEC) is a resource maintained by the Center for Engineering Ethics and Society (CEES) at the National Academy of Engineering (NAE). Boisjoly 15May2006 Ethical Decisions - Morton Thiokol and the Space Shuttle Challenger Disaster.jpg Added05/15/2006 Author(s) &&&&& http://www.space.com/31732-space-shuttle-challenger-disaster-explained-infographic.html The Space Shuttle Challenger Disaster: What Happened? &&&&& https://en.wikipedia.org/wiki/Space_Shuttle_Challenger_disaster Space Shuttle Challenger disaster The Space Shuttle Challenger disaster occurred on January 28, 1986, when the NASA Space Shuttle orbiter Challenger (OV-099) (mission STS-51-L) broke apart 73 seconds into its flight, leading to the deaths of its seven crew members &&&&& http://er.jsc.nasa.gov/seh/explode.html (The following information is presented in Chapters III and IV of "Report of the Presidential Commission on the Space Shuttle Challenger Accident," U.S. Government Printing Office : 1986 0 -157-336.) 
The Cause of the Accident The consensus of the Commission and participating investigative agencies is that the loss of the Space Shuttle Challenger was caused by a failure in the joint between the two lower segments of the right Solid Rocket Motor. The specific failure was the destruction of the seals that are intended to prevent hot gases from leaking through the joint during the propellant burn of the rocket motor. The evidence assembled by the Commission indicates that no other element of the Space Shuttle system contributed to this failure. Findings : 5. Launch site records show that the right Solid Rocket Motor segments were assembled using approved procedures. However, significant out-of-round conditions existed between the two segments joined at the right Solid Rocket Motor aft field joint (the joint that failed). a. While the assembly conditions had the potential of generating debris or damage that could cause O-ring seal failure, these were not considered factors in this accident. b. The diameters of the two Solid Rocket Motor segments had grown as a result of prior use. c. The growth resulted in a condition at time of launch wherein the maximum gap between the tang and clevis in the region of the joint's O-rings was no more than .008 inches and the average gap would have been .004 inches. d. With a tang-to-clevis gap of .004 inches, the O-ring in the joint would be compressed to the extent that it pressed against all three walls of the O-ring retaining channel. e. The lack of roundness of the segments was such that the smallest tang-to-clevis clearance occurred at the initiation of the assembly operation at positions of 120 degrees and 300 degrees around the circumference of the aft field joint. It is uncertain if this tight condition and the resultant greater compression of the O-rings at these points persisted to the time of launch. 8.
Experimental evidence indicates that due to several effects associated with the Solid Rocket Booster's ignition and combustion pressures and associated vehicle motions, the gap between the tang and the clevis will open as much as .017 and .029 inches at the secondary and primary O-rings, respectively. a. This opening begins upon ignition, reaches its maximum rate of opening at about 200-300 milliseconds, and is essentially complete at 600 milliseconds when the Solid Rocket Booster reaches its operating pressure. 9. O-ring resiliency is directly related to its temperature. a. A warm O-ring that has been compressed will return to its original shape much quicker than will a cold O-ring when compression is relieved. Thus, a warm O-ring will follow the opening of the tang-to-clevis gap. A cold O-ring may not. b. A compressed O-ring at 75 degrees Fahrenheit is five times more responsive in returning to its uncompressed shape than a cold O-ring at 30 degrees Fahrenheit. c. As a result it is probable that the O-rings in the right solid booster aft field joint were not following the opening of the gap between the tang and clevis at time of ignition. 10. Experiments indicate that the primary mechanism that actuates O-ring sealing is the application of gas pressure to the upstream (high-pressure) side of the O-ring as it sits in its groove or channel. a. For this pressure actuation to work most effectively, a space between the O-ring and its upstream channel wall should exist during pressurization. b. A tang-to-clevis gap of .004 inches, as probably existed in the failed joint, would have initially compressed the O-ring to the degree that no clearance existed between the O-ring and its upstream channel wall and the other two surfaces of the channel. c. At the cold launch temperature experienced, the O-ring would be very slow in returning to its normal rounded shape. It would not follow the opening of the tang-to-clevis gap.
It would remain in its compressed position in the O-ring channel and not provide a space between itself and the upstream channel wall. Thus, it is probable the O-ring would not be pressure actuated to seal the gap in time to preclude joint failure due to blow-by and erosion from hot combustion gases. 11. The sealing characteristics of the Solid Rocket Booster O-rings are enhanced by timely application of motor pressure. a. Ideally, motor pressure should be applied to actuate the O-ring and seal the joint prior to significant opening of the tang-to-clevis gap (100 to 200 milliseconds after motor ignition). b. Experimental evidence indicates that temperature, humidity and other variables in the putty compound used to seal the joint can delay pressure application to the joint by 500 milliseconds or more. c. This delay in pressure could be a factor in initial joint failure. !!!!!!!!!!!! 12. Of 21 launches with ambient temperatures of 61 degrees Fahrenheit or greater, only four showed signs of O-ring thermal distress; i.e., erosion or blow-by and soot. Each of the launches below 61 degrees Fahrenheit resulted in one or more O-rings showing signs of thermal distress. a. Of these improper joint sealing actions, one-half occurred in the aft field joints, 20 percent in the center field joints, and 30 percent in the upper field joints. The division between left and right Solid Rocket Boosters was roughly equal. b. Each instance of thermal O-ring distress was accompanied by a leak path in the insulating putty. The leak path connects the rocket's combustion chamber with the O-ring region of the tang and clevis. Joints that actuated without incident may also have had these leak paths. !!!!!!!!!!!!!!! 13. There is a possibility that there was water in the clevis of the STS 51-L joints since water was found in the STS-9 joints during a destack operation after exposure to less rainfall than STS 51-L.
At time of launch, it was cold enough that water present in the joint would freeze. Tests show that ice in the joint can inhibit proper secondary seal performance. 14. A series of puffs of smoke were observed emanating from the 51-L aft field joint area of the right Solid Rocket Booster between 0.678 and 2.500 seconds after ignition of the Shuttle Solid Rocket Motors. 15. This smoke from the aft field joint at Shuttle lift off was the first sign of the failure of the Solid Rocket Booster O-ring seals on STS 51-L. 16. The leak was again clearly evident as a flame at approximately 58 seconds into the flight. It is possible that the leak was continuous but unobservable or non-existent in portions of the intervening period. It is possible in either case that thrust vectoring and normal vehicle response to wind shear as well as planned maneuvers reinitiated or magnified the leakage from a degraded seal in the period preceding the observed flames. The estimated position of the flame, centered at a point 307 degrees around the circumference of the aft field joint, was confirmed by the recovery of two fragments of the right Solid Rocket Booster. a. A small leak could have been present that may have grown to breach the joint in flame at a time on the order of 58 to 60 seconds after lift off. b. Alternatively, the O-ring gap could have been resealed by deposition of a fragile buildup of aluminum oxide and other combustion debris. This resealed section of the joint could have been disturbed by thrust vectoring, Space Shuttle motion and flight loads induced by changing winds aloft. c. The winds aloft caused control actions in the time interval of 32 seconds to 62 seconds into the flight that were typical of the largest values experienced on previous missions. [Video - Challenger explosion] [Image - Challenger rocket booster seals] +-----+ Howell - questions Is CI really of help in these kinds of disasters? - perhaps just to extract information, but beyond that?
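On the "extract information" point: Finding 12's launch statistics are exactly the kind of pattern a simple analysis can pull out of flight data. A minimal Python tabulation, using only the counts reported in Finding 12 (the number of cold launches is not stated there, so no count is invented for them):

```python
# Tabulating the O-ring thermal-distress statistics from Finding 12.
# Warm-launch counts (4 of 21 launches at >= 61 F showed distress) come
# from the finding itself; it states every launch below 61 F showed
# distress, so only that 100% rate is echoed, not a launch count.

warm_total, warm_distressed = 21, 4          # ambient temperature >= 61 F

warm_rate = warm_distressed / warm_total
print(f"launches >= 61 F with O-ring distress: {warm_rate:.0%}")       # 19%
print("launches <  61 F with O-ring distress: 100% (per Finding 12)")

# Finding 12a: distribution of improper sealing actions by field joint.
joint_shares = {"aft field": 0.50, "center field": 0.20, "upper field": 0.30}
assert abs(sum(joint_shares.values()) - 1.0) < 1e-9  # shares total 100%
```

Even this trivial contrast (roughly 19% distress on warm launches versus 100% on cold ones) surfaces the temperature dependence that the launch-decision process failed to act on.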
ACM example - bringing in outside examples perhaps (NASA, air force, ESA, Russian, etc). Also - "search for gaps, anomalies". Would it really be of help for HAZOPS? corrective measures (more likely useful here - especially for review boards)? Multi-physics modelling-design iterations now very important in some areas (eg antenna design with Evolutionary computation) - this is for [completely-specified, understood] systems Diversity of opinions and approaches is very important, especially at the start! "Veto-like" capabilities on safety - i.e. all must be comfortable, or at least express specific concerns ******************** Safety - Corporate verbal, emails & reports While sound analysis doesn't seem to be of current interest for driverless cars, it has always been a target of CI research, along with linguistics and related themes Voice dictation software - step change in quality (more to come?), real-time, walk-down-the-street wearable systems : - multi-lingual translation of verbal conversation [voice dictation, semantic-linguistic changes, voice generation] between SEVERAL languages (not just two - eg an international meeting without extremely expensive real-time translation experts), - documentation, speaker fingerprint - use along with other data [types, sources] Information is often [plentiful (Big Data), heterogeneous, dispersed], but accessing it and using it is problematic Data [cull, organize, analysis, report] according to [objectives, needs] Computer analysis and report composition, distribution, decisions Next plausible sentence Ghost writer Safety implications - especially for "suppressed viewpoints" +-----+ Howell's SPINE reports Confabulation theory Confabulation makes for an extremely interesting and important contrast to Bayes theorem in statistics! [Hecht-Nielsen p76-77] In my own clumsy way of putting it, Confabulation maximizes the expected truth of the inputs, whereas Bayes theorem maximizes the expected value of the outcome.
Perhaps in an ideal sense, the two approaches would be mathematically equivalent in a roundabout fashion, but in practice there is a BIG difference. Worse, it appears that much of the success of applying Bayes theorem may be due to simplifications (“naïve Bayes formula”) that turn the Bayes approach into the “cogency maximization” of Confabulation – in other words, it isn't Bayes theorem other than in a pragmatic slang form of mathematics? Furthermore, Bayes theorem is not well adapted to explaining biological mechanisms, according to Hecht-Nielsen. This is critical for anyone with a neuroscience, biology, or psychology perspective! I.6 A brief description of the Confabulation system for “Plausible Next Sentence” The “builders” of Confabulation Theory built an exercise around using two sentences to generate a “third plausible sentence”. The Confabulation system itself was fed ?120 million sentences and 70 million sentence pairs? from quality newspapers, magazines etc. These were “serious sentences and sentence pairs” - not jokes, plays on words or other sources that may intentionally obscure meaning. Spelling, grammar, and syntax were of good quality in the information used as the basis for their system. Several tests of the “next plausible sentence” generation by confabulation theory are provided in the book [Hecht-Nielsen 2006?], and it is from those examples that the current exercise was drawn. Note that these test sentence examples were taken randomly from quality newspapers, magazines, and similar sources. They were NOT part of the training examples for the Confabulation system. So the latter had to “invent” new responses to new questions, so to speak. Those responses also required correct spelling, grammar, and syntax, which is an outcome (side effect?) of confabulation theory.
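The cogency-versus-Bayes contrast above can be made concrete with a toy discrete example: cogency maximization picks the conclusion c that maximizes p(inputs | c), while a Bayesian posterior picks the c that maximizes p(c | inputs), proportional to p(inputs | c) p(c). The two disagree exactly when the priors differ enough. All numbers below are invented for illustration; they are not from Hecht-Nielsen:

```python
# Toy contrast: Confabulation-style cogency maximization vs a Bayesian
# posterior. All probabilities here are invented for illustration only.

# Two candidate conclusions with different prior probabilities p(c).
prior = {"c1": 0.9, "c2": 0.1}

# Likelihood of the observed inputs under each conclusion, p(inputs | c).
likelihood = {"c1": 0.2, "c2": 0.8}

# Cogency maximization: argmax_c p(inputs | c), ignoring the prior.
cogency_choice = max(likelihood, key=likelihood.get)

# Bayes: argmax_c p(c | inputs), proportional to p(inputs | c) * p(c).
posterior = {c: likelihood[c] * prior[c] for c in prior}
bayes_choice = max(posterior, key=posterior.get)

print(cogency_choice)  # c2  (highest likelihood)
print(bayes_choice)    # c1  (prior dominates: 0.2*0.9 > 0.8*0.1)
```

With a uniform prior the two choices always coincide, which is one way of seeing how "naïve" simplifications that flatten or drop the prior can collapse Bayes into cogency maximization in practice.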
Image - Confabulation Theory, thalamocortical module Image - Confabulation Theory, knowledge base Image - Confabulation Theory, symbols connected via knowledge links Image - Confabulation Theory, Question 1 Image - Confabulation Theory, Question 2 Image - Confabulation Theory, Question 3 Image - Confabulation Theory, Question 4 Image - Confabulation Theory, Question 5 Image - Confabulation Theory, Question 6 Image - Confabulation Theory, Parting inspiration Confabulation answers : are ALL respondent 4 +-----+ GhostWriter Confabulation Theory CD Disk 2, file VTS_01_4.VOB 12:18 to 14:33 Video - Confabulation Theory, GhostWriter ******************** Safety - Hippocampal prosthesis Image - Cochlear implant http://www.extremetech.com/electronics/139875-mit-devises-biobattery-that-could-allow-the-human-ear-to-power-its-own-hearing-aid John Hewitt 08Nov2012 "MIT creates biobattery that could allow the human ear to power its own hearing aid" Hewitt 08Nov2012 Cochlear implant.jpg Image - Retinal prosthesis : http://bme.usc.edu/directory/faculty/core-faculty/theodore-w-berger/ Wentai Liu, Mark S. Humayun 2002 "Retinal prostheis project" Brown University, Intraocular Retinal Prosthesis Group, http://biomed.brown.edu/Courses/BI108/2006-108websites/group03retinalimplants/interoccular.htm Liu, Humayun 2002 Retinal prostheis project.jpeg http://spectrum.ieee.org/the-human-os/biomedical/bionics/new-startup-aims-to-commercialize-a-brain-prosthetic-to-improve-memory New Startup Aims to Commercialize a Brain Prosthetic to Improve Memory By Eliza Strickland Posted 16 Aug 2016 | 18:00 GMT Image: Wikimedia Commons The hippocampus is a key brain region involved in memory formation and storage Image: Ted Berger An implant could help someone whose hippocampus doesn't properly turn information into memories. A startup named Kernel came out of stealth mode yesterday and revealed its ambitious mission: to develop a ready-for-the-clinic brain prosthetic to help people with memory problems. 
The broad target market includes people with Alzheimer’s and other forms of dementia, as well as those who have suffered a stroke or traumatic brain injury. If the company succeeds, surgeons will one day implant Kernel’s tiny device in their patients’ brains—specifically in the brain region called the hippocampus. There, the device’s electrodes will electrically stimulate certain neurons to help them do their job—turning incoming information about the world into long-term memories. +-----+ Howell questions Anti-Engineering & anti-Murphy's law? enddoc