#] #] *********************
#] "$d_web"'AI apps/0_AI applications notes.txt' - ???
# www.BillHowell.ca 28Dec2022 initial
# view in text editor, using constant-width font (eg courier), tabWidth = 3

48************************************************48
#24************************24
# Table of Contents, generate with :
# $ grep "^#]" "$d_web"'AI apps/0_AI applications notes.txt' | sed "s/^#\]/ /"
#
 ********************* "$d_web"'AI apps/0_AI applications notes.txt' - ???
 15Mar2023 [KEEP] Exploring the challenges of ChatGPT in education
 15Mar2023 MidJourney art - fantastic [quality, composability?]
 28Dec2022 search 'Google's equivalent to openAI chat'
 28Dec2022 search 'Google's equivalent to openAI DALL-E'

#24************************24
# Setup, ToDos,

Here's my webSite directory for two of the current "AI" ("CI") applications many people are talking about :
   http://www.billhowell.ca/AI%20apps/
   http://www.billhowell.ca/AI%20apps/DALL-E/0_DALL-E%20notes.txt

28Dec2022 At present, I've only used openAI.com :
   openAI.com's [chat or chatGPT, DALL-E]
   Google [search engine? LaMDA, Imagen]

My stuff :
   http://www.billhowell.ca/AI%20apps/DALL-E/221218%20DALL-E%20Mike%20and%20Sarah%20in%20%20kyak/
   http://www.billhowell.ca/AI%20apps/DALL-E/221224%20Mike%20and%20Sarah%20-%20legend%20of%20Quiniche/
   http://www.billhowell.ca/AI%20apps/DALL-E/221219%20Giche%20and%20Catherine%20%5bmagic,%20alchemy%5d/

other AI apps (besides [chatGPT, DALL-E]) :
   Google Bard
   MidJourney : https://docs.midjourney.com/

#24************************24

#08********08
#] ??Mar2023

#08********08
#] ??Mar2023

#08********08
#] ??Mar2023

#08********08
#] ??Mar2023

#08********08
#] ??Mar2023

#08********08
#] ??Mar2023

#08********08
#] ??Mar2023

#08********08
#] ??Mar2023

#08********08
#] 30Mar2023 https://arxiv.org/abs/2303.12093
[Submitted on 21 Mar 2023 (v1), last revised 25 Mar 2023 (this version, v2)]
ChatGPT for Programming Numerical Methods
Ali Kashefi, Tapan Mukerji

ChatGPT is a large language model recently released by the OpenAI company. In this technical report, we explore for the first time the capability of ChatGPT for programming numerical algorithms. Specifically, we examine the capability of ChatGPT for generating codes for numerical algorithms in different programming languages, for debugging and improving written codes by users, for completing missed parts of numerical codes, rewriting available codes in other programming languages, and for parallelizing serial codes. Additionally, we assess if ChatGPT can recognize if given codes are written by humans or machines. To reach this goal, we consider a variety of mathematical problems such as the Poisson equation, the diffusion equation, the incompressible Navier-Stokes equations, compressible inviscid flow, eigenvalue problems, solving linear systems of equations, storing sparse matrices, etc. Furthermore, we exemplify scientific machine learning such as physics-informed neural networks and convolutional neural networks with applications to computational physics. Through these examples, we investigate the successes, failures, and challenges of ChatGPT. Examples of failures are producing singular matrices, operations on arrays with incompatible sizes, programming interruption for relatively long codes, etc. Our outcomes suggest that ChatGPT can successfully program numerical algorithms in different programming languages, but certain limitations and challenges exist that require further improvement of this machine learning model.

Kashefi, Mukerji 21Mar2023 ChatGPT for Programming Numerical Methods
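To make the paper's task concrete, here is a minimal sketch (my own illustration, not code from the Kashefi & Mukerji report) of the kind of problem ChatGPT is prompted to solve : the 1D Poisson equation -u''(x) = f(x) on [0,1] with u(0) = u(1) = 0, discretized with second-order central finite differences. The grid size and the choice of f(x) are assumptions for the example.

# Python/NumPy sketch (illustrative only, not from the paper)
import numpy as np

def solve_poisson_1d(f, n=50):
    """Approximate -u'' = f on [0,1] with u(0) = u(1) = 0 at n interior points."""
    x = np.linspace(0.0, 1.0, n + 2)      # grid including the two boundary points
    h = x[1] - x[0]
    # tridiagonal finite-difference matrix for -u'' at the interior points
    A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
    b = f(x[1:-1])
    u = np.zeros(n + 2)                   # boundary values stay at 0
    u[1:-1] = np.linalg.solve(A, b)
    return x, u

# check against the exact solution u(x) = sin(pi x) for f(x) = pi^2 sin(pi x)
x, u = solve_poisson_1d(lambda x: np.pi**2 * np.sin(np.pi * x))
print("max error:", np.abs(u - np.sin(np.pi * x)).max())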
#08********08
#] 15Mar2023 [KEEP] Exploring the challenges of ChatGPT in education
Date: March 30, 2023 (Thursday)
Time: 12:30 pm - 1:45 pm HKT (GMT+8)
Speakers: Prof. Herman Cappelen (HKU), Prof. Irwin King (CUHK), Dr. Sean McMinn (HKUST), Prof. Eric Tsui (PolyU), Dr. Florin Constantin Serban (HKBU)
Mar 30, 2023 12:30 PM in Hong Kong SAR
Webinar ID : 956 8718 3410
Please click this URL to join.
https://cuhk.zoom.us/w/95687183410?tk=s1G_rj8Chy9uj2goTgT6Qi_AS0BCdfwZFFD6PvfX1JE.DQMAAAAWR2aMMhZnYmVycWJlcVQxeVpwNDBvYjZ4OHB3AAAAAAAAAAAAAAAAAAAAAAAAAAAAAA&pwd=d2t4M21OaEhzMVpLalVOSW56ZWZKdz09&uuid=WN_vfJPF15dQEyg_nmVgSvDJw

#08********08
#] 15Mar2023 MidJourney art - fantastic [quality, composability?]
used by KEEP (Irwin King) email illustrations
https://docs.midjourney.com/
very impressive

#08********08
#] 28Dec2022 search 'Google's equivalent to openAI chat'

+-----+
https://www.businessinsider.com/openai-chatgpt-not-likely-to-replace-google-says-morgan-stanley-2022-12?op=1
Could ChatGPT challenge Google? Morgan Stanley says the search giant has nothing to worry about.
Emilia David  Dec 15, 2022, 2:19 PM
Analysis
When ChatGPT went viral, people began using it as an alternative to Google Searches.
Investors started seeing a world where generative AI would disrupt Google by taking some search traffic away.
But Google has a first-mover advantage and also heavily invests in AI — challenging it will be difficult.

Google spent $100 billion in the past three years on AI and machine learning research and development. The company's R&D spending is expected to grow 13% annually until 2025. It's building natural language models like LaMDA and invested in a machine learning program called BERT that helps machines better understand the context of conversations. It also launched a project that teaches computer code to write, fix and update itself, which could reduce the number of engineers Google will hire.

It's even developing projects that are a lot like ChatGPT. For example, DeepMind, a Google-owned AI research lab, announced a new app called Dramatron that generates film scripts.

+-----+
https://www.cnbc.com/2022/12/15/google-vs-chatgpt-what-happened-when-i-swapped-services-for-a-day.html
Google vs. ChatGPT: Here's what happened when I swapped services for a day
Published Thu, Dec 15 2022, 2:39 PM EST
Sofia Pitt

Key Points
ChatGPT has gone viral since OpenAI released the text-based artificial intelligence chatbot tool in November.
Google has been bragging about its AI expertise for years, and some employees are wondering if they missed an opportunity, CNBC reported.
Analysts are also wondering if AI chatbots could someday threaten Google's dominance.
So I decided to give it a try.

Google is "building similar natural language models such as LaMDA"

#08********08
#] 28Dec2022 search 'Google's equivalent to openAI DALL-E'

+-----+
https://www.analyticsinsight.net/googles-imagen-vs-openais-dall-e-2-who-makes-the-best-images/
Google's Imagen vs OpenAI's DALL.E-2: Who Makes the Best Images?
Satavisa Pati  May 27, 2022  3 mins read
Comparing Google's Imagen with OpenAI's DALL.E-2 as text-to-image generators.

The AI imagery competition is getting tough. Google this week unveiled a new challenger to OpenAI's vaunted DALL-E 2 text-to-image generator — and took shots at its rival's efforts. Both models convert text prompts into pictures.
But Google's researchers claim their system provides "unprecedented photorealism and deep language understanding."
[figure] Qualitative comparisons between Imagen and DALL-E 2 on DrawBench prompts from the Conflicting category.

+-----+
https://medium.com/augmented-startups/google-imagen-vs-openai-dall-e-2-383a8566beb2
Ritesh Kanjee  623 Followers  May 30, 5 min read
Google Imagen vs OpenAI DALL·E 2

Oh My God! It is NOT a great time for OpenAI right now. It's been just over a month since DALL·E 2 was released and just a few days ago, Google decided to enter the ring with Imagen. In comparison, Imagen is a slap in the face for DALL·E 2, mainly because it outperforms DALL·E 2 in terms of AI image generation precision and quality.

DrawBench - To assess where Imagen stands in comparison to other text-to-image models, they introduced a benchmark called DrawBench. Human critics analyzed the results between VQ-GAN+CLIP, Latent Diffusion Models, Google's Imagen as well as DALL·E 2.

+-----+
https://www.siliconrepublic.com/machines/google-research-imagen-text-to-image-ai-openai-dall-e
Google unveils its competitor to OpenAI's text-to-image model
by Leigh Mc Gowran  24 May 2022
Google Research said its Imagen AI model was preferred in tests over DALL-E 2 in terms of 'sample quality and image-text alignment'.

Google Research has developed a competitor for OpenAI's text-to-image system, with its own AI model that can create artworks using a similar method.

Google's research team said its text-to-image model, Imagen, has an "unprecedented degree of photorealism" and a deep level of language understanding.

Text-to-image AI models are able to understand the relationship between an image and the words used to describe it. Once a description is added, a system can generate images based on how it interprets the text, combining different concepts, attributes and styles. For example, if the description is 'a photo of a dog', the system can create an image that looks like a photograph of a dog. But if this description is altered to 'an oil painting of a dog', the image generated would look more like a painting.

Imagen's team has shared a number of example images that the AI model has created – ranging from a cute corgi in a house made from sushi, to an alien octopus reading a newspaper.

OpenAI created the first version of its text-to-image model called DALL-E last year. But it unveiled an improved model called DALL-E 2 last month, which it said "generates more realistic and accurate images with four times greater resolution".

The AI company explained that the model uses a process called diffusion, "which starts with a pattern of random dots and gradually alters that pattern towards an image when it recognises specific aspects of that image".

In a newly published research paper, the team behind Imagen claims to have made several advances in terms of image generation. It says large frozen language models trained only on text data are "surprisingly very effective text encoders" for text-to-image generation. It also suggests that scaling a pretrained text encoder improves sample quality more than scaling an image diffusion model size.

Google's research team created a benchmark tool to assess and compare different text-to-image models, called DrawBench.

Using DrawBench, Google's team said human raters preferred Imagen over other models such as DALL-E 2 in side-by-side comparisons "both in terms of sample quality and image-text alignment".
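The diffusion description quoted above (start from random noise, then repeatedly nudge it toward an image) can be made a little more concrete with a toy sketch. The step count, the noise schedule, and the stand-in "denoiser" below are illustrative assumptions only; they are not the actual DALL-E 2 or Imagen samplers, which use trained neural networks conditioned on the text prompt.

# Python/NumPy toy sketch of a diffusion-style sampling loop (illustrative only)
import numpy as np

def toy_denoiser(x, t):
    # stand-in for a trained network that estimates the noise present in x at step t;
    # a real text-to-image model would also be conditioned on the text prompt
    target = np.zeros_like(x)        # pretend the "clean image" is all zeros
    return x - target

def sample(shape=(8, 8), steps=50, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.normal(size=shape)       # start from "a pattern of random dots"
    for t in range(steps, 0, -1):
        predicted_noise = toy_denoiser(x, t)
        x = x - (1.0 / steps) * predicted_noise      # remove a fraction of the estimated noise
        if t > 1:
            x = x + 0.05 * rng.normal(size=shape)    # a little fresh noise keeps sampling stochastic
    return x

img = sample()
print("mean |pixel| after denoising:", np.abs(img).mean())   # smaller than for the starting noise

# enddoc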