
Sunday, April 30, 2023

Galactica: A Large Language Model for Science Research

Quite useful idea for sharing organized science data with Large Language Models



Galactica: A Large Language Model for Science

Computer Science > Computation and Language

[Submitted on 16 Nov 2022]

Ross Taylor, Marcin Kardas, Guillem Cucurull, Thomas Scialom, Anthony Hartshorn, Elvis Saravia, Andrew Poulton, Viktor Kerkez, Robert Stojnic

Information overload is a major obstacle to scientific progress. The explosive growth in scientific literature and data has made it ever harder to discover useful insights in a large mass of information. Today scientific knowledge is accessed through search engines, but they are unable to organize scientific knowledge alone. In this paper we introduce Galactica: a large language model that can store, combine and reason about scientific knowledge. We train on a large scientific corpus of papers, reference material, knowledge bases and many other sources. We outperform existing models on a range of scientific tasks. On technical knowledge probes such as LaTeX equations, Galactica outperforms the latest GPT-3 by 68.2% versus 49.0%. Galactica also performs well on reasoning, outperforming Chinchilla on mathematical MMLU by 41.3% to 35.7%, and PaLM 540B on MATH with a score of 20.4% versus 8.8%. It also sets a new state-of-the-art on downstream tasks such as PubMedQA and MedMCQA dev of 77.6% and 52.9%. And despite not being trained on a general corpus, Galactica outperforms BLOOM and OPT-175B on BIG-bench. We believe these results demonstrate the potential for language models as a new interface for science. We open source the model for the benefit of the scientific community.

Subjects: Computation and Language (cs.CL); Machine Learning (stat.ML)

Cite as: arXiv:2211.09085 [cs.CL]

  (or arXiv:2211.09085v1 [cs.CL] for this version)



Submission history

From: Robert Stojnic [view email]

[v1] Wed, 16 Nov 2022 18:06:33 UTC (10,715 KB)

EleutherAI Examined

EleutherAI (co-founded by Connor Leahy) is an open-source organization focused on advancing the state of the art in large-scale AI models, particularly in the field of natural language processing (NLP). It was founded in 2020 by a group of researchers who were previously involved in the GPT-2 and GPT-3 projects at OpenAI. The name "Eleuther" comes from the Greek word for "freedom", reflecting the organization's commitment to promoting open and accessible AI research.

One of the primary goals of EleutherAI is to create large-scale language models that are more accessible and inclusive than those produced by large tech companies. To this end, they have developed a number of open-source tools and resources for training and fine-tuning large-scale language models, including the GPT-Neo series of models. These models are trained using open-source data and made available to the research community free of charge.

In addition to their work on large-scale language models, EleutherAI is also involved in a number of other AI research projects, including computer vision and generative models. The organization is entirely volunteer-based and relies on donations and community support to fund its research activities...  (via GPT)

Artificial Intelligence Still Can't Form Concepts

Artificial Intelligence Still Can't Form Concepts

By Bennie Mols

Commissioned by CACM Staff

Melanie Mitchell.

"If the goal is to create an AI system that has humanlike abstraction abilities, then it does not make sense to have to train it on tens of thousands of examples," Mitchell said. "The essence of abstraction and analogy is few-shot learning."

Machine translation, automatic speech recognition, and automatic text generation demonstrate the enormous progress artificial intelligence (AI) has made in processing human language. On the other hand, AI has made astonishingly little progress in forming concepts and abstractions. That is the research area of Melanie Mitchell, professor of complexity at the Santa Fe Institute and author of the book Artificial Intelligence – A Guide for Thinking Humans.

Mitchell argues forming concepts is absolutely crucial to unlock the full potential of AI. "A concept is a fundamental unit of understanding," Mitchell said during an interview at the 2023 American Association for the Advancement of Science (AAAS) Annual Meeting in Washington, D.C. "Neural networks can look at a picture and tell whether it contains a dog, a cat, or a car, but they do not have a rich understanding of any of those categories.

"Take the concept of a bridge. Humans can extend the notion of a bridge to abstract levels. We can talk about a bridge between people or bridging the gender gap. We can instantly understand what these expressions mean because we have a rich mental model of what a bridge can be."

Mitchell first started working on concepts and abstraction in 1984, as a Ph.D. student of Douglas Hofstadter. Inspired by Hofstadter's famous book Gödel, Escher, Bach: An Eternal Golden Braid, Mitchell decided to contact him, and that was the start of their cooperation. Together they created an AI system called Copycat, which can solve simple letter-string analogy problems. For example, given that the string ABC maps to AABBCC, what does PQR map to? Copycat could find the answer PPQQRR by using a mental model that included symbolic, sub-symbolic, and probabilistic elements.
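Copycat's actual architecture mixed symbolic, sub-symbolic, and probabilistic machinery; the toy Python sketch below only hard-codes two candidate transformation rules (letter doubling and successorship) to show the shape of the analogy problem. The function and rule set are illustrative assumptions, not part of Copycat itself.

```python
def infer_and_apply(src, tgt, probe):
    """Infer which hard-coded rule turns src into tgt, then apply it to probe."""
    # Rule 1: double each letter (ABC -> AABBCC).
    if tgt == "".join(c * 2 for c in src):
        return "".join(c * 2 for c in probe)
    # Rule 2: replace the last letter with its alphabetic successor (ABC -> ABD).
    if tgt == src[:-1] + chr(ord(src[-1]) + 1):
        return probe[:-1] + chr(ord(probe[-1]) + 1)
    return None  # no known rule matches

print(infer_and_apply("ABC", "AABBCC", "PQR"))  # PPQQRR
print(infer_and_apply("ABC", "ABD", "PQR"))     # PQS
```

The hard part Copycat addressed, of course, is that real analogy-making cannot enumerate rules in advance; the rules must emerge from interacting perceptual processes.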

Copycat had huge limitations: its architecture was ad hoc, it was unclear how general the architecture was, and it was unclear how to form new concepts beyond what was given in its prior conceptual repertoire. In the roughly three decades that have passed since Copycat was released, there have been various efforts to create AI systems that form abstractions and concepts, but the problem fundamentally is still unsolved.

In recent years, some scientists have shown that deep learning systems can perform better than the average human (see for example https://arxiv.org/abs/2012.01944) on Raven's Progressive Matrices, a widely used non-verbal test of general human intelligence and abstract reasoning (for example, given a set of visual geometric designs, the subject has to identify a missing piece at the end). However, Mitchell found that deep learning systems did not accomplish this by learning humanlike concepts, but by finding shortcuts. Furthermore, they needed a large corpus of training examples.

"If the goal is to create an AI system that has humanlike abstraction abilities, then it does not make sense to have to train it on tens of thousands of examples," Mitchell said. "The essence of abstraction and analogy is few-shot learning."

What about large language models, like GPT? Don't they have the capability to form humanlike concepts and abstractions? "Interestingly, they can make analogies to some extent," said Mitchell. "I have tried some letter-string problems in GPT-3, and in some cases it could solve them. It learned, for example, the concept of successorship. Not perfect, not robust, but I found it still surprising that it can do this. Therefore, I don't agree that these systems are only 'stochastic parrots', as some scientists have called them. I have seen evidence of GPT building simple internal models of situations."

Robots Under the Ice

 New dimensions of exploration.

Robot provides unprecedented views below Antarctic ice shelf

By James Dean, Cornell Chronicle, March 2, 2023

High in a narrow, seawater-filled crevasse in the base of Antarctica’s largest ice shelf, cameras on the remotely operated Icefin underwater vehicle relayed a sudden change in scenery.

Walls of smooth, cloudy meteoric ice abruptly turned green and rougher in texture, transitioning to salty marine ice.

Nearly 1,900 feet above, near where the surface of the Ross Ice Shelf meets Kamb Ice Stream, a U.S.-New Zealand research team recognized the shift as evidence of “ice pumping” – a process never before directly observed in an ice shelf crevasse, important to its stability.

Britney Schmidt’s Icefin team

Credit: Icefin/NASA PSTAR RISE UP/Schmidt

Members of Britney Schmidt’s Icefin team after completing their first mission exploring conditions beneath Antarctica’s Ross Ice Shelf, near where it meets Kamb Ice Stream, in December 2019.

“We were looking at ice that had just melted less than 100 feet below, flowed up into the crevasse and then refrozen,” said Justin Lawrence, visiting scholar at the Cornell Center for Astrophysics and Planetary Science in the College of Arts and Sciences (A&S). “And then it just got weirder as we went higher up.”

The Icefin robot’s unprecedented look inside a crevasse, and observations revealing more than a century of geological processes beneath the ice shelf, are detailed in “Crevasse Refreezing and Signatures of Retreat Observed at Kamb Ice Stream Grounding Zone,” published March 2 in Nature Geoscience.

The paper reports results from a 2019 field campaign to Kamb Ice Stream supported by Antarctica New Zealand and other New Zealand research agencies, led by Christina Hulbe, professor at the University of Otago, and colleagues. Through support from NASA’s Astrobiology Program, a research team led by Britney Schmidt, associate professor of astronomy and earth and atmospheric sciences in A&S and Cornell Engineering, was able to join the expedition and deploy Icefin. Schmidt’s Planetary Habitability and Technology Lab has been developing Icefin for nearly a decade, beginning at the Georgia Institute of Technology.

Combined with recently published investigations of the fast-changing Thwaites Glacier – explored the same season by a second Icefin vehicle – the research is expected to improve models of sea-level rise by providing the first high-resolution views of ice, ocean and sea floor interactions at contrasting glacier systems on the West Antarctic Ice Sheet.

Thwaites, which is exposed to warm ocean currents, is one of the continent’s most unstable glaciers. Kamb Ice Stream, where the ocean is very cold, has been stagnant since the late 1800s. Kamb currently offsets some of the ice loss from western Antarctica, but if it reactivates could increase the region’s contribution to sea-level rise by 12%.

“Antarctica is a complex system and it’s important to understand both ends of the spectrum – systems already undergoing rapid change as well as those quieter systems where future change poses a risk,” Schmidt said. “Observing Kamb and Thwaites together helps us learn more.”

NASA funded Icefin’s development and the Kamb exploration to extend ocean exploration beyond Earth. Marine ice like that found in the crevasse may be an analog for conditions on Jupiter’s icy moon Europa, the target of NASA’s Europa Clipper orbital mission slated for launch in 2024. Later lander missions might one day search directly for microbial life in the ice.

Saturday, April 29, 2023

Exploratory Data Analysis with Pandas Python 2023

Nicely done general beginners' piece on data analysis

by Rob Mulla


122,509 views  Premiered Dec 31, 2021  Medallion Python Data Science Coding Videos

In this video about exploratory data analysis with pandas and python, Kaggle grandmaster Rob Mulla will teach you the basics of how to explore data using python and pandas. Exploratory Data Analysis is a necessary tool for any data scientist. Pandas is a MUST for anyone getting into data science with python. Python is the #1 coding language for data science and has been growing over the years as an essential tool, with Pandas being the main data wrangling module. Kaggle Grandmaster Rob goes over it all in this video. In this video we discuss the basics of how to explore data, including...


00:00 Introduction

01:00 Imports and reading data

03:35 Data Understanding

06:40 Data Preparation

20:57 Feature Understanding

27:35 Feature Relationships

35:30 Asking a Question about the Data

40:00 Final Thoughts
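The chapter list above maps onto a standard pandas workflow. The sketch below walks the same steps on a tiny synthetic DataFrame (the column names are made up for illustration; the video uses a Kaggle dataset instead):

```python
import pandas as pd

# Synthetic stand-in for a real dataset (hypothetical columns).
df = pd.DataFrame({
    "opened": pd.to_datetime(["1999-05-01", "2005-07-15", "1999-05-01", "2012-03-20"]),
    "speed_mph": [70.0, 120.0, 70.0, 95.0],
    "type": ["steel", "steel", "steel", "wood"],
})

# Data understanding: shape, dtypes, summary statistics.
print(df.shape)
print(df.dtypes)
print(df.describe())

# Data preparation: drop exact duplicate rows, derive a feature.
df = df.drop_duplicates().copy()
df["year_opened"] = df["opened"].dt.year

# Feature understanding: distribution of a single column.
print(df["type"].value_counts())

# Feature relationships: correlation between numeric columns.
print(df[["speed_mph", "year_opened"]].corr())
```

In practice each `print` would be a plot (histogram, scatter, heatmap), but the pandas calls are the same.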

Follow me on twitch for live coding streams: https://www.twitch.tv/medallionstallion

AI Poised to Transform Video-Compression Landscape Apple’s WaveOne purchase

AI Poised to Transform Video-Compression Landscape Apple’s WaveOne purchase heralds a new era in smart-streaming of AR and video  ... CRAIG S. SMITH

Apple’s surprise purchase at the end of last month of WaveOne, a California-based startup that develops content-aware AI algorithms for video compression, showcases an important shift in how video signals are streamed to our devices. In the near term, Cupertino’s purchase will likely lead to smart video-compression tools in Apple’s video-creation products and in the development of its much-discussed augmented-reality headset.

However, Apple isn’t alone. Startups in the AI video codec space are likely to prove acquisition targets for other companies trying to keep up.

For decades video compression used mathematical models to reduce the bandwidth required for transmission of analog signals, focusing on the changing portions of a scene from frame to frame. When digital video was introduced in the 1970s, improving video compression became a major research focus, leading to the development of many compression algorithms called codecs, short for “coder-decoder,” that compress and decompress digital media files. These algorithms paved the way for the current dominance of video in the digital age. ... ' 

DALL-E Illustrating the Facts



By Bryant Walker Smith on April 3, 2023 at 8:31 am

A new article, written in 2022 and published in 2023 -- with pictures!

The article asks a leading AI tool for image generation to illustrate the facts of a leading law school case. It introduces machine learning generally, summarizes the seminal case of Palsgraf v. Long Island Railroad, presents images that the tool created based on the facts as the majority and dissent recount them, and then translates this exercise into lessons for how lawyers and the law should think about AI.

A few of its takeaways:

1. Humans, societies, and legal processes are also nondeterministic systems!

2. Our status quo is not perfect. It is also filled with frequently unacknowledged distortions, ambiguities, and uncertainties -- with which AI tools can force a reckoning.

3. AI tools are like funhouse mirrors: They can "exacerbate, mitigate, reinforce, or challenge" existing problems such as invidious bias and invidious discrimination.

4. Discussions of AI often overlook the dangers of overreliance, which are common in many forms of automation.

5. AI tools will become not just the authors of but also the intended audience for many communications.

6. In the future, "debates will be much less about whether systems should be human or machine and much more about whether these systems should be centralized or decentralized: Should there be a single DALL-E or a million?"

More here.

Focus Areas: Architecture and Public Policy; Intermediary Liability; Robotics

Related Projects: Legal Aspects of Autonomous Driving

Related Topics: Automated Driving  .... ' 

Looking at the Future of Big Robotics: Boston Dynamics

Very Good:  

Subject: Watch "Robert Playter: Boston Dynamics CEO on Humanoid and Legged Robotics | Lex Fridman Podcast #374" on YouTube


Very nice interview piece on the future of industrial robotics from Boston Dynamics' view. Now part of Hyundai. We looked at the Aibo dog from Sony. Looking also at AI connections. Check it out!

Friday, April 28, 2023

Map of Evolutionary Tree LLMs

Good resource; gives an interesting indication of how much has been done.

Yann LeCun     VP & Chief AI Scientist at Meta     (Technical)

A survey of LLMs with a practical guide and evolutionary tree.

Number of LLMs from Meta = 7

Number of open source LLMs from Meta = 7

The architecture nomenclature for LLMs is somewhat confusing and unfortunate.

What's called "encoder only" actually has an encoder and a decoder (just not an auto-regressive decoder).

What's called "encoder-decoder" really means "encoder with auto-regressive decoder"

What's called "decoder only" really means "auto-regressive encoder-decoder"
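A rough NumPy sketch of the attention patterns behind these labels may help. This is a simplification under my own assumptions (real models also use cross-attention and padding masks, which are not shown):

```python
import numpy as np

def full_mask(n):
    # Bidirectional attention: every position attends to every position.
    return np.ones((n, n), dtype=bool)

def causal_mask(n):
    # Auto-regressive attention: position i attends only to positions <= i.
    return np.tril(np.ones((n, n), dtype=bool))

n = 4
# "Encoder only" (e.g. BERT): full attention over the input.
print(full_mask(n).astype(int))
# "Decoder only" (e.g. GPT): causal attention over one combined sequence.
print(causal_mask(n).astype(int))
# "Encoder-decoder" (e.g. T5): full attention on the source side,
# causal attention on the target side, plus cross-attention between them.
```

LeCun's point is that all three families contain both an encoding and a decoding stage; the labels really distinguish where (if anywhere) the causal mask is applied.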

https://lnkd.in/eZKhwmuz    -   Automated map appears below: 

We build an evolutionary tree of modern Large Language Models (LLMs) to trace the development of language models in recent years and highlight some of the most well-known models, in the following figure:   ... 

AR Art Takes Over British City

A novel idea. 

AR Art Takes Over British City

Wired, Elissaveta M. Brandon, April 24, 2023

The U.K. city of Sheffield has transformed its rooftops into an augmented reality (AR)-based art display. Launched in February, the "Look Up!" project is an "art trail" comprised of four buildings in the city's center, each coupled to a quick response (QR) code on the sidewalk below. Viewers can use a free smartphone application to scan the codes and follow animated arrows directing their gaze upward, to watch a stick figure made of multicolored balloons, a giant cat, and other whimsical animated characters onscreen. U.K. company Megaverse created the app and platform in partnership with U.S.-based AR developer Niantic, while local firms Universal Everything and Human Studio supplied the artworks. More than 1,500 people had downloaded the app and nearly 2,000 QR codes had been scanned in the week following the trail's launch.  ... 

How Does Remote Work Affect Innovation?

Had not seen the innovating-teams aspect of remote work examined before.

How Does Remote Work Affect Innovation?

by James Heskett in HBSwk.edu

Many companies are still trying to figure out how to manage teams that have limited in-person contact. Remote work will likely lead to new ideas, but what kind? asks James Heskett.

When former Google CEO Eric Schmidt tells how the company’s ad algorithm—the heart of its financial success—was revamped, here’s what he says:

One Friday afternoon in May 2002, (company co-founder) Larry Page was playing around on the Google site, typing in search terms and seeing what sort of results and ads he’d get back. He wasn’t happy with what he saw…. Some of the ads were completely unrelated to the search….

In a normal company, the CEO, seeing a bad product, would call the person in charge of the product. There would be a meeting or two or three…. Instead, he printed out the pages containing the results he didn’t like, highlighted the offending ads, posted them on a bulletin board on the wall of the kitchen by the pool table, and wrote THESE ADS SUCK in big letters across the top. Then he went home….

At 5:05 a.m. the following Monday…. Jeff Dean sent out an email. He and four colleagues…. had seen Larry’s note on the wall…. (Dean’s email) included a detailed analysis of why the problem was occurring, described a solution, included a link to a prototype implementation of the solution the five had coded over the weekend…. And the kicker? Jeff and team weren’t even on the ads team… It was the culture that attracted…. These five engineers…. To the company in the first place.

How would this story play out if Google had relied heavily on remote work at the time? Would Jeff Dean and his colleagues even have been in the office on Friday afternoon?

OK, so they could just as easily have seen a post from Larry Page online if they had still been working after 5 p.m. on a Friday afternoon. Working from their homes and seeing a post that simulated the bulletin board at the office, would five of them have organized themselves, again online? Would they have decided to give up their weekend to come up with a better idea? Would they have agreed to break the routine of Mondays working remotely to share their response in the relative privacy of the office (vs. online)?

Would the five even be sharing the same values and “way we do things around here”? Would the result have been the same?

We can at least hypothesize several notions based on early research regarding remote work. Many talented people love it. Some of the reasons they love it, such as the ability to gain more control over their lives, are not always in the best interests of their employers. Some couldn’t do what they’re doing without the opportunity.

Employers appear to be less enthusiastic about remote work. Many feel that they have to offer it in order to access talent that would not otherwise be approachable. Although employees claim that remote work improves their productivity, mainly by eliminating commute time, the evidence thus far suggests two things: We don’t yet know how to measure productivity changes from remote work and that, even when we learn, the impact may not be very significant.

Many employers are just learning how to manage remote work. Some are doing a terrible job of it, with little preparation and training for middle managers primarily responsible for the success of the process. Also, the impact of remote work on organization culture has yet to be determined.  ... '


Opencog.org — July 15-16, 2020

OpenCog Foundation, SingularityNET and TrueAGI are hosting a 2-day online event aimed at spreading the word about some of the interesting things currently being done with the OpenCog proto-AGI platform and toolset — and some of the future possibilities under current discussion, including the potential implementation of new and substantially different versions of core components. […]

Posted on July 7th, 2020

Written by Ben Goertzel

Notable OpenCog Related Events to be Featured at AGI-20

As the premier conference series devoted to promotion of and research into Artificial General Intelligence, this year's AGI-20 conference will feature several important events related to furthering OpenCog development. In particular, the conference will feature a workshop on “Next Generation AGI Architectures,” and tutorials on “Probabilistic reasoning and pattern mining using OpenCog” and on “Applied […]

Also SingularityNET, which is a marketplace of interconnected AI APIs. Intriguing idea.

Apple Said to Be Developing an AI Health Coach

Health coaching, easy?

Apple Said to Be Developing an AI Health Coach

Codenamed Quartz, the monthly subscription service will guide users through their exercise, diet, and sleep-related goals.

By Adrianna Nine April 28, 2023

Apple is reportedly working on bringing AI to its devices to coach users through their health goals. Bloomberg chief correspondent Mark Gurman, known best for his reliable Apple reporting, wrote Tuesday that the Cupertino-based company was developing a new bid to convert and maintain users interested in health-related features. 

Referred to internally as Quartz, the service offers users tips and motivation for exercise, eating habits, and sleep. Quartz will pair user data gleaned from the Apple Watch with its custom AI to “make suggestions and create coaching programs tailored to specific users,” according to Gurman’s sources. Users must pay a monthly subscription fee to use Quartz, which will have its own exclusive app. 

Though Apple declined to comment when Gurman asked about the service, insiders said the company will likely roll out Quartz sometime in 2024. (Like virtually any release window, this is always subject to change.) In the interim, Apple is expected to introduce an iPad version of the iPhone Health app under iPadOS 17 this fall. The app will allow users to visualize their health data in larger formats, which are valuable for spotting long-term trends and reading electrocardiogram (ECG) results. Users can also start tracking their emotions in either version of the app by answering mood-related questions and comparing how they’ve felt over longer periods. ...' 

PRC and Chat

Very nicely done; good basics and hints.

The perils of AI (Artificial Intelligence) in the PRC  (China) 

April 17, 2023 @ 6:49 am · Filed by Victor Mair under Artificial intelligence, Computational linguistics, Language and politics


Here at Language Log, for the last couple months, we've been having long, intense discussions about ChatGPT and other AI chatbots and LLM (Large Language Model) applications.  Now, it seems that the battle over such AI programs has reached the level of ideological warfare.

"America, China and a Crisis of Trust"

Opinion | The New York Times (4/14/23)

Indeed, a story making the rounds in Beijing is that many Chinese have begun using ChatGPT to do their ideology homework for the local Communist Party cell, so they don’t have to waste time on it.

I have some evidence that this might well be true.  Already about half-a-dozen years ago, my M.A. students from the PRC whose parents were CCP members told me that the government required daily interaction with the propaganda installed on their phones — upon pain of being demoted or dismissed.  They had to read a specified amount of Xi-speak and answer questions about the content.  This demanded a serious investment of time (hours).  It was considered to be especially onerous for those CCP members whose day jobs (doctors, bureaucrats, stock brokers, etc., etc.) already demanded a very full work schedule in the office.  So many, if not most of them, hired various human and electronic services to meet the obligations.

What Kind of Mind does ChatGPT Have?

Interesting thoughts.

What Kind of Mind Does ChatGPT Have?

By The New Yorker, April 14, 2023


ChatGPT is amazing, but in the final accounting it is clear that what has been unleashed is more automaton than golem.

Credit: Nicholas Konrad / The New Yorker

With GPT-3, OpenAI made a significant leap forward in the study of artificial intelligence. But once we have taken the time to open up the black box and poke around the springs and gears found inside, we discover that it does not represent an alien intelligence with which we must now learn to coexist; instead, it runs on the well-worn digital logic of pattern-matching, pushed to a radically larger scale.

It is hard to predict exactly how ChatGPT and similar large language models will end up integrated into our lives, but we can be assured that they are incapable of hatching diabolical plans and are unlikely to undermine our economy.  ....

Thursday, April 27, 2023

Do Retailers Need to Have the AI Talk With Consumers?

Likely not, but since it's easy to do, expect some tests.

Do Retailers Need to Have the AI Talk With Consumers?    by Tom Ryan in RetailWire

Recent surveys show consumers are interested in but also increasingly concerned about potential threats from artificial intelligence (AI) with the arrival of ChatGPT and other generative AI.

A recent survey conducted by Forbes Advisor found 76 percent of U.S. consumers were concerned with misinformation from AI tools such as Google Bard, ChatGPT and Bing Chat. Most were concerned about AI’s use for product descriptions, product reviews, chatbots answering questions and personalized advertising.

The findings suggest “a consumer demand for transparency and ethical AI practices to foster trust between businesses and their customers,” according to Forbes.

A survey from CX platform DISQO taken in early March found 34 percent of U.S. adults don’t think generative AI tools should be used for most consumer-facing content (43 percent among Boomers versus 21 percent for Gen Z).

The top-five concerns around AI were poorer accuracy, cited by 45 percent; lack of human touch, 38 percent; negative impact on jobs, 36 percent; low emotional depth, 35 percent; and more bias, 29 percent. Sixty-eight percent had a low overall knowledge level of AI-generated content tools.

“Consumers are wary and need to be informed and educated about what’s in it for them,” Patrick Egan, director of research and insights, DISQO, said in a statement.

A Morning Consult survey taken in mid-February found that while more than half of the U.S. public believes AI integrations into products and services are the future of technology, just one-third think AI technologies will be developed responsibly. One-third trust AI to provide factual results.

“We don’t need to be afraid of it, but we do need to be in control of it,” Massachusetts Congressman Jake Auchincloss told Morning Consult. “It can’t be like social media where it was allowed to scale and to influence much of our private and public lives before we really got a handle on it — and frankly, still haven’t gotten a handle on it.” ...'

Why Hasn’t One-Click Checkout Gained Much Traction Beyond Amazon?

For now, this tech has stalled.

Why Hasn’t One-Click Checkout Gained Much Traction Beyond Amazon?    By Tom Ryan

Research from Cornell University finds adding “one-click” checkout leads online shoppers to increase their website visits, purchase a broader range of merchandise and spend an average of 28.5 percent more versus previous buying levels.

“Because one-click takes so much pain away from the shopping experience, we see consumers willing to spend more time on the site and search for more items,” said Murat Unal, a former Cornell professor who’s now an economist at Amazon.com, in an article for the Cornell Chronicle.

One-click checkout requires customers to store payment and delivery information with the retailer beforehand, eliminating the tedium of doing so with every checkout.
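The mechanism described above can be sketched as a tiny data model: store payment and delivery details once, then complete any later order in a single step. This is a hypothetical illustration, not Amazon's (or anyone's) actual implementation:

```python
# Minimal one-click-checkout sketch (hypothetical data model).
profiles = {}

def save_profile(user, payment_token, address):
    """One-time setup: store payment and delivery details for a user."""
    profiles[user] = {"payment": payment_token, "address": address}

def one_click_order(user, item):
    """Place an order with no further input from the shopper."""
    p = profiles.get(user)
    if p is None:
        raise KeyError("user must store payment/delivery info first")
    # Charge the stored token and ship to the stored address in one step.
    return {"item": item, "charged": p["payment"], "ship_to": p["address"]}

save_profile("alice", "tok_123", "123 Main St")
order = one_click_order("alice", "book")
print(order)
```

The friction removed is exactly the `save_profile` step repeated at every checkout; the Cornell result suggests removing it changes shopper behavior, not just speed.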

Amazon is known for its “Buy Now” button partly because it held the “1-Click” patent from 1999 until 2017. Amazon licensed the technology to Apple just before the launch of iTunes.

In a Knowledge at Wharton podcast recorded soon after the patent expired, Kartik Hosanagar, a Wharton marketing professor, described one-click as a “huge asset” for Amazon and “a very important event in the history of e-commerce.” He elaborated, “First, it was a very simple and intuitive system and generated a lot of controversy — could something so simple and obvious be patented? Second, it became an important part of the experience that Amazon offered and became a flag bearer for the convenient shopping experience that Amazon came to be known. And finally, it showed how e-commerce was as much about technology and data as it was about retail.”

Paypal, Apple and Shopify have introduced one-click offerings with the promise of reducing cart abandonment, but one-click isn’t as pervasive as some predicted it would become after the patent’s expiration. One-click checkout startups have also faced turbulence with Stripe-backed Fast shutting down last year and Bolt undergoing layoffs earlier this year ...'

Wednesday, April 26, 2023

ChatGPT in Business: Doing More or Less than Expected in Construction

 Interesting example here.  Early experience yes. 

A case of ChatGPT doing less than expected? 

Causes? Next? Mostly positive, but the interviewer was completely skeptical. Surprised they did not disagree with using wheelbarrows. These methods will not do physics, but will manage the data and its relationships to many kinds of tasks and schedules.

ARTIFICIAL INTELLIGENCE Published April 26, 2023 5:00pm EDT

AI set to transform construction industry

Supply chain, building material software company DigiBuild using ChatGPT with spectacular results

By Breck Dumas FOXBusiness

FIRST ON FOX – Artificial intelligence has entered the construction industry, and early adopters say the efficiencies and cost-cutting measures will revolutionize the $10 trillion sector of the global economy for the better.

Supply chain and building material software company DigiBuild has been using OpenAI's ChatGPT to bolster its program for months, and is set to unveil the results at an event in Miami on Wednesday evening.

DigiBuild, a supply chain and building material software company, has been using ChatGPT for months.

But ahead of the announcement, DigiBuild CEO Robert Salvador gave FOX Business an exclusive sneak peek of how the powerful AI tool has improved efficiency and slashed costs for the firm's clients, and he says the technology will be "market changing."

The construction industry is still dogged by the high material costs and supply chain woes brought on by the pandemic, and DigiBuild's software aims to help developers and contractors save money and improve their schedules. The help of AI has provided a remarkable boost to that end.

To the company's knowledge, DigiBuild is the first to introduce ChatGPT into the construction supply chain, and the firm has some inside help. The building software firm is backed by major investors, including Y Combinator – which trained OpenAI CEO Sam Altman – and has an exclusive Slack channel with OpenAI that allows experts to build together.

Construction workers are shown with the Manhattan skyline and Empire State Building behind them in Brooklyn, New York City, on Jan. 24, 2023. (Ed Jones / AFP via Getty Images / Getty Images)

DigiBuild has been around five years and has automated the job of sifting through suppliers to find materials and working out scheduling. Now, what used to take a team of humans hundreds of labor hours using Excel spreadsheets, notebooks and manual phone calls has been reduced to a matter of seconds with the help of large language models.

"ChatGPT has taken us to the next level," Salvador said. "Supersonic."


"Instead of spending multiple hours probably getting a hold of maybe five or six suppliers, ChatGPT can find 100 of them and even automate outreach and begin communications with those 100 suppliers and say, 'Hey, we're DigiBuild. We need to find this type of door, can you provide a quote and send it back here?'" he said. "We can talk to 100 suppliers in one minute versus maybe a handful in a couple hours."
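The workflow Salvador describes can be sketched in a few lines. This is a hypothetical illustration, not DigiBuild's code: `draft_rfq` stands in for a real chat-completion API call, and the supplier names are invented.

```python
# Hypothetical sketch of automating supplier outreach with an LLM.
# draft_rfq() stands in for a real chat-completion API call that would
# draft a tailored request-for-quote message per supplier.

def draft_rfq(company: str, item: str, supplier: str) -> str:
    """Stand-in for an LLM call that drafts a request-for-quote message."""
    return (f"Hello {supplier}, we're {company}. We need to source "
            f"{item}. Can you provide a quote and availability?")

def outreach(company: str, item: str, suppliers: list[str]) -> dict[str, str]:
    # Draft one tailored message per supplier; a real system would send
    # these via email or an API and collect replies asynchronously.
    return {s: draft_rfq(company, item, s) for s in suppliers}

messages = outreach("DigiBuild", "commercial steel doors",
                    [f"Supplier {i}" for i in range(1, 101)])
print(len(messages))  # 100 suppliers contacted in one pass
```

The point of the sketch is the fan-out: the drafting step that once took hours per supplier becomes a cheap per-supplier call inside a loop.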

DigiBuild CEO Robert Salvador says ChatGPT has taken the company "supersonic."

The CEO offered a real-world example of a job where material costs were literally slashed by more than half using the new technology.

One of DigiBuild's clients, VCC Construction, needed closet shelving for a project in Virginia, and the builder could only find one quote for $150,000 with limited availability. With the click of a button, DigiBuild was able to find a vendor in the Midwest that provided the shelving and delivered it within weeks for $70,000.

Salvador says to imagine those results for a $500 million job or across the industry. He expects AI technology to become widely adopted.

"Before companies like us, the construction industry was still early in its digital transformation – they were late to the party," he told FOX Business. But now, "It's very much going all in on that, finally."

What's Behind the ChatGPT History Change?

Apparently some considerable changes in the interpretation of data and its use in Europe and beyond, via GDPR, and to some degree preventing the use of your data for training. May be quite restrictive in practice. Longer-term implications unclear.


What's Behind the ChatGPT History Change? How You Can Benefit + The 6 New Developments This Week

9,704 views  Apr 26, 2023

Underneath a simple-sounding tweet about changes to chat history within ChatGPT is a data controversy that could change the near-term future of GPT models. This video will cover not only the new features that you now have access to, it will cover why the announcement was made, ChatGPT Business, the wave of lawsuits and data policy changes underway this week and much more. 

You will find out ways to check if your data has been used to train the models, learn more about the secret ‘Pile’ and ‘Common Crawl’ that may be behind GPT 4 and discover some of the potentially illicit ways the model may have been trained. I also cover how OpenAI may not fully be in control of what is in the dataset. 

In an ironic twist I’ll also show how Bard may have been caught training on ChatGPT and how OpenAI is set to trademark GPT, which if successful could change the naming ecosystem we have become familiar with. But, ultimately, will GPT-4 be able to outsmart even data litigation?   ... 

How Unilever Expedites Product Innovation, AI, Automation and Robots

We saw similar trial adoption in the 80s and 90s.

How Unilever Expedites Product Innovation, AI, Automation and Robotics

Liz Dominguez 


Robots are having a heavy hand in product innovation at Unilever, thanks in large part to the efforts being implemented at the company’s Materials Innovation Factory (MIF) in Liverpool, according to a recent blog post. 

These innovations are having a global impact, influencing product discovery, research, and manufacturing in not just the U.K., but across the sea in the U.S. as well. There are three special “ladies” to thank, per Unilever, and they’re named Ariana, Shirley, and Gwen. 

These three robots are working alongside 250 R&D experts at Unilever’s 120,000-square-foot facility to help develop science-backed products through the power of automation. 

According to Unilever, MIF has the highest concentration of robots doing material chemistry in the world, and each machine is designed to crunch “colossal amounts of data and maintain consistency across samples and testing.”

The company makes it clear, however, that it isn’t looking to replace human efforts; it’s merely freeing up time by reducing time-consuming, repetitive jobs so that experts can focus on invention and exploration.

“The MIF’s purpose is to create a community of talented future research leaders, exchanging ideas with academic colleagues and accelerating the discovery process,” said Unilever. “Our partnership here allows us to tap into the best minds and resources in robotics, which strengthens our insights and capabilities to power next-level innovation, scientific discovery, and produce products with superior performance.”


Robots

Ariana

Beauty bot Ariana is preparing mass amounts of hair fiber samples in mere seconds in order to create hair products for Unilever’s brands, including Dove’s Intensive Repair line. Using a patented Fiber Repair Actives technology, Unilever can help consumers reconstruct inner hair fibers to reduce breakage and repair from within the hair strand. 


Shirley

This robot is helping to expedite and mimic the process of hair washing and rinsing, running through 120 samples of hair every 24 hours. Shirley can rinse, detangle, and blow dry hair, speeding up the analysis process so researchers can create accurate haircare product formulas, such as for the TRESemmé Colour Radiance Booster product line. The range of products uses tech that Shirley helped invent in order to better protect hair surfaces and keep color vibrant longer within hair fibers.


Robot Gwen plays an important role in the sensory aspect of products, generating, measuring, and analyzing foam. As Unilever uses foam in many of its products to deliver ingredients, the company said it’s important it can accurately attribute performance related to the amount, quality, and type of bubbles and froth. 

“Understanding its physical, chemical and consumer relevance is important in product development,” said the company. 

How to Leverage AI for Substantial Business Value

Irving Wladawsky-Berger does his usual good job. I remind you that there are many links below that are worth looking at, so do click through to get it all. I have gotten many versions of the question below over the years.

How to Leverage AI for Substantial Business Value

“AI initiatives at many organizations are too small and too tentative,” wrote Babson professor Tom Davenport and Deloitte principal consultant  Nitin Mittal in “Stop Tinkering with AI,” a recently published HBR article. The article is adapted from their book All-in on AI: How Smart Companies Win Big with Artificial Intelligence which was published earlier this year.

AI technologies have significantly advanced over the past few years. But, while leading edge firms are placing AI at the center of their business strategies, a number of recent surveys continue to show that the majority of enterprises are still in the early stages of AI experimentation and deployment and risk being left further behind. For example, the latest Deloitte survey on the “State of AI in the Enterprise” reached out to over 2,800 executives from advanced economies and found that 28% of respondents were deploying AI at scale and achieving high outcomes, but 46% were still in the early stages of deployment with no significant outcomes. ... '

In another recent survey, Accenture reached out to over 1,600 C-suite executives of the world’s largest companies and found that only 12% had the strategic and operational AI capabilities needed to achieve superior growth, while the majority of firms, 63%, were still at the experimenting stage and had only average AI capabilities.  ... ' 

‘Wrinkles’ in Time Experience Linked to Heartbeat

Intriguing research.

‘Wrinkles’ in time experience linked to heartbeat   By James Dean, Cornell Chronicle, March 6, 2023

How long is the present? The answer, Cornell researchers suggest in a new study, depends on your heart.   They found that our momentary perception of time is not continuous but may stretch or shrink with each heartbeat.

The research builds evidence that the heart is one of the brain’s important timekeepers and plays a fundamental role in our sense of time passing – an idea contemplated since ancient times, said Adam K. Anderson, professor in the Department of Psychology and in the College of Human Ecology (CHE).

“Time is a dimension of the universe and a core basis for our experience of self,” Anderson said. “Our research shows that the moment-to-moment experience of time is synchronized with, and changes with, the length of a heartbeat.”

Saeedeh Sadeghi, M.S. ’19, a doctoral student in the field of psychology, is the lead author of “Wrinkles in Subsecond Time Perception are Synchronized to the Heart,” published March 2 in the journal Psychophysiology. Anderson is a co-author with Eve De Rosa, the Mibs Martin Follett Professor in Human Ecology (CHE) and dean of faculty at Cornell, and Marc Wittmann, senior researcher at the Institute for Frontier Areas of Psychology and Mental Health in Germany.

Time perception typically has been tested over longer intervals, when research has shown that thoughts and emotions may distort our sense of time, perhaps making it fly or crawl. Sadeghi and Anderson recently reported, for example, that crowding made a simulated train ride seem to pass more slowly.

Such findings, Anderson said, tend to reflect how we think about or estimate time, rather than our direct experience of it in the present moment.

To investigate that more direct experience, the researchers asked if our perception of time is related to physiological rhythms, focusing on natural variability in heart rates. The cardiac pacemaker “ticks” steadily on average, but each interval between beats is a tiny bit longer or shorter than the preceding one, like a second hand clicking at different intervals.

The team harnessed that variability in a novel experiment. Forty-five study participants – ages 18 to 21, with no history of heart trouble – were monitored with electrocardiography, or ECG, measuring heart electrical activity at millisecond resolution. The ECG was linked to a computer, which enabled brief tones lasting 80-180 milliseconds to be triggered by heartbeats. Study participants reported whether tones were longer or shorter relative to others.

The results revealed what the researchers called “temporal wrinkles.” When the heartbeat preceding a tone was shorter, the tone was perceived as longer. When the preceding heartbeat was longer, the sound’s duration seemed shorter.

“These observations systematically demonstrate that the cardiac dynamics, even within a few heartbeats, is related to the temporal decision-making process,” the authors wrote.

The study also showed the brain influencing the heart. After hearing tones, study participants focused attention on the sounds. That “orienting response” changed their heart rate, affecting their experience of time.

“The heartbeat is a rhythm that our brain is using to give us our sense of time passing,” Anderson said. “And that is not linear – it is constantly contracting and expanding.”

The scholars said the connection between time perception and the heart suggests our momentary perception of time is rooted in bioenergetics, helping the brain manage effort and resources based on changing body states including heart rate.

The research shows, Anderson said, that in subsecond intervals too brief for conscious thoughts or feelings, the heart regulates our experience of the present.

“Even at these moment-to-moment intervals, our sense of time is fluctuating,” he said. “A pure influence of the heart, from beat to beat, helps create a sense of time.”  ....  '

Tuesday, April 25, 2023

How AI Is Building the Next Blockbuster Videogames

Building better video games with AI. And how might we build games that embed a means of solving for true value reports? Seeing the opportunity for that.

How AI Is Building the Next Blockbuster Videogames

By The Wall Street Journal, April 25, 2023

ChatGPT-like technology is being used to develop games faster and make them more interactive.

Videogame companies are using generative artificial intelligence (AI) tools to create next-generation games faster, at less cost, and with more advanced interactivity.

Electronic Arts is employing generative AI to produce digital sketches for visualizing concepts like challenges and game levels in hours rather than weeks.

Meanwhile, Roblox is developing its own generative AI tools to help developers build new types of materials and movements on screen via text prompts.

Ubisoft Entertainment's Ghostwriter AI tool writes first drafts of background or nonplayable characters' chatter.

Industry executives doubt the technology will replace human developers, who are required to input the prompts and to refine the results.

From The Wall Street Journal

View Full Article - 

Researchers From Google AI and UC Berkeley Propose an AI Approach That Teaches LLMs to Debug

Researchers From Google AI and UC Berkeley Propose an AI Approach That Teaches LLMs to Debug its Predicted Program via Few-Shot Demonstrations

By Aneesh Tickoo, April 14, 2023, in MarkTechPost

Producing accurate code in a single attempt can be challenging for many programming tasks. Code generation has long been a problem, with applications including code synthesis from natural language, programming by examples, and code translation. Recent large language models, in particular, have improved substantially over earlier deep neural networks. One line of research has developed reranking techniques to choose the best candidate from multiple samples, typically requiring tens of samples. These techniques were inspired by observations that correct code is much more likely to be predicted when various programs are sampled from the model.

It makes intuitive sense that a programmer’s first piece of code is usually inaccurate. Humans often examine the code, check into the execution outcomes, and then make adjustments to fix implementation flaws rather than entirely rejecting faulty code. Previous research has suggested deep learning algorithms to correct the anticipated code, which shows considerable performance improvements on various coding jobs. Nevertheless, these methods call for extra training for the code repair model.

Prior studies suggest that large language models are not yet able to correct code in the absence of external feedback, such as unit tests or human instructions, despite some recent studies showing that these models have the potential to generate feedback messages to critique and refine their outputs in some natural language and reasoning domains. In this study, researchers from Google Research and UC Berkeley offer SELF-DEBUGGING, which uses few-shot prompting to teach the large language model to debug its own predicted code. SELF-DEBUGGING commands the model to run the code, then create a feedback message based on the code and the execution outcome, without needing extra model training.  ... ' 
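The loop described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: `ask_model` is a hypothetical stand-in for the few-shot-prompted language model, and the toy "model" at the bottom simply returns a buggy then a fixed candidate.

```python
# Minimal sketch of a self-debugging loop: predict code, execute it
# against a test, and feed the execution result back to the model so it
# can revise its answer. No extra model training is involved.

def run_candidate(code: str, test: str) -> tuple[bool, str]:
    """Execute candidate code plus a unit test, capturing pass/fail."""
    env: dict = {}
    try:
        exec(code + "\n" + test, env)
        return True, "all tests passed"
    except Exception as e:
        return False, f"{type(e).__name__}: {e}"

def self_debug(ask_model, task: str, test: str, max_turns: int = 3) -> str:
    code = ask_model(task, feedback=None)          # initial prediction
    for _ in range(max_turns):
        ok, message = run_candidate(code, test)
        if ok:
            return code
        code = ask_model(task, feedback=message)   # revise using feedback
    return code

# Toy illustration: a fake "model" that fixes its bug after one round
# of execution feedback.
attempts = iter(["def inc(x): return x", "def inc(x): return x + 1"])
fixed = self_debug(lambda task, feedback: next(attempts),
                   "write inc(x) = x + 1", "assert inc(1) == 2")
print(fixed)
```

The design point is that the feedback message comes from actually running the code, not from the model critiquing itself in a vacuum.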

Deloitte Trends Excerpts

Been a while since I have taken a look at Deloitte's trends work. Had several connections there. Here is a short overview of the latest, worth a further look.

Tech Trends 2023: The technology forces shaping tomorrow

Deloitte Insights

Helping future-focused leaders navigate what's next

Unleashing value from digital transformation: Paths and pitfalls

Our research reveals how certain actions can increase digital transformation value—and, just as importantly, how they can erode it.


Staying ahead of the sustainability curve

Explore what sustainable businesses are doing to stay ahead of the curve and how strategic foresight is shaping sustainability in business.



Digital frontier: A technology deficit in the boardroom

Deloitte's new research of global directors and corporate leaders uncovers gaps in board engagement on digital transformation and technology topics

Intelligent enterprise fueling the supply chain of the future

Advanced digital technologies can redefine how an enterprise operates to create an agile and disruption-proof supply chain


Economics spotlight: The (true) cost of a low-carbon future

Do standard measures of economic growth accurately capture the economic impact of climate change policies? 

Top 10 Reading Guide

Explore the top business insights your peers are reading this quarter.


The workforce well-being imperative


What does it take to run a metaverse?


Managing workforce risk in an era of unpredictability and disruption


Talent/workforce, Digital transformation, Diversity, equity, and inclusion


What are the key trends, challenges, and opportunities that may impact your business and influence your strategy in the coming year? Explore our 2023 trends series below for perspectives and insights.

Digital Media Trends, Government Trends, Human Capital Trends

Global Marketing Trends, Tech Trends, TMT Predictions


Deloitte Insights Magazine

Issue 31: Capacity for change

Deloitte Insights Podcasts

Hear from influential voices on the business trends and challenges that matter most to you  ... '

The Power of Low-Power GPS Receivers for Nanosats

From Communications of the ACM   View Full Article



Technical Perspective: The Power of Low-Power GPS Receivers for Nanosats

By Karthik Dantu   Communications of the ACM, November 2022, Vol. 65 No. 11, Page 132   10.1145/3559769

Advancements in embedded systems, sensing technology, and an understanding of the Earth's atmosphere have allowed us to deploy satellites for various applications. A more recent phenomenon is the use of low-Earth (<2,000km from Earth) and medium-Earth (between 2,000km and 35,000km from Earth) orbits to deploy smaller satellites called nanosats to perform applications such as surveillance, mapping, estimating sea levels and areas of forests and lakes. Most of these satellites use GPS to localize themselves. A unique challenge at the scale of a nanosat is that the size, weight, and energy required to run a typical GPS receiver might be more than what is affordable for long-term operation.

The work explored in the following paper focuses on the energy consumption of a typical GPS receiver and its operational challenges in a nanosat setting. The challenges include: power draw of a typical GPS receiver could be as high as 20% of overall power consumption; high-speed travel of nanosatellites (~7.8km/s) and relative speed to the GPS satellites (which themselves travel at 3.8km/s) makes getting a fix with a GPS satellite very challenging with high Doppler shift; lack of attitude control (typical of low power satellites) results in loss of GPS signal and corresponding loss in a fix; and, small delays result in large error requiring precise computation at low power.  ... ' 
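The Doppler problem above is easy to quantify with first-order arithmetic. A rough sketch, assuming the standard GPS L1 carrier of 1575.42 MHz and a worst-case line-of-sight closing speed near the sum of the two orbital velocities quoted in the paper summary:

```python
# First-order Doppler-shift arithmetic for the nanosat GPS scenario.
# Assumed values: GPS L1 carrier 1575.42 MHz; worst-case line-of-sight
# closing speed ~ 7.8 km/s (nanosat) + 3.8 km/s (GPS satellite).

C = 299_792_458.0        # speed of light, m/s
F_L1 = 1_575.42e6        # GPS L1 carrier frequency, Hz

def doppler_shift(rel_velocity_mps: float, carrier_hz: float = F_L1) -> float:
    """First-order Doppler shift in Hz for a line-of-sight velocity."""
    return carrier_hz * rel_velocity_mps / C

# A terrestrial receiver moving at tens of m/s sees shifts on the order
# of a kilohertz; a nanosat's closing speed pushes this to tens of kHz,
# greatly widening the frequency search space needed to acquire a fix.
shift = doppler_shift(7_800 + 3_800)
print(f"{shift / 1e3:.1f} kHz")
```

This is why acquisition is so much harder in orbit: the receiver must search a Doppler window tens of kilohertz wide rather than a few kilohertz.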

Gates Says AI Will Teach Literacy


Bill Gates: AI will be teaching kids literacy within 18 months   By Ryan Daws | April 24, 2023 | TechForge Media


AI chatbots could be used to improve children’s reading and writing skills within the next 18 months, according to Microsoft co-founder Bill Gates.

In a fireside chat at the ASU+GSV Summit in San Diego, Gates explained that the “AIs will get to that ability, to be as good a tutor as any human ever could.”

AI chatbots such as OpenAI’s ChatGPT and Google’s Bard have developed rapidly in recent months and can now compete with human-level intelligence on some standardised tests.

Teaching writing skills has traditionally been difficult for computers, as they lack the cognitive ability to replicate human thought processes, Gates said. However, AI chatbots are able to recognise and recreate human-like language.

New York Times tech columnist Kevin Roose has already used ChatGPT to improve his writing, using the AI’s ability to quickly search through online style guides. Some academics have also been impressed by chatbots’ ability to summarise and offer feedback on text or even to write full essays.

The technology must improve before it can become a viable tutor, and Gates said that AI must get better at reading and recreating human language to better motivate students.

While it may be surprising that chatbots are expected to excel at reading and writing before maths, the latter is often used to develop AI technology and chatbots have difficulties with mathematical calculations.

If a solved math equation already exists within the datasets that the chatbot is trained on, it can provide the answer. However, calculating its own solution is more complex and requires improved reasoning abilities, Gates explained.

Gates is confident that the technology will improve within the next two years and he believes that it could help make private tutoring available to a wide range of students who may not otherwise be able to afford it.

While some free versions of chatbots already exist, Gates expects that more advanced versions will be available for a fee, although he believes that they will be more affordable and accessible than one-on-one tutoring with a human instructor.


Tech Industry Pioneer Sees Way for U.S. to Lead in Advanced Chips

Worked with Ivan Sutherland way back when.


Tech Industry Pioneer Sees Way for U.S. to Lead in Advanced Chips

By The New York Times, April 21, 2023

Ivan Sutherland was instrumental in helping to create today's dominant approach to making computer chips.   Sutherland is arguing that an alternative technology that predates CMOS, and has had many false starts, should be given another look.

Ivan Sutherland, who helped pioneer the complementary metal-oxide semiconductor decades ago, believes the U.S. can regain the global lead in advanced chipmaking.

Sutherland, the 1988 ACM Turing Award recipient, said computer designers will be able to create faster systems via supercooled electronic circuits that switch without electrical resistance and produce no excess heat at higher speeds.

Also, superconductor-based systems might address the cooling problems that hound the world's datacenters.   Sutherland said such technologies also could be critical to national security, with their high speed and low power requirements benefiting next-generation 6G chips that could replace Chinese-dominant 5G technology.

He also suggested the U.S. should consider training young engineers to conceive of alternative concepts, rather than continuing to focus on ever-less-reliable and costly chip technology.

From The New York Times    

View Full Article - May Require Paid Subscription   

Costs of ChatGPT

ChatGPT could cost over $700,000 per day to operate. Microsoft is reportedly trying to make it cheaper.

Aaron Mok

ChatGPT could cost OpenAI up to $700,000 a day to run due to "expensive servers," an analyst told The Information. 

ChatGPT requires massive amounts of computing power on expensive servers to answer queries.

Microsoft is secretly building an AI chip to reduce the cost, per The Information. ... '

Monday, April 24, 2023

Microsoft Visual ChatGPT

Microsoft Introduces:    Visual ChatGPT     

This is a demo of the work Visual ChatGPT: Talking, Drawing and Editing with Visual Foundation Models.

This space connects ChatGPT and a series of Visual Foundation Models to enable sending and receiving images during chatting ...   '

Supporting Technical Paper: https://arxiv.org/abs/2303.04671

Have only tried it minimally; most interesting is the means of being able to draw AND edit images. How well it works is unclear.

Change of the Buyer-Supplier Dynamic

Inflation changing things.   Implications?

How Economic Uncertainty Is Changing the Buyer-Supplier Dynamic

April 24, 2023, Robert J. Bowman, in SupplyChainBrain

The buyer-supplier relationship is in constant flux. Both sides are forced to pivot in response to changes in inflation, interest rates and the general economic climate. And the balance of power between the two shifts accordingly.

In recent years, buyers have, for the most part, been dictating terms. In an attempt to hold on to cash for as long as possible, they’ve progressively stretched out payments to suppliers, many of whom have struggled to stay solvent as a result. Now, however, there are signs that the advantage is beginning to swing back to suppliers, although they continue to be challenged by an uncertain economy.

Inflation is a concern for both sides, says Maureen Sullivan, head of supply chain finance with MUFG, the global bank with headquarters in Japan. There are currently two primary challenges that companies are facing as they struggle to achieve supply chain stability, she says: “access to goods and management of cost.” Both are tied directly to inflation, which leads to rising interest rates and, ultimately, higher prices.

As the Federal Reserve continues to notch up interest rates, buyers and suppliers cast about for ways to contain their financing costs. They’re turning to programs that take over the supplier’s receivables for early payment, while allowing the buyer to maintain longer terms.

Such financing programs are of particular value at times when banks are tightening up on lending. “They may make access to traditional types of financing more challenging for suppliers,” Sullivan says. “Financing programs provide an alternative source of liquidity.” At the same time, the supplier can obtain a financing rate based on the buyer’s credit profile, which tends to be more favorable.  .... '
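The arithmetic behind such early-payment programs is straightforward. A sketch with illustrative numbers (the rates and terms here are assumptions, not figures from the article):

```python
# Back-of-the-envelope sketch of early-payment (supply chain finance)
# economics. All rates and terms are illustrative assumptions.

def early_payment_cost(invoice: float, annual_rate: float,
                       days_accelerated: int) -> float:
    """Financing cost of receiving payment `days_accelerated` days early,
    using a simple 360-day-year discount convention."""
    return invoice * annual_rate * days_accelerated / 360

invoice = 100_000.0
own_rate, buyer_rate = 0.12, 0.06   # supplier's vs. buyer's borrowing rate
days = 60                           # paid at day 30 instead of day 90

# Financing at the buyer's (stronger) credit profile halves the cost
# of the same 60-day acceleration:
saving = (early_payment_cost(invoice, own_rate, days)
          - early_payment_cost(invoice, buyer_rate, days))
print(f"${saving:,.0f} saved per ${invoice:,.0f} invoice")
```

The supplier gets cash 60 days sooner at roughly half the financing cost, while the buyer keeps its longer payment terms, which is the core trade the article describes.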

Computing on the Brain: Small Spheres of Neurons Show Promise for Drug Testing and Computation

Sounds like a remarkable step.

Organoid Intelligence: Computing on the Brain

Small spheres of neurons show promise for drug testing and computation

By Michael Nolan

In parallel to recent developments in machine learning like GPT-4, a group of scientists has recently proposed the use of neural tissue itself, carefully grown to recreate the structures of the animal brain, as a computational substrate. After all, if AI is inspired by neurological systems, what better medium to do computing than an actual neurological system? Gathering developments from the fields of computer science, electrical engineering, neurobiology, electrophysiology, and pharmacology, the authors propose a new research initiative they call “organoid intelligence.”

OI is a collective effort to promote the use of brain organoids—tiny spherical masses of brain tissue grown from stem cells—for computation, drug research and as a model to study at a small scale how a complete brain may function. In other words, organoids provide an opportunity to better understand the brain, and OI aims to use that knowledge to develop neurobiological computational systems that learn from less data and with less energy than silicon hardware.


Taking the existing field of neuromorphic computing, where the structure of neurons and the connections between them are studied and mimicked in silicon architectures, OI extends the engineering analogy with the opportunity to directly program desired behaviors into the firing activity of animal brain cell cultures.

Organoids typically measure 500 microns in diameter—roughly the thickness of your fingernail. As organoids develop, the researchers say, their constituent neurons begin to interconnect in networks and patterns of activity that mimic the structures of different brain regions. The development of the organoids field has been made possible by two bioengineering breakthroughs: induced pluripotent stem cells (IPSCs) and 3D cell culturing techniques. IPSCs are stem cells, notably capable of developing into any cell found in an animal’s body, that are created by turning an adult cell back into a stem cell.

 These induced stem cells are then biochemically coaxed into the specific neurons and glia needed to construct a given organoid. More recently developed 3D-scaffolding methods allow biologists to grow iPSC-derived neural tissues vertically as well as horizontally, allowing organoids to develop the interneuronal networks seen in an animal’s brain. Scientists have studied 2D-cultures for decades, but monolayer tissues are not able to grow into brain-like networks in the ways the organoids can .... '

Sunday, April 23, 2023

Google Bard Now Supports Code Generation

Will be trying this with Python and potentially other languages.

Google is updating its Bard AI chatbot to help developers write and debug code. Rivals like ChatGPT and Bing AI have supported code generation, but Google says it has been “one of the top requests” it has received since opening up access to Bard last month.   In The Verge.

Bard can now generate code, debug existing code, help explain lines of code, and even write functions for Google Sheets. “We’re launching these capabilities in more than 20 programming languages including C++, Go, Java, Javascript, Python and Typescript,” explains Paige Bailey, group product manager for Google Research, in a blog post.

You can ask Bard to explain code snippets or explain code within GitHub repos similar to how Microsoft-owned GitHub is implementing a ChatGPT-like assistant with Copilot. Bard will also debug code that you supply or even its own code if it made some errors or the output wasn’t what you were looking for.

Speaking of errors, Bailey admits that Bard “may sometimes provide inaccurate, misleading or false information while presenting it confidently,” much like many AI-powered chatbots. “When it comes to coding, Bard may give you working code that doesn’t produce the expected output, or provide you with code that is not optimal or incomplete,” says Bailey. “Always double-check Bard’s responses and carefully test and review code for errors, bugs and vulnerabilities before relying on it.” Bard will also cite the source of its code recommendations if it quotes them “at length.”

Google is pushing ahead with its Bard chatbot despite reports that suggest employees repeatedly criticized the chatbot and labeled it “a pathological liar.” Google has reportedly sidelined ethical concerns to keep up with rivals like OpenAI and Microsoft. In our tests comparing Bard, Bing, and ChatGPT, we found Google’s Bard chatbot to be less accurate than its rivals.  ...'

AlphaFold Spreads through Protein Science

More on a topic we worked on. A still better approach emerges for use.

AlphaFold Spreads through Protein Science

By Chris Edwards

Communications of the ACM, May 2023, Vol. 66 No. 5, Pages 10-12  10.1145/3586582

AlphaFold-predicted structure of estrogen receptor protein, seen binding to DNA

Two years ago, as the COVID-19 pandemic swept across the world, researchers at DeepMind, the artificial intelligence (AI) and research laboratory subsidiary of Alphabet Inc., demonstrated how they could use machine learning to achieve a breakthrough in the ability to predict how proteins, the workhorses of the living cell, fold into the intricate shapes they take on. The work gave hope to biologists that they could use this kind of tool to tackle diseases such as the SARS-CoV-2 coronavirus much more quickly in the future.

Researchers were able to assess the abilities of DeepMind's AlphaFold2 thanks to its inclusion in the 14th Critical Assessment of Structure Prediction (CASP14), a benchmarking competition that ran through 2020 and which added a parallel program to uncover the structures of key proteins from the SARS-CoV-2 virus to try to accelerate vaccine and drug development. The organizers of CASP14 declared the tool represented "an almost complete solution to the problem of computing three-dimensional structure from amino-acid sequences," though some caveats lie behind that statement.

Figure. An AlphaFold protein prediction with a very high (greater than 90 out of 100) per-residue confidence score.

In principle, quantum mechanical simulations can predict which collection of folds leads to the lowest combined energy of all the chemical bonds in the shape and the water and other molecules around it. However, this remains beyond the capacity of even today's computers and may not even be practical in most cases.

John Jumper, senior staff research scientist at DeepMind, points out that to perform a full molecular-dynamic simulation is not just computationally complex; it requires a complete specification of the environment around the protein in question. "Proteins are exquisitely sensitive machines and extremely finely balanced. We can't write down really good energy functions for them. Even small changes, like getting the salt concentration wrong or not specifying some condition, can cause them not to fold at all. And you have no hope of writing down all the correct conditions of every protein in the human cell," he says.

When biologists produce structures for proteins experimentally, they find ways to fix the molecule in what they hope is a representative conformation. One method is to isolate and crystallize the protein and then use X-ray diffraction to estimate the positions of atoms in the complex structure. Another increasingly common method is cryogenic electron microscopy (cryo-EM): freezing the isolated protein and then using the scattering of electron beams by the atoms to work out how the protein chain bends and folds. Years of effort have populated publicly accessible databases such as the Protein Data Bank (PDB) set up by a group of U.K. and U.S. laboratories in 1971. Though painstaking to create, this data has proven crucial to the growing efficacy of AI-based models.... '    

Not Training GPT-5. Is This a Pause?

Is this part of the suggested pause?

OpenAI is not currently training GPT-5

By Ryan Daws | April 17, 2023 | TechForge Media

Categories: Applications, Artificial Intelligence, Chatbots, Companies, Development, Ethics & Society,

Ryan is a senior editor at TechForge Media with over a decade of experience covering the latest technology and interviewing leading industry figures. He can often be sighted at tech conferences with a strong coffee in one hand and a laptop in the other. If it's geeky, he’s probably into it. Find him on Twitter (@Gadget_Ry) or Mastodon (@gadgetry@techhub.social)

Experts calling for a pause on AI development will be glad to hear that OpenAI isn’t currently training GPT-5.

OpenAI CEO Sam Altman spoke remotely at an MIT event and was quizzed about AI by computer scientist and podcaster Lex Fridman.

Altman confirmed that OpenAI is not currently developing a fifth version of its Generative Pre-trained Transformer model and is instead focusing on enhancing the capabilities of GPT-4, the latest version.

Altman was asked about the open letter that urged developers to pause training AI models larger than GPT-4 for six months. While he supported the idea of ensuring AI models are safe and aligned with human values, he believed that the letter lacked technical nuance regarding where to pause.

“An earlier version of the letter claims we are training GPT-5 right now. We are not, and won’t for some time. So in that sense, it was sort of silly,” said Altman.

“We are doing things on top of GPT-4 that I think have all sorts of safety issues that we need to address.”

GPT-4 is a significant improvement over its predecessor, GPT-3, which was released in 2020. 

GPT-3 has 175 billion parameters, making it one of the largest language models in existence. OpenAI has not confirmed GPT-4’s exact number of parameters but it’s estimated to be in the region of one trillion.

OpenAI said in a blog post that GPT-4 is “more creative and collaborative than ever before” and “can solve difficult problems with greater accuracy, thanks to its broader general knowledge and problem-solving abilities.”

In a simulated bar exam, GPT-3.5 scored around the bottom 10 percent. GPT-4, however, passed among the top 10 percent.

OpenAI is one of the leading AI research labs in the world, and its GPT models have been used for a wide range of applications, including language translation, chatbots, and content creation. However, the development of such large language models has raised concerns about their safety and ethical implications.

Altman’s comments suggest that OpenAI is aware of the concerns surrounding its GPT models and is taking steps to address them.

While GPT-5 may not be on the horizon, the continued development of GPT-4 and the creation of other models on top of it will undoubtedly raise further questions about the safety and ethical implications of such AI models.  ...'

Microsoft Releases Copilot for Viva, as it Rolls Out Generative AI to Apps

Next steps with Microsoft, now integrated with AI from OpenAI.   Will Microsoft be controlling an AI powered world?

Microsoft releases Copilot for Viva, as it Rolls Out Generative AI to apps   By Sharon Goldman   @sharongoldman     in Venturebeat

Since last month’s announcement of Microsoft’s generative AI-powered 365 Copilot — which was described as changing “work as we know it” — the company has been busy integrating the technology into its other applications.

Today, Microsoft announced it has added Copilot to Viva, its employee engagement and experience platform that launched in February 2021 and was part of a bet on the future of remote work. Now the company is betting on the power of generative AI in Viva: According to a blog post, Copilot in Viva “is built on the Microsoft 365 Copilot System, to give leaders an entirely new way to understand and engage their workforce.”

Microsoft 365 Copilot combines OpenAI’s GPT-4 with Microsoft Graph data (from your calendar, emails, chats, documents, meetings) and Microsoft 365 apps including Teams, Word, Outlook and Excel. The company said Copilot in Microsoft Viva will begin rolling out to customers later this year. 


Microsoft Copilot in Viva streamlines workforce communications

New Microsoft research released today, the Work Trend Index Special Report, found that high employee engagement correlates with stronger financial performance, and that employee engagement is a key part of overall performance. According to the study, companies with highly-engaged employees focus on clarity via intentional employee communications and goal setting, and they use data to build a powerful “feedback flywheel” to continuously improve over time. 

That’s where AI comes in: According to a company blog post, the new Copilot features for Viva focus on streamlining workforce communications and creating better organizational alignment.

For example, Copilot in Viva Goals can draft OKR recommendations based on existing Word documents, summarize the status of OKRs, identify blockers and suggest next steps. Viva Engage with Copilot can spur post ideas for company intranet pages, and Copilot in Viva Glint, which analyzes employee feedback and will be added to the platform in July, can help summarize thousands of employee survey comments.  ... ' 

Saturday, April 22, 2023

Advanced AI Faces New Regulatory Push in Europe

More regulatory pressure arises.

Advanced AI Faces New Regulatory Push in Europe

By The Wall Street Journal, April 21, 2023

The European Parliament in session.

Members of the European Parliament have been charged with hammering out a new draft of what the European Union calls its AI Act.

In an open letter published April 17, a group of EU lawmakers said regulators should be given authority to govern the development of artificial intelligence (AI) technologies.

The lawmakers, tasked with developing a draft of the AI Act, said the bill will direct AI development "in a direction that is human centric, safe, and trustworthy."

They added that the bill "could serve as a blueprint for other regulatory initiatives in different regulatory traditions and environments around the world."

The letter called for a high-level global AI summit with a focus on preliminary governing principles for deploying AI, and requested the U.S.-EU Trade and Technology Council develop an agenda for the summit at its next meeting.

From The Wall Street Journal

A Google AI Model Developed an Unexpected Skill


A Google AI Model Developed an Unexpected Skill

Google CEO Sundar Pichai.

Google CEO Sundar Pichai said there are elements of how artificial intelligence systems learn and behave that still surprise experts.

Concerns about AI developing skills independently of its programmers' wishes have long absorbed scientists, ethicists, and science fiction writers. A recent interview with Google's executives may be adding to those worries.

In an interview on CBS's 60 Minutes on April 16, James Manyika, Google's SVP for technology and society, discussed how one of the company's AI systems taught itself Bengali, even though it wasn't trained to know the language. "We discovered that with very few amounts of prompting in Bengali, it can now translate all of Bengali," he said.

CEO Sundar Pichai confirmed that there are elements of how AI systems learn and behave that still surprise experts: "There is an aspect of this which we call— all of us in the field call it as a 'black box'. You don't fully understand. And you can't quite tell why it said this." Pichai said the company has "some ideas" why this could be the case, but it needs more research to fully comprehend how it works.

CBS's Scott Pelley then questioned the reasoning for opening to the public a system that its own developers don't fully understand, but Pichai responded: "I don't think we fully understand how a human mind works either."

From Quartz

Northrop Grumman on the Webb Telescope

Happened on a writeup on what Northrop Grumman did with the Webb Space Telescope. Good overview. Recalled some of our work with them on supply chain oriented topics. Getting re-acquainted with their skills.

What it Was Like to Work on Webb

Before we could begin to build the most complex telescope ever, we had to invent technologies that never existed before. For many Northrop Grumman employees, the opportunity to work on Webb was the opening for boundless opportunity. From a new element on the Periodic Table and embracing digital transformation to creating zero gravity here on earth, Northrop Grumman employees developed something that the entire universe can be proud of. 

And in cyber systems.  ...

AI Tools Will Inspire Hacks

Inevitable, especially as they are easier to test and use.

AI Tools like ChatGPT likely to empower hacks, NSA cyber boss warns

By Colin Demarest, Wednesday, Apr 12  in c4isrnet.com 

WASHINGTON — Generative artificial intelligence that fuels products like ChatGPT will embolden hackers and make email inboxes all the more tricky to navigate, according to the U.S. National Security Agency cybersecurity director.

While much-debated AI tools will not automate or elevate every digital assault, phishing scheme or hunt for software exploits, NSA’s Rob Joyce said April 11, what it will do is “optimize” workflows and deception in an already fast-paced environment.

“Is it going to replace hackers and be this super-AI hacking? Certainly not in the near term,” Joyce said at an event hosted by the Center for Strategic and International Studies think tank. “But it will make the hackers that use AI much more effective, and they will operate better than those who don’t.”

U.S. officials consider mastery of AI critical to long-term international competitiveness — whether that’s in defense, finance or another sector. At least 685 AI projects, including several tied to major weapons systems, were underway at the Pentagon as of early 2021.

With enough training, the technology can handle menial tasks, such as answering questions and digging up contact information, or augment military operations by parsing tides of incoming information and facilitating exploration of areas deemed too dangerous for troops.

Something as sophisticated as OpenAI’s ChatGPT, Joyce said Tuesday, can be used to “craft very believable native-language English text” that can then be applied to phishing attacks or foreign influence campaigns. ChatGPT is capable of holding humanlike conversations with enough prompting, and it can provide content like poetry, essays or computer code within seconds.

“That’s going to be a problem,” Joyce said ....'

freeCodeCamp: A Guide to Prompt Engineering for ChatGPT

A very nicely done free guide from freeCodeCamp, for beginners and beyond.

How to Communicate with ChatGPT – A Guide to Prompt Engineering

By Hillary Nyakundi

AI has become an integral part of our lives and businesses. Over the past few years, we’ve seen the rapid rise of AI tools, and their impact on our day-to-day activities can't be ignored.

From virtual assistants to chatbots, AI just keeps getting smarter with more functionalities than before. This technology has changed the way we interact with both humans and machines.

As this evolution continues, there's a constant need to improve the communication between humans and machines. Fully understanding how to communicate effectively with AI can take us a step closer to unlocking its full potential.

This will not only enable us to extract relevant information but also allow us to gain new insights, making us more informed on different fields of interest. To get these advantages, understanding prompt engineering is essential.

As a growing developer, I spend the better part of my time on learning and implementation. In the process, I may need to do research, and it might take forever to find what I need browsing the net. But with new technologies like ChatGPT, I am able to easily get what I need as long as I ask the right questions.

Just like many others, figuring out the platform wasn't easy. It took me a while before I could understand how to communicate with the model. A key aspect is knowing how to structure and phrase the prompts. With this, you will be able to improve the quality and accuracy of the responses you get.

In this guide, you’ll learn what prompt engineering is and how you can use it to improve your communication with AI tools. In addition to this, we’ll also explore different categories of prompts and the design principles used to craft effective prompts.

By the end of this guide, you should be able to write good prompts and tailor them to your needs, facilitating a better interaction between you and the language models.

Let's get started!

What is Prompt Engineering?

Communication with AI is crucial and understanding how to communicate with it effectively is helpful. The entire communication process revolves around writing commands which are referred to as prompts.

With that said, we can easily define prompt engineering as the step-by-step process of creating inputs that determine the output to be generated by an AI language model.

High quality inputs will result in better output. Similarly, poorly defined prompts will lead to inaccurate responses or responses that might negatively impact the user. After all, "With great power comes great responsibility".

Prompt engineering cuts across different applications, including chatbots, content generation tools, language translation tools, and virtual assistants. But you might be wondering how AI technology generates its responses. Let’s find out in the next section.
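As a small illustration of the idea, a prompt can be assembled from labeled parts — a role, some context, the task, and a desired output format. The sketch below is a hypothetical example (the function name and parts are made up, not tied to any particular AI product):

```python
def build_prompt(role, context, task, output_format):
    """Assemble a structured prompt from labeled parts.

    Spelling out who the model should act as, what it should do,
    and how to format the answer tends to produce more predictable
    responses than a single vague sentence.
    """
    parts = [
        f"You are {role}.",
        f"Context: {context}",
        f"Task: {task}",
        f"Format your answer as {output_format}.",
    ]
    return "\n".join(parts)

prompt = build_prompt(
    role="an experienced Python tutor",
    context="the reader is new to programming",
    task="explain what a list comprehension is",
    output_format="two short paragraphs followed by one code example",
)
print(prompt)
```

The same template can then be reused across tasks by swapping out the parts, which is a simple way to keep prompts consistent.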

How do Language Models Work?

AI language models such as GPT-4 rely on deep learning algorithms and natural language processing (NLP) to fully understand human language.

All this is made possible through training that consists of large datasets. These datasets include articles, books, journals, reports, and so on. This helps the language models develop their language understanding capabilities. With the data, the model is fine-tuned in a way that enables it to respond to particular tasks assigned to it.

Depending on the language model, there are two main learning methods – supervised or unsupervised learning.

Supervised learning is where the model uses a labeled dataset where the data is already tagged with the right answers. In unsupervised learning, the model uses unlabeled datasets, meaning the model has to analyze the data for possible and accurate responses. Models like GPT-4 use the unsupervised learning technique to give responses.
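The distinction can be made concrete with a toy example in plain Python (deliberately far simpler than anything inside a real language model, and the data here is made up): a nearest-centroid classifier learns from labeled points, while a two-cluster split has to discover structure in unlabeled points on its own.

```python
# Supervised: labeled 1-D points -> learn one centroid per label.
labeled = [(1.0, "low"), (1.5, "low"), (8.0, "high"), (9.0, "high")]

centroids = {}
for label in {lab for _, lab in labeled}:
    values = [x for x, lab in labeled if lab == label]
    centroids[label] = sum(values) / len(values)

def classify(x):
    # Predict the label whose learned centroid is nearest to x.
    return min(centroids, key=lambda lab: abs(x - centroids[lab]))

# Unsupervised: the same points WITHOUT labels -> discover two
# clusters with a few iterations of 1-D k-means.
points = [1.0, 1.5, 8.0, 9.0]
c1, c2 = min(points), max(points)  # crude initialization
for _ in range(10):
    g1 = [p for p in points if abs(p - c1) <= abs(p - c2)]
    g2 = [p for p in points if abs(p - c1) > abs(p - c2)]
    c1, c2 = sum(g1) / len(g1), sum(g2) / len(g2)

print(classify(2.0))      # -> low
print(sorted([c1, c2]))   # -> [1.25, 8.5]
```

In the supervised half the "right answers" (the labels) are given up front; in the unsupervised half the same grouping has to emerge from the data alone.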

The model has the ability to generate text based on the prompt given. This process is referred to as language modeling, and it's the foundation of many AI language applications. Learn more about Supervised vs Unsupervised Learning from IBM.
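Language modeling in this sense — predicting the next token from the ones before it — can be illustrated with a crude bigram count model over a tiny made-up corpus. This is only a sketch of the concept; it is nothing like how GPT-4 works internally:

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count how often each word follows each other word (bigram counts).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_likely_next(word):
    # "Predict" the continuation seen most often in training.
    return following[word].most_common(1)[0][0]

print(most_likely_next("sat"))  # -> on
```

Neural language models replace these raw counts with learned probabilities conditioned on much longer contexts, but the prediction task is the same.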

At this point, you should understand that the performance of an AI language model mainly depends on the quality and quantity of the training data. Training the model with tons of data from different sources will help the model understand human language including grammar, syntax, and semantics  .... (much more) '

Friday, April 21, 2023

Free MIT Press Book on Deep Learning

 An MIT Press book   https://www.deeplearningbook.org/ 

Ian Goodfellow and Yoshua Bengio and Aaron Courville

The Deep Learning textbook is a resource intended to help students and practitioners enter the field of machine learning in general and deep learning in particular. The online version of the book is now complete and will remain available online for free.

The deep learning textbook can now be ordered on Amazon.

For up to date announcements, join our mailing list.

Citing the book

To cite this book, please use this bibtex entry:

    @book{Goodfellow-et-al-2016,
        title={Deep Learning},
        author={Ian Goodfellow and Yoshua Bengio and Aaron Courville},
        publisher={MIT Press},
        year={2016}
    }

To write your own document using our LaTeX style, math notation, or to copy our notation page, download our template files.

Errata in published editions

NVIDIA Text to Video


Video published on YouTube, Apr 19, 2023.

NVIDIA's NEW AI 'Text To Video' Takes the Industry By STORM! (NOW UNVEILED!)

