
Sunday, March 26, 2023

Machine Learning Street talk: Wolfram Announcement

I mentioned the announcement below about being able to plug WolframAlpha capabilities into ChatGPT. I found the talk here to be insightful about how language models can interact with computational models, and how this can be used to further improve AI. Technical, but it could be pointing us to yet more wonderful things.

Machine Learning Street Talk    https://youtu.be/z5WZhCBRDpU

132,828 views, Mar 23, 2023, Episode #110


ChatGPT + Wolfram: The Future of AI is Here!

Pod version: https://podcasters.spotify.com/pod/sh...

Support us! https://www.patreon.com/mlst 

MLST Discord: https://discord.gg/aNPkGUQtc5

Stephen's announcement post: https://writings.stephenwolfram.com/2... 

OpenAI's announcement post: https://openai.com/blog/chatgpt-plugins 

In an era of technology and innovation, few individuals have left as indelible a mark on the fabric of modern science as our esteemed guest, Dr. Stephen Wolfram.

Dr. Wolfram is a renowned polymath who has made significant contributions to the fields of physics, computer science, and mathematics. A prodigy, he earned a Ph.D. in theoretical physics from the California Institute of Technology by the age of 20 and became the youngest recipient of the prestigious MacArthur Fellowship at 21.

Wolfram's groundbreaking computational tool, Mathematica, was launched in 1988 and has become a cornerstone for researchers and innovators worldwide. In 2002, he published "A New Kind of Science," a paradigm-shifting work that explores the foundations of science through the lens of computational systems.

In 2009, Wolfram created Wolfram Alpha, a computational knowledge engine utilized by millions of users worldwide. His current focus is on the Wolfram Language, a powerful programming language designed to democratize access to cutting-edge technology.

Wolfram's numerous accolades include honorary doctorates and fellowships from prestigious institutions. As an influential thinker, Dr. Wolfram has dedicated his life to unraveling the mysteries of the universe and making computation accessible to all.

First of all... we have an announcement to make, you heard it FIRST here on MLST! ....

[00:00] Intro

[02:57] Big announcement! Wolfram + ChatGPT!

[05:33] What does it mean to understand?

[13:48] Feeding information back into the model

[20:09] Semantics and cognitive categories

[23:50] Navigating the ruliad

[31:39] Computational irreducibility

[38:43] Conceivability and interestingness

[43:43] Human intelligible sciences

More on Plugins and ChatGPT

Intro below; I saw the intro from Wolfram. Am inclined to think this is a big deal, like apps within searches. Will be exploring this in the coming weeks.

What are ChatGPT plugins? Here’s everything you need to know  from PocketNow

By SANUJ BHATIA, OpenAI has been making headlines since the end of last year, primarily due to the immense popularity of its flagship service, ChatGPT. This AI-powered chatbot has taken the world by storm and has proven to be a useful tool for many people. The company has been on a run of announcements, recently unveiling its GPT-4 model, and has now introduced ChatGPT plugins. In this article, we’ll explain what ChatGPT plugins are, what they can do for you, and what plugins are available right now.

What are ChatGPT Plugins?

One of the limitations of ChatGPT is that it can reply to a user's query based only on the training data it has, which is limited to 2021. This means that ChatGPT is unaware of the latest events or even those that occurred in the last year. Plugins will essentially allow ChatGPT to access the world of the internet and retrieve information from it.

Think of plugins as apps for ChatGPT

Plugins will allow the service to interact with live data from the web and specific websites. Think of plugins as apps for ChatGPT. Using these plugins, the AI chatbot will be able to perform a number of tasks that it has not been able to do until now. OpenAI says that plugins are like ChatGPT's "eyes and ears" and have the potential to turn the chatbot into a versatile interface for a variety of services and websites.

Using the plugins, ChatGPT will be able to call the APIs of compatible third-party services to perform actions. In a post, the company wrote: "For instance, if a user asks, 'Where should I stay in Paris for a couple nights?', the model may choose to call a hotel reservation plugin API, receive the API response, and generate a user-facing answer combining the API data and its natural language capabilities."
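That hotel example can be sketched as a toy dispatch loop. Everything here (the function names, the plugin registry, the keyword trigger) is hypothetical, a stand-in for how the model actually selects and calls a plugin from its description:

```python
# Toy sketch of the plugin flow: pick a plugin for the query, call its
# "API", then blend the response into a user-facing answer. All names
# and data are made up for illustration.

def hotel_reservation_api(city):
    # Stand-in for a real third-party hotel reservation API.
    catalog = {"Paris": ["Hotel A", "Hotel B"]}
    return catalog.get(city, [])

PLUGINS = {"hotels": hotel_reservation_api}

def choose_plugin(query):
    # The real model chooses based on each plugin's description; we
    # fake that decision with a keyword check.
    return "hotels" if "stay" in query.lower() else None

def answer(query):
    name = choose_plugin(query)
    if name is None:
        return "Answered from training data alone."
    # City extraction is elided; a real model would parse it from the query.
    results = PLUGINS[name]("Paris")
    return "For a couple of nights, consider: " + ", ".join(results)

print(answer("Where should I stay in Paris for a couple nights?"))
```

The point is the shape of the loop, not the details: the model routes a query to an external service, then rewrites the structured response in natural language.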

Gates Agrees about Importance of Current AI

 More agreement about AI advances.

Bill Gates: AI is most important tech advance in decades

By Tom Gerken   BBC Technology reporter

Microsoft co-founder Bill Gates says the development of artificial intelligence (AI) is the most important technological advance in decades.

In a blog post on Tuesday, he called it as fundamental as the creation of the microprocessor, the personal computer, the Internet, and the mobile phone.

"It will change the way people work, learn, travel, get health care, and communicate with each other," he said.

He was writing about the technology used by tools such as chatbot ChatGPT.

Developed by OpenAI, ChatGPT is an AI chatbot which is programmed to answer questions online using natural, human-like language.

The team behind it in January 2023 received a multibillion dollar investment from Microsoft - where Mr Gates still serves as an advisor.

But it is not the only AI-powered chatbot available, with Google recently introducing rival Bard. ... ' 

Metal Detecting from the Air

Recall a DOD project looking at a related problem. Hope this does not become an issue in greater Europe.

Metal-Detecting Drone Could Autonomously Find Land Mines

A drone with 5 degrees of freedom can safely detect buried objects from the air. By Evan Ackerman


This composite photo shows how a tricopter drone with a lidar and metal detector can fly around an obstacle close to the ground.

Metal detecting can be a fun hobby, or it can be a task to be completed in deadly earnest—if the buried treasure you’re searching for includes land mines and explosive remnants of war. This is an enormous, dangerous problem: Something like 12,000 square kilometers worldwide are essentially useless and uninhabitable because of the threat of buried explosives, and thousands and thousands of people are injured or killed every year.

While there are many different ways of detecting mines and explosives, none of them are particularly quick or easy. For obvious reasons, sending a human out into a minefield with a metal detector is not the safest way of doing things. So, instead, people send anything else that they possibly can, from machines that can smash through minefields with brute force to well-trained rats that take a more passive approach by sniffing out explosive chemicals.

Because the majority of mines are triggered by pressure or direct proximity, it may seem that a drone would be the ideal way to detect them nonexplosively. However, unless you’re only detecting over a perfectly flat surface (and perhaps not even then) your detector won’t be positioned ideally most of the time, and you might miss something, which is not a viable option for mine detection.

But now a novel combination of a metal detector and a drone with 5 degrees of freedom is under development at the Autonomous Systems Lab at ETH Zurich. It may provide a viable solution to remote land-mine detection, by using careful sensing and localization along with some twisting motors to keep the detector reliably close to the ground.  ...' 

A Legal Challenge to Algorithmic Recommendations

Regulation required.

A Legal Challenge to Algorithmic Recommendations

By Pamela Samuelson

Communications of the ACM, March 2023, Vol. 66 No. 3, Pages 32-34

Credit: Andrij Borys Associates

A young American student, Nohemi Gonzalez, was one of 149 people murdered in Paris in 2015 by ISIS terrorists. Her family blames Google for her death, claiming that YouTube's algorithms provided material support to the terrorist organization by recommending violent and radicalizing ISIS videos to its users based on their previous viewing histories. (The Gonzalez complaint levies the same charges against Twitter and Facebook, but to keep things simple, this column refers only to Google.)

Gonzalez' family sued Google for damages for this wrongful death. Both a trial and an appellate court agreed with Google that it could not be held liable for this tragic death under a federal immunity shield widely known as § 230 of the Communications Decency Act (CDA). However, the U.S. Supreme Court has decided to hear Gonzalez' appeal and consider whether YouTube's algorithmic recommendations are beyond the shelter of § 230.... '

AI Godfather says New Tech Could be Dangerous

Well-known AI scientist Geoffrey Hinton, often mentioned here, whom we followed closely in the 80s, says AI is real and conceivably threatens humanity. Equivalent to the invention of the wheel or electricity. Scary statements from him.

Artificial intelligence 'godfather' on AI possibly wiping out humanity: ‘It's not inconceivable’

Artificial Intelligence pioneer Geoffrey Hinton said the development of artificial intelligence is happening rapidly

By Andrea Vacchiano | Fox News

Geoffrey Hinton, a computer scientist who has been called "the godfather of artificial intelligence", says it is "not inconceivable" that AI may develop to the point where it poses a threat to humanity.

The computer scientist sat down with CBS News this week about his predictions for the advancement of AI. He compared the invention of AI to electricity or the wheel.

Hinton, who works at Google and the University of Toronto, said that the development of general purpose AI is progressing sooner than people may imagine. General purpose AI is artificial intelligence with several intended and unintended purposes, including speech recognition, answering questions and translation.

"Until quite recently, I thought it was going to be like 20 to 50 years before we have general purpose AI. And now I think it may be 20 years or less," Hinton predicted. Asked specifically the chances of AI "wiping out humanity," Hinton said, "I think it's not inconceivable. That's all I'll say." 

Geoffrey Hinton, chief scientific adviser at the Vector Institute, speaks during The International Economic Forum of the Americas (IEFA) Toronto Global Forum in Toronto, Ontario, Canada, on Thursday, Sept. 5, 2019. 


Artificial general intelligence refers to the potential ability of an intelligent agent to learn any mental task that a human can do. It has not been developed yet, and computer scientists are still figuring out if it is possible.  Hinton said it was plausible for computers to eventually gain the ability to create ideas to improve themselves. 

"That's an issue, right. We have to think hard about how you control that," Hinton said.   ... ' 

Saturday, March 25, 2023

Large Language Models: A Cognitive and Neuroscience Perspective

Irving does an excellent review and links to much work about LLMs, large language models, and other topics that are now much in the news. Below is an intro. I plan to read all the articles pointed to at the link. Is there a considerable weakness in the current directions? Implications of all this, the piece will provide.

A collection of observations, news and resources on the changing nature of innovation, technology, leadership, and other subjects.  By Irving Wladawsky-Berger  March 23, 2023

Large Language Models: A Cognitive and Neuroscience Perspective

Over the past few decades, powerful AI systems have matched or surpassed human levels of performance in a number of tasks such as image and speech recognition, skin cancer classification, breast cancer detection, and highly complex games like Go. These AI breakthroughs have been based on increasingly powerful and inexpensive computing technologies, innovative deep learning (DL) algorithms, and huge amounts of data on almost any subject. More recently, the advent of large language models (LLMs) is taking AI to the next level. And, for many technologists like me, LLMs and their associated chatbots have introduced us to the fascinating world of human language and cognition.

I recently learned the difference between form, communicative intent, meaning, and understanding from “Climbing towards NLU: On Meaning, Form, and Understanding in the Age of Data,” a 2020 paper by linguistics professors Emily Bender and Alexander Koller. These linguistic concepts helped me understand the authors’ argument that “in contrast to some current hype, meaning cannot be learned from form alone. This means that even large language models such as BERT do not learn meaning; they learn some reflection of meaning into the linguistic form which is very useful in applications.”

A few weeks ago, I came across another interesting paper, “Dissociating Language and Thought in Large Language Models: a Cognitive Perspective,” published in January, 2023 by principal authors linguist Kyle Mahowald and cognitive neuroscientist Anna Ivanova and four additional co-authors. The paper nicely explains how the study of human language, cognition and neuroscience sheds light on the potential capabilities of LLMs and chatbots. Let me briefly discuss what I learned.

“Today’s large language models (LLMs) routinely generate coherent, grammatical and seemingly meaningful paragraphs of text,” said the paper’s abstract. “This achievement has led to speculation that these networks are — or will soon become — thinking machines, capable of performing tasks that require abstract knowledge and reasoning. Here, we review the capabilities of LLMs by considering their performance on two different aspects of language use: formal linguistic competence, which includes knowledge of rules and patterns of a given language, and functional linguistic competence, a host of cognitive abilities required for language understanding and use in the real world.”

The authors point out that there’s a tight relationship between language and thought in humans. When we hear or read a sentence, we typically assume that it was produced by a rational person based on their real world knowledge, critical thinking, and reasoning abilities. We generally view other people’s statements not just as a reflection of their linguistic skills, but as a window into their mind. .... '

Capturing What is Said

The value of good speech to text. 

Capturing What is Said,  By Esther Shein

Commissioned by CACM Staff, March 23, 2023


New AI-enabled capabilities for speech-to-text systems include taking actions based on a transcript, prompting someone to ask a follow-up question, and summarizing a conversation at the end of a call, said Christina McAllister at Forrester Research.

ChatGPT and generative artificial intelligence (AI) may be having a moment, but don't underestimate the value of speech-to-text transcription, sometimes referred to as automatic speech recognition (ASR) software, which continues to improve.

ASR technology converts human speech into text using machine learning and AI. There are two types: synchronous transcription, which is typically used in chatbots, and asynchronous, where transcription occurs after the fact to capture customer/agent conversations, notes Cobus Greyling, chief evangelist at HumanFirst, which makes a productivity suite for natural language data.

ASR made some waves in recent months with the announcement of Whisper from OpenAI, the organization that created ChatGPT. Whisper was trained on 680,000 hours of multilingual and supervised data collected from the Web. OpenAI claims that large and diverse dataset has improved the accuracy of the text it produces; the company says Whisper also can transcribe text from speech in multiple languages.

"What that means is that it's extremely accurate—right off the top—without much tuning or training,'' says Christina McAllister, a senior analyst at research and advisory company Forrester Research. "The large language model aspect, which is based on huge amounts of data, is what's new and is the most innovative aspect of the ASR market today,'' she says.

Because of its ability to transcribe meetings and interviews more efficiently and accurately, one of the broadest enterprise use cases for speech-to-text is in customer call centers. The next phase in the development of ASR is to use artificial intelligence to analyze call center conversations for customer sentiment and to validate compliance in regulated industries, according to Annette Jump, a vice president analyst at Gartner.

The benefits of ASR in the call center context are its ability to identify customer problems early and to improve customer satisfaction by resolving issues sooner, says Jump.

Other use cases include generating closed captions for movies, television, video games, and other forms of media. ASR is widely used in healthcare by physicians to convert dictated clinical notes into electronic medical records.

Speech vendors typically leverage a third-party ASR engine so they don't have to build their own, McAllister says. That frees them up so they can "do all the rest of their magic from the transcript point forward,'' she says.

Some of the new AI capabilities for speech-to-text systems include taking actions based on a transcript, prompting someone when it's appropriate to ask a follow-up question, and summarizing a conversation at the end of a call, McAllister says.

One frequently used AI-powered speech-to-text transcription service is Otter.ai, which has added capabilities aimed at improving meetings, including integration with collaboration tools such as Zoom and Microsoft Outlook.  ... ' 

Gordon Moore, Intel co-founder and creator of Moore's Law, dies aged 94

Gordon Moore, Intel co-founder and creator of Moore's Law, dies aged 94

Published 14 hours ago in the BBC

Silicon Valley pioneer and philanthropist Gordon Moore has died aged 94 in Hawaii.

Mr Moore started working on semiconductors in the 1950s and co-founded the Intel Corporation.

He famously predicted that computer processing power would double every year - later revised to every two years - an insight known as Moore's Law.

That "law" became the bedrock for the computer processor industry and influenced the PC revolution.

Two decades before the computer revolution began, Moore wrote in a paper that integrated circuits would lead "to such wonders as home computers - or at least terminals connected to a central computer - automatic controls for automobiles, and personal portable communications equipment".

He observed, in the 1965 article, that thanks to technological improvements the number of transistors on microchips had roughly doubled every year since integrated circuits were invented a few years earlier.

His prediction that this would continue became known as Moore's Law, and it helped push chipmakers to target their research to make this come true.

After Moore's article was published, memory chips became more efficient and less expensive at an exponential rate. ...   '
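The cadence in that prediction compounds fast. As a rough illustration, taking the Intel 4004's roughly 2,300 transistors in 1971 as a starting point and assuming a clean two-year doubling (which real chips only approximated):

```python
# Back-of-the-envelope Moore's Law: transistor count doubling every
# two years from a 2,300-transistor chip in 1971. Illustrative only;
# actual parts deviated from this idealized curve.
def transistors(year, base_year=1971, base_count=2300, doubling_years=2):
    return int(base_count * 2 ** ((year - base_year) / doubling_years))

for y in (1971, 1981, 1991, 2001):
    print(y, transistors(y))
```

A two-year doubling multiplies the count about 32x per decade, which is why the trend pushed chipmakers so hard: falling one cycle behind meant a 2x deficit.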

Would a TikTok Ban Cripple Influencer Marketing?

Been following this recently on YouTube; novelty is key. Much more by marketing experts at the link.

Would a TikTok Ban Cripple Influencer Marketing?

Mar 21, 2023,  by Tom Ryan

The Biden administration last week threatened to ban TikTok in the U.S. if the app’s Chinese owners refuse to sell their stakes, raising questions about whether competitors Instagram, Snap and YouTube can fill the void for the influencer community.

For the third year in a row, TikTok held the record as the most downloaded app, and nearly 40 percent of TikTok’s advertising audience is aged between 18 and 24.

Influencer Marketing Hub’s “The State of Influencer Marketing 2023” report found that TikTok is now the most popular influencer marketing channel (utilized by 56 percent of brands using influencer marketing), surpassing Instagram (51 percent) for the first time. Facebook (42 percent) and YouTube (38 percent) follow behind.

Thomas Walters, Europe CEO and co-founder of creator agency Billion Dollar Boy, told Campaign that factors such as TikTok’s personalized algorithm, lack of emphasis on follower count and its shift towards “spontaneous, raw, unfiltered content” are all helping drive the platform’s popularity.

In a recent blog entry, Kolsquare, the influencer marketing platform, said that while facing challenges with conversion, TikTok is “the clear leader when it comes to driving trends and engagement amongst the youngest users of social media” and has an edge in driving awareness over other platforms.

Both the Trump and Biden administrations have said that the app poses a national security threat amid concerns China could tap user data to spread misinformation. On March 23, testimony by TikTok CEO Shou Zi Chew before the House Energy and Commerce committee may raise the rhetoric.

Some content creators are seeking to diversify to Instagram Reels or YouTube Shorts, but others doubt the TikTok ban will take effect, according to The Wall Street Journal.

In terms of potential purchasers for the app, a New York Times article speculated that many buyers could not afford TikTok (valuation at $50 billion or more) or would not want to deal with the antitrust scrutiny of an acquisition.

Many brands, according to Advertising Age, continue to put marketing dollars behind TikTok, given the likely delays or challenges that would come with enacting an outright ban. Becca Millstein, CEO of tinned fish brand Fishwife, told Adage, “We believe that TikTok can be a powerful source of organic discovery and hope that we have the opportunity to utilize it as such.”  ... '

Yes, ChatGPT Is Coming for Your Office Job

Have now seen a number of posts brainstorming this, and considering what I have seen, there are many reasonable ideas, particularly easy to test with even current systems. Only overstepping regulation could get in the way, and even that could be reasonably addressed. Likely GPT-based systems will be seen everywhere. Agree the disruption may not be what we expect.

Yes, ChatGPT Is Coming for Your Office Job,     By Wired,  March 23, 2023

With companies like Microsoft, Slack, and Salesforce adding ChatGPT or similar AI tools to their products, we are likely to see the impact on office life soon enough.

Anyone who has spent a few minutes playing with ChatGPT will understand the worries and hopes such technology generates when it comes to white-collar work. The chatbot is able to answer all manner of queries—from coding problems to legal conundrums to historical questions—with remarkable eloquence.

Assuming companies can overcome the problematic way these models tend to "hallucinate" incorrect information, it isn't hard to imagine they might step in for customer support agents, legal clerks, or history tutors. Such expectations are fueled by studies and media reports claiming that ChatGPT can get a passing grade on some legal, medical, and business exams. With companies like Microsoft, Slack, and Salesforce adding ChatGPT or similar AI tools to their products, we are likely to see the impact on office life soon enough.

A couple of research papers posted online this week suggest that ChatGPT and similar chatbots may be very disruptive—but not necessarily in the ways you expect.

From Wired

View Full Article  

Google Glass is Over

Too bad; I tried it and it was nicely done. Is the category over for now? There has to be a place for this.

Google Glass is finally shattered   in Popsci

10 years after its debut, Google finally shutters the headset for good.  By Andrew Paul

What happened to Google Glass?


Back in 2013, Google attempted to get way ahead of the augmented reality game with its Google Glass headset. Although first billed as a tech game changer, the $1,500 price tag and privacy concerns made it a wholesale commercial flop. But despite some belated success among medical professionals and first responders via Google’s 2017 Glass Enterprise revamp, the much-memed product never really broke through to the masses. Yesterday, Google officially announced the demise of its Glass product line.

According to a company statement, headsets are no longer available for purchase, while support for Glass Enterprise Edition will continue through mid-September of this year. “Thank you for over a decade of innovation and partnership,” writes Google, in the brief end to one of the more infamous modern tech rollouts.

[Related: Doctors are wearing the new Google Glass while seeing patients.]

Initially resembling frameless eyewear, Google’s headset included a small, rectangular, transparent glass (hence the name) above the wearer’s right eye. A miniature onboard computer system beamed bits of information through the prism. Users could then utilize features like map directions, photo and video capabilities, and weather forecasts in front of them while maintaining a clear vision of their surrounding, real-world environment. Future iterations resembled protective eyewear designs, and were often utilized in industries such as factory manufacturing.

The announcement likely comes as no surprise to most—alongside a “Wait, Google still made those?” from many others—as the Big Tech giant’s last edition of Glass Enterprise came out almost four years ago, in 2019, with a $999 price tag. Since then, Google’s chief rivals at Meta and Apple have poured massive amounts of cash into their own respective AR projects. In 2021, Meta collaborated with Ray-Ban to release camera-embedded sunglasses, albeit with no augmented display features, and (until recently) was going all in on pushing a “metaverse” experience via its Meta Quest headset line. Meanwhile, Apple is widely reported to be on the cusp of rolling out its own wearable AR/VR product line....' 

Security in Cyberspace

Considerable piece in the Fraunhofer magazine; intro below.

More security in cyberspace

Web special Fraunhofer magazine 2.2022

The invasion of Ukraine shows that fighting is no longer confined to the battlefield, but also takes place in virtual space - with highly professional hacker attacks and targeted disinformation. How can Germany become better able to defend itself?

Months before Putin gave his troops the marching orders, the war on the Internet began. Hackers have been preparing the Russian invasion since at least December of last year. This is the conclusion of Prof. Haya Shulman, who studied the cyber attacks on Ukrainian infrastructure. Shulman heads the "Cybersecurity Analytics and Defences" department at the Fraunhofer Institute for Secure Information Technology SIT in Darmstadt, coordinates the "Analytics Based Cybersecurity" research area at the National Research Center for Applied Cybersecurity ATHENE and holds a chair at the Goethe University in Frankfurt am Main.

One of her findings: the malware that caused the systems of the communications satellite KA-SAT to fail on the day Russia invaded Ukraine had been smuggled in months earlier. KA-SAT provides broadband internet to customers across Europe and is used by the Ukrainian Army for emergency communications. “The goal of the hackers was to stop the communication – and they succeeded.

It took a month for the damage to be repaired, at least for the most part,” says Shulman. The Russian attack on KA-SAT was also not without consequences in Germany and all of Central Europe: 5,800 wind turbines could no longer be maintained and controlled remotely. Systems in remote locations that are connected to the Internet via a satellite connection were affected. They continued to supply electricity. However, technical problems could only be identified and rectified on site.  ...... ' 

Researchers Develop Soft Robot That Shifts from Land to Sea with Ease

Mobile robotics in multiple domains.


Researchers Develop Soft Robot That Shifts from Land to Sea with Ease

By Carnegie Mellon University, March 21, 2023

Actuators allow the reconfigurable robot to curl its body to swiftly roll away.

Credit: Morphing Matter Lab

Soft robots developed by researchers at Carnegie Mellon University can transition from walking to swimming or crawling to rolling, shifts found in most animals.

The researchers created a bistable actuator using three-dimensionally printed soft rubber with alloy springs that contract in response to electrical currents, allowing the actuator to bend. The robot remains in the new shape until it reverts to its previous configuration in response to another electrical charge. Only a hundred milliseconds of electrical charge is needed to change shape.

The researchers created robots that can walk and swim, crawl and jump, and crawl and roll.

"Our bistable actuator is simple, stable and durable, and lays the foundation for future work on dynamic, reconfigurable soft robotics," says Dinesh K. Patel, a post-doctoral fellow in CMU's Morphing Matter Lab.

From Carnegie Mellon University

View Full Article  

Quantum Computers May Finally Have Practical Use

 A means to generate truly random numbers for cryptology


Quantum Computers May Finally Have Practical Use

By New Scientist, March 24, 2023

Google's Sycamore quantum computer. 

Quantum computers such as Google’s Sycamore could be put to use creating numbers that are guaranteed to be truly random.

University of Texas at Austin (UT Austin) researchers have developed a method for certifying that quantum computers generate truly random numbers without having to inspect the process.

This involves asking a quantum computer to complete a test in which a series of pseudorandom operations are run on its qubits and measuring the outputs, which act as truly random numbers.

If the resulting outputs cannot be simulated on a classical computer, they are confirmed to be the result of quantum processes, truly random, and suitable for cybersecurity applications.

Said UT Austin's Scott Aaronson, "The huge advantage with this proposal is that you can actually do it with devices that currently exist."
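The "cannot be simulated classically" check is, in spirit, a cross-entropy test: samples from the device should land disproportionately on bitstrings the ideal circuit makes likely. Here is a toy, purely classical illustration of that scoring; the distribution and numbers are made up, and a real protocol uses a quantum circuit's ideal output probabilities rather than this stand-in:

```python
import random

def xeb_score(samples, ideal_probs):
    # Linear cross-entropy benchmark: D * (mean ideal probability of
    # the observed bitstrings) - 1. Close to 0 for a uniform sampler;
    # higher when samples favor the ideal distribution's heavy strings.
    d = len(ideal_probs)
    return d * sum(ideal_probs[s] for s in samples) / len(samples) - 1

random.seed(0)
d = 8  # 2**3 outcomes, a toy 3-qubit "circuit"
weights = [random.expovariate(1.0) for _ in range(d)]
ideal = [w / sum(weights) for w in weights]  # made-up ideal distribution

faithful = random.choices(range(d), weights=ideal, k=20000)
uniform = [random.randrange(d) for _ in range(20000)]

print(xeb_score(faithful, ideal))  # clearly above the uniform score
print(xeb_score(uniform, ideal))   # close to zero
```

A sampler that only the ideal (quantum) process could imitate scores high; any cheap classical mimic that can't track the circuit's output distribution scores near zero, which is what certifies the outputs as genuinely quantum, and hence usable as random bits.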

From New Scientist

View Full Article - May Require Paid Subscription   

Friday, March 24, 2023

Plugins to be Added to ChatGPT, to Create a Platform

The announcement relates to the previous post on Wolfram plugins, but promises yet more.

OpenAI turns ChatGPT into a platform overnight with addition of plugins  in Venturebeat

By Sharon Goldman (@sharongoldman) and Michael Nuñez (@MichaelFNunez), March 23, 2023

OpenAI booth at NeurIPS 2019 in Vancouver, Canada

OpenAI today announced its support of new third-party plugins for ChatGPT, and it already has Twitter buzzing about the company’s potential platform play.

In a blog post, the company stated that the plugins are “tools designed specifically for language models with safety as a core principle, and help ChatGPT access up-to-date information, run computations, or use third-party services.”  ... 

A sign of OpenAI’s accelerating dominance

The announcement was quickly received by the public as a signal of OpenAI‘s ambitions to further its dominance by turning ChatGPT into a developer platform.

“OpenAI is seeing ChatGPT as a platform play,” tweeted Marco Mascorro, cofounder of Fellow AI.


And @gregmushen tweeted: “I think the introduction of plugins to ChatGPT is a threat to the App Store. It creates a new platform with new monetization methods.”

In sharing the announcement, OpenAI CEO Sam Altman tweeted: “We are starting our rollout of ChatGPT plugins. you can install plugins to help with a wide variety of tasks. we are excited to see what developers create!”    ... ' 

What Does it Mean to be Smart in an Age of AI

Excerpt from McKinsey  Complete video at the link.

Author Talks: In the ‘age of AI,’ what does it mean to be smart?

March 16, 2023 | Interview

As artificial intelligence gets better at predicting human behavior, a business psychologist encourages people to strengthen the uniquely human skills that machine learning has yet to tap.

In this edition of Author Talks, McKinsey Global Publishing’s Raju Narisetti chats with Tomas Chamorro-Premuzic about his new book, I, Human: AI, Automation, and the Quest to Reclaim What Makes Us Unique (Harvard Business Review Press, February 2023). Chamorro-Premuzic explains why some AI algorithms model humanity as a simple species, how attention has become commoditized, and why the right questions are now more valuable than the right answers. An edited version of the conversation follows.

Why did you write this, your 12th book, now?


I’m a professor of business psychology at Columbia University and UCL [University College London] and the chief innovation officer at ManpowerGroup. I, Human: AI, Automation, and the Quest to Reclaim What Makes Us Unique is a book about the behavioral consequences or impact of artificial intelligence, including the dark side of human behavior and what we should do to upgrade ourselves as a species.

The book is written at a time that, in my view, could only be described as the AI age. Humans have always relied on technological inventiveness and innovation to shape their cultural and social evolution, and I think there can be very little doubt that the definitive technology of today is artificial intelligence, or AI.

Now, even the wider public is talking about things like ChatGPT and other conversational interfaces, and the tech giants are described mostly as data companies and as algorithmic prediction businesses.

The book was very much written in the midst of the AI age, or under the influence of AI, because I wrote the bulk of this at the height of the pandemic when we had very little physical interaction or contact with other people outside of our nuclear families. This means I was heavily influenced by hyperconnectedness and the datafication of me. Everything I did was being datafied and subjected to the predictive powers of AI during 2020 and 2021.

I can’t say that there won’t be a better era to read the book, but it certainly wouldn’t have had the same connotation and impact if we had published it five or ten years ago.

Haven’t humans always blamed technology for every problem they face?

There is a common tendency for people to overreact to things that are novel, whether in a good way or in a bad way, and technologies are a very good example of this.

Perhaps the best example is how, when the printed newspaper first scaled up and was productized, people feared that humans would never meet in person again because there would be no information or even gossip to exchange if all the news was in written form. Likewise, from the 1950s onward, people worried that television would lead to fewer intellectual activities, and I don't think they were wrong, because reading habits have declined since mass TV was introduced.

What I tried to do with this book is not be at one extreme or the other. What’s important to me is to not miss the opportunity to highlight the behavioral impact and consequences that we have already seen artificial intelligence have on us. This is not a book about AI, but about humans in the AI age.  ... ' 

Wolfram can Plug into GPT for new SuperPowers

Saw this hinted at a while back. Wolfram Alpha has had unique analytic capabilities for some time; now you can link these to ChatGPT via a plugin. Apparently there is the ability to add other plugins as well. Thinking about some possibilities. The how-to is hinted at below; note current restrictions with OpenAI updates. A good intro to Wolfram capabilities, which were used by some in our enterprise, mostly for component-testing analytics. Wolfram also gives you the bonus of some good integrated analytics visualization. The article below contains some very interesting examples, and in addition describes how the neural nets of large language models relate to the neural nets Wolfram has been discussing for years. All this is quite exciting, but you may have to wait until plugins are available in ChatGPT. 

ChatGPT Gets Its “Wolfram Superpowers”!

March 23, 2023

To enable the functionality described here, select and install the Wolfram plugin from within ChatGPT.

Note that this capability is so far available only to some ChatGPT Plus users; for more information, see OpenAI’s announcement.

In Just Two and a Half Months…

Early in January I wrote about the possibility of connecting ChatGPT to Wolfram|Alpha. And today—just two and a half months later—I’m excited to announce that it’s happened! Thanks to some heroic software engineering by our team and by OpenAI, ChatGPT can now call on Wolfram|Alpha—and Wolfram Language as well—to give it what we might think of as “computational superpowers”. It’s still very early days for all of this, but it’s already very impressive—and one can begin to see how amazingly powerful (and perhaps even revolutionary) what we can call “ChatGPT + Wolfram” can be.

Back in January, I made the point that, as an LLM neural net, ChatGPT—for all its remarkable prowess in textually generating material “like” what it’s read from the web, etc.—can’t itself be expected to do actual nontrivial computations, or to systematically produce correct (rather than just “looks roughly right”) data, etc. But when it’s connected to the Wolfram plugin it can do these things. So here’s my (very simple) first example from January, but now done by ChatGPT with “Wolfram superpowers” installed:

How far is it from Tokyo to Chicago?

It’s a correct result (which in January it wasn’t)—found by actual computation. And here’s a bonus: immediate visualization:

Show the path

How did this work? Under the hood, ChatGPT is formulating a query for Wolfram|Alpha—then sending it to Wolfram|Alpha for computation, and then “deciding what to say” based on reading the results it got back. You can see this back and forth by clicking the “Used Wolfram” box (and by looking at this you can check that ChatGPT didn’t “make anything up”):  .... '
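That back-and-forth can be caricatured in a few lines. Everything here is a stand-in: `wolfram_alpha_query` mocks the HTTP call to the plugin endpoint, and the great-circle arithmetic stands in for Wolfram|Alpha's actual computation.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    # great-circle distance: the kind of "actual computation" the plugin runs
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def wolfram_alpha_query(query):
    # mock of the plugin endpoint ChatGPT would call over HTTP
    if query == "distance from Tokyo to Chicago":
        return {"km": round(haversine_km(35.68, 139.69, 41.88, -87.63))}
    raise ValueError("unknown query")

def answer(question):
    # the model formulates a query, calls the tool, then phrases the result
    query = "distance from Tokyo to Chicago"  # what ChatGPT would extract
    result = wolfram_alpha_query(query)
    return f"Tokyo to Chicago is about {result['km']} km along the great-circle path."

print(answer("How far is it from Tokyo to Chicago?"))
```

The point of the real architecture is the same as in this cartoon: the language model never does the arithmetic itself; it only decides what to ask and how to phrase the structured answer it gets back.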

Thursday, March 23, 2023

Security Issues in AI Emerge

There seems to have been an urge to get some of these systems out before securing them.

ChatGPT bug leaked users' conversation histories  in the BBC

OpenAI launched ChatGPT last November    By Ben Derico       BBC News, San Francisco

A ChatGPT glitch allowed some users to see the titles of other users' conversations, the artificial intelligence chatbot's boss has said.

On social media sites Reddit and Twitter, users had shared images of chat histories that they said were not theirs.

OpenAI CEO Sam Altman said the company feels "awful", but the "significant" error had now been fixed.   Many users, however, remain concerned about privacy on the platform.

Millions of people have used ChatGPT to draft messages, write songs and even code since it launched in November of last year.  Each conversation with the chatbot is stored in the user's chat history bar where it can be revisited later.

Is the world prepared for the coming AI storm?

But as early as Monday, users began to see conversations appear in their history that they said they hadn't had with the chatbot. One user on Reddit shared a photo of their chat history including titles like "Chinese Socialism Development", as well as conversations in Mandarin.  

On Tuesday, the company told Bloomberg that it had briefly disabled the chatbot late on Monday to fix the error.  They also said that users had not been able to access the actual chats.

OpenAI's chief executive tweeted that there would be a "technical postmortem" soon. But the error has drawn concern from users who fear their private information could be released through the tool. The glitch seemed to indicate that OpenAI has access to user chats.

The company's privacy policy does say that user data, such as prompts and responses, may be used to continue training the model. But that data is only used after personally identifiable information has been removed. 

The blunder also comes just a day after Google unveiled its chatbot Bard to a group of beta testers and journalists. Google and Microsoft, a major investor in OpenAI, have been jostling for control of the burgeoning market for artificial intelligence tools. But the pace of new product updates and releases has many concerned missteps like these could be harmful or have unintended consequences.  ... ' 

ML Physics Platform NVIDIA Modulus is Open

New things out of NVIDIA resources. Fascinating; what are some sample uses?

Machine Learning Platform NVIDIA Modulus Is Now Open Source

By Bhoomi Gadhia, Ram Cherukuri and Kristen Perez

Physics-informed machine learning (physics-ML) is transforming high-performance computing (HPC) simulation workflows across disciplines, including computational fluid dynamics, structural mechanics, and computational chemistry. Because of its broad applications, physics-ML is well suited for modeling physical systems and deploying digital twins across industries ranging from manufacturing to climate sciences.

NVIDIA Modulus is a state-of-the-art physics-ML platform that blends physics with deep learning training data to build high-fidelity, parameterized surrogate models with near-real-time latency. The surrogate models built using NVIDIA Modulus help a wide range of solutions including weather forecasting, reducing power plant greenhouse gasses, and accelerating clean energy transitions.

NVIDIA Modulus customer success stories are proving the platform’s incredible utility across industries. However, physics-ML is a relatively new field in the deep learning arena, with significant challenges both at the research level as well as at the application front. This is due to the unique requirements needed to satisfy physics-ML rules:

The need for a deep learning model to obey the governing principles of a physical system.

The need for new deep learning model architectures for a specific class of problems, such as those that can satisfy fluid mechanics laws.

The need for generalizable model architectures and algorithms that can serve across different applications.
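The first requirement, a model that obeys the governing principles of a physical system, is typically enforced through the training loss. A minimal, purely illustrative sketch (not Modulus's API) for the ODE u'(x) = -u(x) with u(0) = 1:

```python
import math

def physics_loss(u_fn, n=100, h=0.01):
    # loss = boundary-condition error + mean squared equation residual:
    # the basic shape of a physics-informed training objective
    boundary = (u_fn(0.0) - 1.0) ** 2
    residual = 0.0
    for i in range(n):
        x = i * h
        du = (u_fn(x + h) - u_fn(x)) / h   # finite-difference estimate of u'(x)
        residual += (du + u_fn(x)) ** 2    # penalize violations of u' = -u
    return boundary + residual / n

good = lambda x: math.exp(-x)  # the exact solution of u' = -u, u(0) = 1
bad = lambda x: 1.0 - x        # a plausible-looking but wrong candidate

print(physics_loss(good) < physics_loss(bad))  # True
```

In a platform like Modulus, `u_fn` would be a neural network and this residual term would be minimized by training, alongside any available simulation or measurement data.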

These challenges require innovation and research across several domains. More importantly, these problems require a strong collaboration between respective domains, industries, and deep learning experts. This level of collaboration requires tools and technologies that remove barriers between researchers, teams, and even industries, to enable the community to build on each other’s work.

Because simulations are critical to these disciplines and the industries that employ them, there is a demand for building and demonstrating confidence that AI can meet and surpass the current simulation approaches. This requires transparency so research can identify the limitations and provide breakthroughs to enable more transformative technologies.

Now, to facilitate the collaboration, transparency, and accountability needed, NVIDIA Modulus has become an open-source platform available for physics-ML.

Physics-ML open-source workflows

This new open-source environment provides significant benefits for AI developers and domain experts across industries in several ways:

Collaboration: An open-source workflow enables you to collaborate more easily with colleagues and share your work with a wider community. By making data, code, and methods openly accessible, you can work together more effectively to address complex physics-ML questions.

Transparency: Open-source workflows can increase the transparency and reproducibility of physics-ML research. By publishing code and data, you can enable other researchers to verify and replicate your results, which can help to build greater trust in the scientific findings.

Innovation: Open-source workflows can facilitate innovation by enabling you to build on other researchers’ work more easily. By providing access to a shared repository of tools and techniques, open-source workflows can help to accelerate the pace of discovery and enhance the quality of research outputs.

Accessibility: Open-source workflows can help to make research more accessible to a wider range of stakeholders, including drug development managers, national lab directors, policymakers, journalists, and members of the public. By providing clear, accessible information about physics-based modeling research, open-source workflows can help build greater awareness and understanding.

Overall, an open-source workflow can help AI developers and engineering and science domain experts to work more collaboratively, transparently, innovatively, and with greater accessibility to enhance the impact and relevance of their research.

Accessing NVIDIA Modulus open-source software

NVIDIA Modulus is available as open-source software (OSS) under the simple Apache 2.0 license. .... ' 

AI being used to cherry-pick organs for transplant


By Duncan MacRae | March 2, 2023 | MarketingTech  in ArtificialIntelligence

Duncan is an award-winning editor with more than 20 years experience in journalism. Having launched his tech journalism career as editor of Arabian Computer News in Dubai, he has since edited an array of tech and digital marketing publications, including Computer Business Review, TechWeekEurope, Figaro Digital, Digit and Marketing Gazette.

A new method to assess the quality of organs for donation is set to revolutionise the transplant system – and it could help save lives and tens of millions of pounds.

The National Institute for Health and Care Research (NIHR) is contributing more than £1 million in funding to develop the new technology, which is known as Organ Quality Assessment (OrQA). It works in the same way as Artificial Intelligence-based facial recognition to evaluate the quality of an organ.

It is estimated the technology could result in up to 200 more patients receiving kidney transplants and 100 more receiving liver transplants a year in the UK.

Colin Wilson, transplant surgeon at Newcastle upon Tyne Hospitals NHS Foundation Trust and co-lead of the project, said: “Transplantation is the best treatment for patients with organ failure, but unfortunately some organs can’t be used due to concerns they won’t function properly once transplanted.

“The software we have developed ‘scores’ the quality of the organ and aims to support surgeons to assess if the organ is healthy enough to be transplanted.  “Our ultimate hope is that OrQA will result in more patients receiving life-saving transplants and enable them to lead healthier, longer lives.”

Professor Hassan Ugail, director of the Centre for Visual Computing at the University of Bradford, whose team is working on image analysis as part of the research, said: “Currently, when an organ becomes available, it is assessed by a surgical team by sight, which means, occasionally, organs will be deemed not suitable for transplant.

“We are developing a deep machine learning algorithm which will be trained using thousands of images of human organs to assess images of donor organs more effectively than what the human eye can see. “This will ultimately mean a surgeon could take a photo of the donated organ, upload it to OrQA and get an immediate answer as to how best to use the donated organ.”

There are currently nearly 7,000 patients awaiting organ transplant in the UK. An organ can only survive out of the body for a limited time. In most cases, only one journey from the donor hospital to the recipient hospital is possible. This means it is essential that the right decision is made quickly.

The project is being supported by NHS Blood and Transplant (NHSBT), the Quality in Organ Donation biobank and an NIHR Blood and Transplant Research Unit to deliver research for the NHS. It also involves academics from the Universities of Oxford and New South Wales.

Professor Derek Manas, medical director of NHSBT Organ Donation and Transplantation, said: “This is an exciting development in technological infrastructure that, once validated, will enable surgeons and transplant clinicians to make more informed decisions about organ usage and help to close the gap between those patients waiting for and those receiving lifesaving organs. We at NHSBT are extremely committed to making this exciting venture a success.”

Health Minister Neil O’Brien said: “Technology has the ability to revolutionise the way we care for people and this cutting edge technology will improve organ transplant services. Developed here in the UK, this pioneering new method could save hundreds of lives and ensure the best use of donated organs.

“I encourage everyone to register their organ donation decision. Share it with your family so your loved ones can follow your wishes and hopefully save others.”

Chief executive of the NIHR Professor, Lucy Chappell, said: “Funded by our Invention for Innovation Program, this deep machine learning algorithm aims to increase the number of liver and kidney donor organs suitable for transplantation. This is another example of how AI can enhance our healthcare system and make it more efficient. Once clinically validated and tested, cutting edge technology such as this holds the real promise of saving and improving lives.”

‘Proof of concept’ work has been carried out in liver, kidney and pancreas transplantation as well as at an advanced stage of pre-clinical testing in liver and kidney. ... '

Wave of Stealthy China Cyberattacks Hits U.S., Private Networks, Google Says


By The Wall Street Journal, March 22, 2023

China has routinely denied hacking into businesses or governments in other countries.

Said Charles Carmakal, Mandiant’s chief technology officer, “There is a lot of intrusion activity going undetected. We think the problem is a lot bigger than we know today.”

Researchers in Google's Mandiant division found that state-sponsored hackers in China have been using techniques that allow them to evade common cybersecurity tools and spy on government and business networks for years without being detected.

The researchers said hackers are compromising devices on the edge of the network and targeting software from VMware Inc. or Citrix Systems Inc., among others, which often run on computers without antivirus or endpoint detection software.

Mandiant's Charles Carmakal said the attacks, which generally exploit previously undetected flaws, likely are more widespread than previously known.

Carmakal noted this cyberattack method "is a lot harder for us to investigate, and it is certainly exponentially harder for victims to discover these intrusions on their own. Even with our hunting techniques, it's hard for them to find it."

From The Wall Street Journal

View Full Article - 

Persistent Computing and Virtual Worlds

Worked on a number of virtual world projects, but never considered them persistent computing. A useful thought.

Yu Yuan on Building a Persistent Virtual World

The IEEE Standards Association president discusses technology for the metaverse, by Edd Gent

Despite tech giants including Meta, Microsoft, and Nvidia investing billions of dollars in the development of the metaverse, it is still little more than a fantasy. Making it a reality is likely to require breakthroughs in a range of sectors such as storage, modeling, and communication.

To spur progress in the advancement of those technologies, the IEEE Standards Association has launched the Persistent Computing for Metaverse initiative. As part of the IEEE’s Industry Connections Program, it will bring together experts from both industry and academia to help map out the innovations that will be needed to make the metaverse a reality.

Although disparate virtual-reality experiences exist today, the metaverse represents a vision of an interconnected and always-on virtual world that can host thousands, if not millions, of people simultaneously. The ultimate goal is for the virtual world to become so realistic that it is almost indistinguishable from the real one.

Today’s technology is a long way from making that possible, says Yu Yuan, president of the IEEE Standards Association. The Institute spoke with Yuan to find out more about the initiative and the key challenges that need to be overcome. His answers have been edited for clarity.

The Institute: What is persistent computing?

Yu Yuan: I have been working in virtual reality and multimedia for more than 20 years, I just didn’t call my work metaverse. After metaverse became a buzzword, I asked myself, ‘What’s the difference between metaverse and VR?’ My answer is: persistence, or the ability to leave traces in a virtual world.

Persistent computing refers to the combination of all the technologies needed to support the development and operation of a persistent virtual world. In other words, a metaverse. There are different kinds of VR experiences, but many of them are one-time events. Similar to how video games work, every time a user logs in, the entire virtual world resets. But users in the metaverse can leave traces. For example, they can permanently change the virtual world by destroying a wall or building a new house. Those changes have to be long-lasting so there will be a meaningful virtual society or meaningful economy in that virtual world.

What are the key components that are required to make persistent computing possible?

Yuan: The first is storage. In most of today’s video games, users can destroy a building, only for it to be restored the next time the user logs in to the game. But in a persistent virtual world the current status of the virtual world needs to be stored constantly. Users can create or destroy something in that world and the next time they log in, those changes will still be there. These kinds of things have to be properly stored—which means a very large amount of data needs to be stored.   ... ' 
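The persistence Yuan describes can be sketched as an event log: every change a user makes is appended durably, and any later session replays the log to restore the same world. The class and field names below are invented for illustration.

```python
import json

class PersistentWorld:
    def __init__(self):
        self.log = []          # durable record (would live in real storage)
        self.objects = {}      # current world state

    def apply(self, event):
        self.log.append(json.dumps(event))  # persist the change first
        if event["op"] == "build":
            self.objects[event["id"]] = event["kind"]
        elif event["op"] == "destroy":
            self.objects.pop(event["id"], None)

    @classmethod
    def replay(cls, log):
        # a "new session" rebuilds the same world from the stored log
        world = cls()
        for line in log:
            world.apply(json.loads(line))
        return world

w = PersistentWorld()
w.apply({"op": "build", "id": "house-1", "kind": "house"})
w.apply({"op": "destroy", "id": "wall-7"})
restored = PersistentWorld.replay(w.log)
print(restored.objects)  # {'house-1': 'house'}
```

Unlike the video-game reset Yuan contrasts against, nothing here is ever discarded, which is exactly why storage volume becomes the first challenge.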

Bard is Out and Being Tested in Changed World of High Expectations

Google has announced its model and it has been getting some criticism, but as they say, it is still just a test. There are lots of models jumping out there. Google seems to have moved early without enough testing. Expectations are high. Also, it competes directly with the very profitable Google Search. Use with caution.

Try Bard and share your feedback

Mar 21, 2023  in the Google Blog.

We’re starting to open access to Bard, an early experiment that lets you collaborate with generative AI. We're beginning with the U.S. and the U.K., and will expand to more countries and languages over time.

Sissie Hsiao, VP, Product

Eli Collins, VP, Research

Animation of text: “Meet Bard, an early experiment by Google.” Followed by sentences explaining what Bard can do, like draft a packing list for a fishing and camping trip.

Today we’re starting to open access to Bard, an early experiment that lets you collaborate with generative AI. This follows our announcements from last week as we continue to bring helpful AI experiences to people, businesses and communities.

You can use Bard to boost your productivity, accelerate your ideas and fuel your curiosity. You might ask Bard to give you tips to reach your goal of reading more books this year, explain quantum physics in simple terms or spark your creativity by outlining a blog post. We’ve learned a lot so far by testing Bard, and the next critical step in improving it is to get feedback from more people.

Bard can help you brainstorm some ways to read more books this year.

About Bard

Bard is powered by a research large language model (LLM), specifically a lightweight and optimized version of LaMDA, and will be updated with newer, more capable models over time. It’s grounded in Google's understanding of quality information. You can think of an LLM as a prediction engine. When given a prompt, it generates a response by selecting, one word at a time, from words that are likely to come next. Picking the most probable choice every time wouldn’t lead to very creative responses, so there’s some flexibility factored in. We continue to see that the more people use them, the better LLMs get at predicting what responses might be helpful.
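The "flexibility factored in" that the post mentions is commonly implemented as temperature sampling over the model's next-word scores. A toy sketch; the words and scores below are made up, and real models choose among tens of thousands of vocabulary items:

```python
import math
import random

def sample_next_word(logits, temperature=0.8, rng=random):
    # softmax with temperature: low T nearly always picks the top word,
    # high T flattens the distribution for more "creative" choices
    scaled = {w: score / temperature for w, score in logits.items()}
    m = max(scaled.values())
    exp = {w: math.exp(s - m) for w, s in scaled.items()}  # stable softmax
    z = sum(exp.values())
    words, probs = zip(*((w, e / z) for w, e in exp.items()))
    return rng.choices(words, weights=probs, k=1)[0]

# toy next-word scores after the prompt "the cat sat on the"
logits = {"mat": 3.0, "sofa": 2.0, "moon": 0.5}
random.seed(1)
print(sample_next_word(logits, temperature=0.7))
print(sample_next_word(logits, temperature=5.0))  # flatter, less predictable
```

Picking the argmax every time (temperature near zero) would give the repetitive responses the post warns about; the randomness is the design choice, not a bug.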

While LLMs are an exciting technology, they’re not without their faults. For instance, because they learn from a wide range of information that reflects real-world biases and stereotypes, those sometimes show up in their outputs. And they can provide inaccurate, misleading or false information while presenting it confidently. For example, when asked to share a couple suggestions for easy indoor plants, Bard convincingly presented ideas…but it got some things wrong, like the scientific name for the ZZ plant.  .... ' 

AI Voice Scan

Vocal scams commonly used.

AI Voice Scam, an example of some of the dangers involved.  

They Thought Loved Ones Were Calling for Help. It Was an AI Scam  By The Washington Post, March 14, 2023

In 2022, impostor scams were the second-most-popular racket in America, according to data from the Federal Trade Commission.

Technology is making it easier and cheaper for bad actors to mimic voices, convincing people, often the elderly, that their loved ones are in distress.

Credit: Elena Lacey/The Washington Post

More sophisticated artificial intelligence (AI) tools are being used to replicate a person's voice.

Fraudsters increasingly are using such tools for impostor scams, which often target the elderly, making them believe loved ones are in trouble and in need of quick cash.

University of California, Berkeley's Hany Farid said AI voice-generating software can recreate the pitch, timbre, and individual sounds of a person's voice using a short audio sample.

Said Farid, "If you have a Facebook page ... or if you've recorded a TikTok and your voice is in there for 30 seconds, people can clone your voice."

Software from the startup ElevenLabs, for example, allows users to turn a short audio sample into a synthetically generated voice using a text-to-speech tool. The software is free or costs $5 to $330 per month, depending on the amount of audio generated.

From The Washington Post

View Full Article - May Require Paid Subscription  

Wednesday, March 22, 2023

GitHub unveils Copilot X: The Future of AI-powered Software Development

Now here is a future that has some real value ... 

GitHub unveils Copilot X: The future of AI-powered software development

Michael Nuñez in Venturebeat

@MichaelFNunez, March 22, 2023


GitHub, the leading platform for software development collaboration, has announced today the next step in AI-driven software development with the introduction of Copilot X. As a pioneer in the use of generative AI for code completion, GitHub is now taking its partnership with OpenAI further by adopting the latest GPT-4 model and expanding Copilot’s capabilities.

Launched less than two years ago, GitHub Copilot has already made a significant impact on the world of software development. GitHub reported today that the AI-powered tool, built using OpenAI’s Codex model, currently writes 46% of the code on the platform and has helped developers code up to 55% faster. By auto-completing comments and code, Copilot serves as an AI pair programmer that keeps developers focused and productive.

GitHub Copilot X, the upgraded version being released today, represents a bold vision for the future of AI-powered software development. With an emphasis on accessibility, the upgraded Copilot will now be available throughout the entire development life cycle, going beyond mere code completion. By incorporating chat and voice features, developers can communicate with Copilot more naturally. Additionally, Copilot X will be integrated into pull requests, command lines and documentation, providing instant answers to questions about projects.

The transformative potential of AI in software development is on full display with GitHub Copilot X. By reducing boilerplate and manual tasks, developers can focus on more complex and innovative work. This new level of productivity will allow developers to concentrate on the bigger picture, fostering innovation and accelerating human progress.  ... ' 

Large Language Models Also Work for Protein Structures

We asked this question long ago; has it now been usefully answered? A very big deal if so.


Large language models also work for protein structures

Training on raw protein sequences allows the AI to make inferences about structure.

JOHN TIMMER - 3/16/2023, 3:01 PM

The success of ChatGPT and its competitors is based on what's termed emergent behaviors. These systems, called large language models (LLMs), weren't trained to output natural-sounding language (or effective malware); they were simply tasked with tracking the statistics of word usage. But, given a large enough training set of language samples and a sufficiently complex neural network, their training resulted in an internal representation that "understood" English usage and a large compendium of facts. Their complex behavior emerged from a far simpler training.

A team at Meta has now reasoned that this sort of emergent understanding shouldn't be limited to languages. So it has trained an LLM on the statistics of the appearance of amino acids within proteins and used the system's internal representation of what it learned to extract information about the structure of those proteins. The result is not quite as good as the best competing AI systems for predicting protein structures, but it's considerably faster and still getting better.  
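As a cartoon of the training signal involved: even simple residue statistics learned from raw sequences can fill in a masked position. Meta's actual model is a transformer LLM, not the bigram counts below, and the sequences here are invented.

```python
from collections import Counter, defaultdict

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard residues

def train_bigrams(sequences):
    # count which residue tends to follow which, across all sequences
    counts = defaultdict(Counter)
    for seq in sequences:
        for prev, cur in zip(seq, seq[1:]):
            counts[prev][cur] += 1
    return counts

def predict_masked(counts, prev_residue):
    # most likely residue to follow prev_residue under the learned statistics
    return counts[prev_residue].most_common(1)[0][0]

seqs = ["MKTAYIAKQR", "MKTLLVAAKQ", "MKTAFGHKQR"]  # made-up sequences
model = train_bigrams(seqs)
print(predict_masked(model, "M"))  # every training sequence starts "MK..."
```

The leap in the Meta work is that, at LLM scale, the internal representation learned from this kind of fill-in-the-blank objective turns out to encode enough about residue co-occurrence to support structure prediction.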

LLMs: Not just for language

The first thing you need to know to understand this work is that, while the term "language" in the name "LLM" refers to their original development for language-processing tasks, these models can potentially be used for a variety of other purposes. In fact, the term "Large" is far more informative, in that all LLMs have a large number of nodes—the "neurons" in a neural network—and an even larger number of values that describe the weights of the connections among those nodes. .... ' 

UK Says AI is Getting Very Important

In part pushed by recent advances?

AI News

Editorial: UK puts AI at the centre of its Budget

By Ryan Daws | March 16, 2023 | TechForge Media

Categories: Artificial Intelligence, Development, Enterprise, Industries, Legislation & Government, Machine Learning, Quantum Computing,

Ryan is a senior editor at TechForge Media with over a decade of experience covering the latest technology and interviewing leading industry figures. He can often be sighted at tech conferences with a strong coffee in one hand and a laptop in the other. If it's geeky, he’s probably into it. Find him on Twitter (@Gadget_Ry) or Mastodon (@gadgetry@techhub.social)

British Chancellor Jeremy Hunt announced the country’s Spring Budget this week and supporting the AI industry was at the centre.

The UK is Europe’s AI leader. Indeed, behind the US and China, the country’s tech sector overall has the third-highest amount of VC investment in the world – more than Germany and France combined – and has produced more than double the number of $1 billion tech firms than any other European country.

Gerard Grech, CEO of Tech Nation, said:

“As a nation uniquely positioned between two economic powerhouses, the US and the EU, we must harness innovative regulation that will enable us to propel ourselves as an international hub and leader for AI, quantum computing, and deep tech.

This is a critical step towards creating a distinctive, value-driven tech ecosystem in the UK, setting us apart from other tech hubs.”

To support British startups, an ‘AI Sandbox’ was announced by the chancellor. The sandbox features a number of initiatives designed to encourage AI research and investment.

Among them is a prize pot containing millions of pounds. £1 million will be up for grabs every year over the next decade for the best AI innovations created by teams and individuals.

Ludovico Lugnani, Technology Solicitor at BDB Pitmans, comments:

“Following yesterday’s news of OpenAI’s launch of its upgraded GPT-4 chatbot, the Budget’s announcement as to the creation of an AI sandbox offers a promising outlook for the UK to speed up the arrival of AI products to market.

As part of this, particular emphasis should be placed on providing effective guidance as to the implications of copyright law on generative AI applications following the recent claim by Getty Images against Stability AI over breach of copyright.”

Elsewhere, £2.5 billion is being ploughed into advancing quantum computing. The powerful machines will enable a literal “quantum leap” in AI.

“The power that AI’s complex algorithms need can be provided by quantum computing,” the chancellor told the Commons.

£900 million is also being invested to create an exascale supercomputer that will be several times more powerful than the country’s biggest computers and advance not just AI research, but also science, healthcare, defense, weather modelling, and more.

“[The supercomputer] should be a huge boost to the UK’s ability to support cutting-edge research in areas requiring complex modelling and simulations, such as climate change, pharmaceutical development and hi-tech engineering,” commented Nick White, Partner at law firm Charles Russell Speechlys.

Only one exascale computer is currently known to exist. The computer, known as Frontier, is housed at the Oak Ridge National Laboratory in Tennessee, United States.

Other relevant announcements in the Spring Budget are targeted less at the AI industry specifically but aim to solidify the UK’s ranking as the second-best country after the US to invest and launch a business.

Under the ‘Full Expensing’ plans, companies investing in R&D and IT equipment to boost growth will benefit. Every pound a company spends on new IT equipment and machinery can be deducted in full from taxable profits.  .... '

Replacing Coders Already?


Startups Are Already Using GPT-4 to Spend Less on Human Coders

GPT-4 "saves a lot of time and a lot of money, obviously, because we haven't had to hire additional resources."   By Chloe Xiang, March 20, 2023, 9:00am



Since GPT-4 was released last week, many users have noticed its advanced coding abilities. GPT-4, OpenAI’s latest version of the large language model that ChatGPT is built on, has been able to code games like Pong and make simple apps after being given prompts written in conversational English. Naturally, this has led to widespread fear from a number of computer science students and software developers who are afraid that their jobs will soon be rendered obsolete by AI. ....' 

Google Bard

Got into Google Bard from the waitlist in just a few hours. See bard.google.com. Here is how they describe it. Perhaps naturally, each interaction also includes the ability to use a Google search. Says it uses the web as a source of knowledge, as well as books, articles, and other accessible resources.


Google Bard is Google's experimental, conversational, AI chat service. It is meant to function similarly to ChatGPT, with the biggest difference being that Google's service will pull its information from the web.

Google Bard was named after the Bards of Celtic cultures, who were professional story-tellers, musicians, and oral historians. Google Bard is designed to be a modern-day equivalent, using artificial intelligence to generate creative text content, like poems, code, scripts, musical pieces, email, letters, etc.

Google Bard is still under development, but it has learned to perform many kinds of tasks, including:

- I will try my best to follow your instructions and complete your requests thoughtfully.

- I will use my knowledge to answer your questions in a comprehensive and informative way, even if they are open ended, challenging, or strange.

- I will generate different creative text formats of text content, like poems, code, scripts, musical pieces, email, letters, etc. I will try my best to fulfill all your requirements ....

Seems well done, and does seem to allow for some stranger questions. They warn you it's new and still a work in progress: experimental, can make mistakes, etc. It seems now most everyone wants such a capability added to their world of tools.

More Fears of TikTok

TikTok: UK ministers banned from using Chinese-owned app on government phones

By Chas Geiger & Zoe Kleinman, BBC News

British government ministers have been banned from using Chinese-owned social media app TikTok on their work phones and devices on security grounds.

The government fears sensitive data held on official phones could be accessed by the Chinese government.

Cabinet Minister Oliver Dowden said the ban was a "precautionary" move but would come into effect immediately.

TikTok has strongly denied allegations that it hands users' data to the Chinese government.

Theo Bertram, the app's vice-president of government relations and public policy in Europe, told the BBC it believed the decision was based "more on geopolitics than anything else".

"We asked to be judged not on the fears that people have, but on the facts," he added.

The Chinese embassy in London said the move was motivated by politics "rather than facts" and would "undermine the confidence of the international community in the UK's business environment".

Mr Dowden said he would not advise the public against using TikTok, but they should always "consider each social media platform's data policies before downloading and using them".

Prime Minister Rishi Sunak had been under pressure from senior MPs to follow the US and the European Union in barring the video-sharing app from official government devices.

But government departments - and individual ministers - have embraced TikTok as a way of getting their message out to younger people.

Use of the app has exploded in recent years, with 3.5 billion downloads worldwide.

Its success comes from how easy it is to record short videos with music and fun filters, but also from its algorithm which is good at serving up videos which appeal to individual users.

It is able to do this because it gathers a lot of information on users - including their age, location, device and even their typing rhythms - while its cookies track their activity elsewhere on the internet.

US-based social media sites also do this but TikTok's Chinese parent company ByteDance has faced claims of being influenced by Beijing.  ... '

Robot Security Guards

Inevitable, and increasing in smart applications.

Robots are your new office security guard, in Axios

Jennifer A. Kingson

See Cobalt Robotics.

They stand five feet tall and glide at three miles per hour, patrolling office buildings for everything from broken fire alarms to suspicious activity: Security robots are starting to replace human guards in workplaces and beyond.

Why it matters: Despite some hiccups, robots armed with sensors and artificial intelligence are making inroads in diverse fields — from window washing and pizza making to bartending and caring for the elderly.

Driving the news: Lower costs mean it's now substantially cheaper for companies to use robots than traditional guards for 24/7 security.

Robots can check in visitors and issue badges, respond to alarms, report incidents, and see things security cameras can't.

Security robots don't get bored, tired, or distracted by their phones — and it's safer for them to confront intruders and other hazards.

Two-way communications systems allow employees to report problems or request human help by talking to the robot.

By the numbers: Using a robot guard vs. a human can save a company $79,000 per year, according to a recent report by Forrester Research.

What they're saying: "All this money has really poured into service robotics because of the money that has gone into autonomous vehicles," says Mike LeBlanc, president and COO of Cobalt Robotics, which is leading the charge to populate offices with non-human security guards.

At left, a Cobalt Robotics unit prowls hallways to scout for problems and allows people to report concerns. At right, an employee uses the robot's tablet to communicate with a specialist at a remote call center. Photos courtesy of Cobalt Robotics.

How it works: Cobalt's robots are built to the specifications of a particular building's ramps and elevators.   They roam hallways looking for possible problems — like unusual motion at night or a door that's been propped open — and report back to a human-staffed call center.

"They have fabric, so they're designed to look like a piece of high-end office furniture," LeBlanc tells Axios. "And they have a tablet on the front that allows people to interact with our 24/7 specialists at any given time."

"People can tap on the screen of the robot, a person will come up on the screen, and they'll be able to ask them what's going on," LeBlanc said. "They can say, 'There's a leak or spill over here,' or 'There's someone in the office who's making me uncomfortable.'"

Case study: Food delivery startup DoorDash is using Cobalt robots across its corporate sites, for everything from COVID-19 temperature checks to routine security patrols, alarm responses, and security escort services. ...  

Humans get X-Ray Vision in Augmented Reality

We've seen this in previous worlds we experimented with. The concept is interesting.

MIT researchers invented an augmented reality headset that gives humans X-ray vision. The invention, dubbed X-AR, combines wireless sensing with computer vision to enable users to see hidden items. X-AR can help users find missing items and guide them toward these items for retrieval. This new technology has many applications in retail, warehousing, manufacturing, smart homes, and more.

For more information, check out:

Website: https://Xar.media.mit.edu

Paper: https://www.mit.edu/~fadel/papers/XAR-paper.pdf

Instagram: @mit_sk_lab (https://instagram.com/mit_sk_lab?igshid=YmMyMTA2M2Y=)

Authors: Tara Boroushaki, Maisy Lam, Laura Dodds, Aline Eid, Fadel Adib

Video Production: Maisy Lam, Jimmy Day

UI design: Maisy Lam, Yuechen Wang

Funding: NSF, Sloan Foundation, MIT Media Lab  .... 

Tuesday, March 21, 2023


Google's Bard Is Here (Waitlist)

Yet another chatty AI, worth examining. Built from the LaMDA language model.

-  You can sign up to try Bard at bard.google.com. We'll begin rolling out access in the U.S. and U.K. today and expanding over time to more countries and languages. Until next time, Bard out! -

I am now on the waitlist, more will follow here.

Bard: Google's rival to ChatGPT launches for over-18s


By Zoe Kleinman, Technology editor

Google will begin rolling out its AI chatbot Bard today, but it will only be available to certain users and they will have to be over the age of 18.

Unlike its viral rival ChatGPT, it can access up-to-date information from the internet and has a "Google it" button which accesses search.

It also namechecks its sources for facts, such as Wikipedia.

But Google warned Bard would have "limitations" and said it might share misinformation and display bias.

This is because it "learns" from real-world information, in which those biases currently exist - meaning it is possible for stereotypes and false information to show up in its responses.

How do chatbots work?

AI chatbots are programmed to answer questions online using natural, human-like language.

They can write anything from speeches and marketing copy to computer code and student essays.

When ChatGPT launched in November 2022, it had more than one million users within a week, said OpenAI, the firm behind it.

Microsoft has invested billions of dollars in it, incorporating the product into its search engine Bing last month.

It has also unveiled plans to bring a version of the tech to its office apps including Word, Excel and Powerpoint.

Google has been a slower and more cautious runner in the generative AI race with its version, Bard, which launches in the US and UK to begin with. Users will have to register to try it out.

Bard is a descendant of an earlier language model of Google's called Lamda, which was never fully released to the public. It did, however, attract a lot of attention when one of the engineers who worked on it claimed its answers were so compelling that he believed it was sentient. Google denied the claims and he was fired.

Digital Twins in Consumer Package Goods

In the process of looking for in-process applications.

Digital Twins & CPG Manufacturing Transformation

By Justin Honaman, Worldwide Head of Consumer Products – Food & Beverage, AWS

Good intro piece

dig·i·tal: of, relating to, or utilizing devices constructed or working by the methods or principles of electronics; also: characterized by electronic and especially computerized technology; composed of data in the form of binary digits

twin: something containing or consisting of two matching or corresponding parts.

For the last few years, the term Digital Twin has been at the top of the buzzword list for manufacturers and industrial companies, often meaning different things in different production environments. Most of these organizations have developed Digital Twins to improve operations and product offerings and deliver more business value to their end customers. The concept of digital twins is not new and dates back to the early days of the space program. The Apollo 13 mission in 1970 is an early use case of using twins to model the state of the damaged spacecraft and bring the astronaut crew safely back to Earth.

In recent years, the core ideas of Digital Twin have been commonly attributed to Michael Grieves of the Digital Twin Institute, who developed the concept throughout the 2000s, and NASA’s John Vickers, who coined the term Digital Twin in 2010. Customers today are seeking to deploy Digital Twins across a broad range of applications, including the design of complex equipment, manufacturing operations, preventive maintenance, precision medicine, digital agriculture, city planning, 3D immersive environments, and most recently, metaverse-type applications.

Digital Twin is often applied broadly to describe any virtual model, including engineering simulations, CAD models, IoT dashboards, or gaming environments. But Digital Twins are more than just a marketing term; they are a technology that has only become feasible in the past few years with the convergence of at-scale computing, modeling methods, and IoT connectivity.

Let’s first define Digital Twin and how to integrate existing modeling methods into Digital Twins.

What Is a Digital Twin?

A Digital Twin is a living digital representation of an individual physical system that is dynamically updated with data to mimic the true structure, state, and behavior of the physical system, to drive business outcomes.  .... (much more at the link) 
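
That definition can be captured in a minimal sketch: one object per physical asset, updated as telemetry arrives, and queried to drive a business outcome. The asset name and sensor fields below are hypothetical, illustrative only.

```python
from dataclasses import dataclass, field

@dataclass
class DigitalTwin:
    """A live digital record of one physical asset, per the definition above."""
    asset_id: str
    state: dict = field(default_factory=dict)

    def ingest(self, telemetry: dict) -> None:
        """Dynamically update the twin's state from a batch of sensor readings."""
        self.state.update(telemetry)

    def is_overheating(self, limit_c: float = 80.0) -> bool:
        """Example business outcome: flag a temperature excursion."""
        return self.state.get("temp_c", 0.0) > limit_c

twin = DigitalTwin("filler-line-3")       # hypothetical packaging-line asset
twin.ingest({"temp_c": 72.5, "rpm": 1150})
twin.ingest({"temp_c": 84.0})             # newer reading supersedes the old
print(twin.is_overheating())              # True
```

A production twin would add history, physics or ML models, and IoT connectivity, but the same mimic-and-query loop is the core.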

BBC advises staff to delete TikTok from work phones

Security addressed  at the BBC


By Zoe Kleinman, Andre Rhoden-Paul & Chris Vallance

BBC News

The BBC has advised staff to delete TikTok from corporate phones because of privacy and security fears.

The BBC seems to be the first UK media organisation to issue the guidance - and only the second in the world after Denmark's public service broadcaster.

The BBC said it would continue to use the platform for editorial and marketing purposes for now. TikTok has consistently denied any wrongdoing.   The app has been banned on government phones in the UK and elsewhere.

Countries imposing bans include the US, Canada, New Zealand and Belgium, while the same applies to anyone working at the European Commission.

However, it is still permitted on personal devices.

The big fear is that data harvested by the platform from corporate phones could be shared with the Chinese government by TikTok's parent company ByteDance, because its headquarters are in Beijing.

TikTok says the bans are based on "fundamental misconceptions".

ByteDance employees were found to have tracked the locations of a handful of Western journalists in 2022. The company says they were fired.  Alicia Kearns, who chairs the Foreign Affairs Committee, was asked for her view on the BBC's decision, and tweeted: "If protecting sources isn't a priority, that's a major problem."

'Encouraging TikTok, not banning it'

Dominic Ponsford, editor-in-chief of journalism industry trade publication the Press Gazette, said it would be interesting to see what other media organisations decide to do.

He told the BBC: "I suspect everyone's chief technical officer will be looking at this very closely.

"Until now, news organisations have been very keen to use TikTok, because it's been one of the fastest-growing social media platforms for news publishers over the last year, and it's been a good source of audience and traffic.

"So most of the talk in the news media has been around encouraging TikTok rather than banning it."

BBC taking security 'incredibly seriously'

The short-video platform is known for its viral dance crazes, sketches and filters and is hugely popular among young people, with more than 3.5 billion downloads worldwide.  Channel 4 News presenter Krishnan Guru-Murthy tweeted in reaction to the decision: "BBC News making big play for views on TikTok but now the BBC is telling staff not to have it on their phones".

A BBC spokesperson said it took the safety and security of its systems, data and people "incredibly seriously".

TikTok banned from official UK government phones

How much data does TikTok collect?.... '

Use cases of GPT-4

 Directions and examples.....

MIT Technology Review


How AI experts are using GPT-4

Plus: Chinese tech giant Baidu just released its answer to ChatGPT.

By Melissa Heikkilä


This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

WOW, last week was intense. Several leading AI companies had major product releases. Google said it was giving developers access to its AI language models, and AI startup Anthropic unveiled its AI assistant Claude. But one announcement outshined them all: OpenAI’s new multimodal large language model, GPT-4. My colleague William Douglas Heaven got an exclusive preview. Read about his initial impressions.  

Unlike OpenAI’s viral hit ChatGPT, which is freely accessible to the general public, GPT-4 is currently accessible only to developers. It’s still early days for the tech, and it’ll take a while for it to feed through into new products and services. Still, people are already testing its capabilities out in the open. Here are my top picks of the fun ways they’re doing that.


In an example that went viral on Twitter, Jackson Greathouse Fall, a brand designer, asked GPT-4 to make as much money as possible with an initial budget of $100. Fall said he acted as a “human liaison” and bought anything the computer program told him to. 

GPT-4 suggested he set up an affiliate marketing site to make money by promoting links to other products (in this instance, eco-friendly ones). Fall then asked GPT-4 to come up with prompts that would allow him to create a logo using OpenAI image-generating AI system DALL-E 2. Fall also asked GPT-4 to generate content and allocate money for social media advertising. 

The stunt attracted lots of attention from people on social media wanting to invest in his GPT-4-inspired marketing business, and Fall ended up with $1,378.84 cash on hand. This is obviously a publicity stunt, but it’s also a cool example of how the AI system can be used to help people come up with ideas. 


Big tech companies really want you to use AI at work. This is probably the way most people will experience and play around with the new technology. Microsoft wants you to use GPT-4 in its Office suite to summarize documents and help with PowerPoint presentations—just as we predicted in January, which already seems like eons ago. 

Not so coincidentally, Google announced it will embed similar AI tech in its office products, including Google Docs and Gmail. That will help people draft emails, proofread texts, and generate images for presentations.  

Health care

I spoke with Nikhil Buduma and Mike Ng, the cofounders of Ambience Health, which is funded by OpenAI. The startup uses GPT-4 to generate medical documentation based on provider-patient conversations. Their pitch is that it will alleviate doctors’ workloads by removing tedious bits of the job, such as data entry. 

Buduma says GPT-4 is much better at following instructions than its predecessors. But it’s still unclear how well it will fare in a domain like health care, where accuracy really matters. OpenAI says it has improved some of the flaws that AI language models are known to have, but GPT-4 is still not completely free of them. It makes stuff up and presents falsehoods confidently as facts. It’s still biased. That’s why the only way to deploy these models safely is to make sure human experts are steering them and correcting their mistakes, says Ng.

Writing code

Arvind Narayanan, a computer science professor at Princeton University, says it took him less than 10 minutes to get GPT-4 to generate code that converts URLs to citations. 

Narayanan says he’s been testing AI tools for text generation, image generation, and code generation, and that he finds code generation to be the most useful application. “I think the benefit of LLM [large language model] code generation is both time saved and psychological,” he tweeted. 
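
Narayanan's actual GPT-4-generated code isn't shown, but a URL-to-citation converter of this kind might look like the sketch below, given a page title supplied by the caller (a fuller version would fetch the page and read its title tag).

```python
from datetime import date
from urllib.parse import urlparse

def url_to_citation(url: str, title: str, accessed: date) -> str:
    """Format a URL and title as a simple web citation string."""
    site = urlparse(url).netloc.removeprefix("www.")  # e.g. "example.com"
    return f'"{title}." {site}. {url} (accessed {accessed:%B %d, %Y}).'

print(url_to_citation(
    "https://www.example.com/post/llms",   # hypothetical URL
    "LLMs: Not Just for Language",
    date(2023, 3, 26),
))
```

The point of the anecdote stands either way: this kind of mundane glue code is exactly what the models now produce in minutes.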

In a demo, OpenAI cofounder Greg Brockman used GPT-4 to create a website based on a very simple image of a design he drew on a napkin. As Narayanan points out, this is exactly where the power of these AI systems lies: automating mundane, low-stakes, yet time-consuming task... '